DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS The embodiments described in this specification are intended to clearly explain the spirit of the invention to those skilled in the art. Therefore, the present invention is not limited by the embodiments, and the scope of the present invention should be interpreted as encompassing modifications and variations that do not depart from the spirit of the invention. Terms used in this specification are selected from among general terms that are currently widely used, in consideration of their functions in the present invention, and may have meanings that vary depending on the intentions of those skilled in the art, customs in the field of the art, the emergence of new technologies, or the like. If a specific term is used with a specific meaning, the meaning of that term will be described explicitly. Accordingly, the terms used in this specification should not be understood as simple names of components but should be interpreted on the basis of their actual meaning and the whole context of the present specification. The accompanying drawings are provided to facilitate the explanation of the present invention, and shapes in the drawings may be exaggerated for convenience of explanation, so the present invention should not be limited by the drawings. When it is determined that detailed descriptions of well-known elements or functions related to the present invention may obscure the subject matter of the present invention, such descriptions will be omitted as necessary. According to one embodiment, there is provided a method of sharing sensor data of a first device with a second device, the method including: obtaining, by a controller of the first device, a set of point data from at least one sensor located in the first device, wherein the set of point data includes a first subset of point data representing at least a portion of a first object; generating, by the controller, first property data of the first subset of point data based on the first subset of point data, wherein the first property data includes class information of the first subset of point data; generating, by the controller, sharing data including at least a portion of the first subset of point data and the first property data; and transmitting, by the controller, the sharing data to the second device; wherein, if a class of the first object included in the class information is a class in which personal information must be protected, the content of the sharing data includes privacy protection data in which at least a portion of the first subset of point data is processed such that personal information of the first object is not identifiable by the second device. In some embodiments, the class in which personal information must be protected includes one of a class related to a human, a class related to an identification number of a vehicle or a building, or a class related to an ID. In some embodiments, the class information of the first subset of point data includes at least one of information about a type of the first object, information about a type of a portion of the first object, or information about a situation of a region related to the first object. 
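For illustration only, the class-dependent branch described above can be sketched in code. The following Python fragment is a minimal sketch under assumed names (the class labels, the `PRIVACY_SENSITIVE_CLASSES` set, and the skeletonization step are illustrative assumptions and not part of the disclosure): when the class of the detected object requires privacy protection, the raw subset of point data is withheld and replaced by coarse shape information before the sharing data is assembled.

```python
# Illustrative sketch only; names and values are assumptions, not the patented method.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]  # (x, y, z) point from a LiDAR scan

# Classes for which personal information must be protected (assumed examples).
PRIVACY_SENSITIVE_CLASSES = {"human", "license_plate", "building_id", "identification"}

@dataclass
class SharingData:
    class_info: str
    points: Optional[List[Point]]              # raw subset of point data (may be withheld)
    privacy_protected: bool = False
    shape_info: Optional[List[Point]] = None   # skeleton/template replacing raw points

def skeletonize(points: List[Point], max_points: int = 8) -> List[Point]:
    """Reduce a subset of point data to a coarse, non-identifying skeleton."""
    step = max(1, len(points) // max_points)
    return points[::step][:max_points]

def build_sharing_data(points: List[Point], class_info: str) -> SharingData:
    """Assemble sharing data; withhold identifiable raw points for protected classes."""
    if class_info in PRIVACY_SENSITIVE_CLASSES:
        return SharingData(class_info=class_info, points=None,
                           privacy_protected=True, shape_info=skeletonize(points))
    return SharingData(class_info=class_info, points=points)

# Example: a pedestrian subset is shared only as a skeleton, a traffic cone as raw points.
pedestrian = [(1.0 + 0.01 * i, 2.0, 0.1 * i) for i in range(100)]
print(build_sharing_data(pedestrian, "human").privacy_protected)          # True
print(build_sharing_data(pedestrian, "traffic_cone").points is not None)  # True
```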
In some embodiments, the first property data of the first subset of point data includes at least one of class information of the first object, center position information representing a center position of the first subset of point data, size information representing a size of the first subset of point data, movement information including at least one of a velocity or a direction of the first subset of point data, or shape information obtained by processing the shape of the first object. In some embodiments, the content of the sharing data includes at least one piece of information included in the first property data regardless of the type of class included in the class information of the first subset of point data. In some embodiments, the shape information is determined based on the class information of the first subset of point data, and the shape information includes at least one of skeleton information indicated by fewer than a predetermined number of points or by at least one line, and template information in which the first object is represented in a predetermined shape. In some embodiments, the privacy protection data includes at least a portion of the information included in the first property data, and the first property data includes shape information obtained by processing the shape of the first object. In some embodiments, the first subset of point data includes multiple pieces of point data, and the privacy protection data is generated based on at least one of the multiple pieces of point data corresponding to a region related to the privacy of the first object. In some embodiments, the set of point data includes a second subset of point data representing at least a portion of a second object, and if a class of the first object included in the class information of the first subset of point data is a class in which personal information must be protected and a class of the second object included in the class information of the second subset of point data is not a class in which personal information must be protected, the content of the sharing data includes privacy protection data in which at least a portion of the first subset of point data is processed, and the content of the sharing data includes at least a portion of second property data of the second subset of point data. In some embodiments, the set of point data includes a second subset of point data representing at least a portion of a second object, and if approval is obtained for sharing at least one of the second subset of point data or second property data, wherein the second property data includes class information of the second subset of point data, the content of the sharing data includes at least one of the second subset of point data or the second property data regardless of the type of the second object's class included in the class information of the second subset of point data. 
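The property data fields enumerated above (class information, center position, size, movement, and shape information) can be grouped into a single record. The sketch below is illustrative only; the field names, and the omission of frame-to-frame tracking for the movement fields, are assumptions rather than part of the claimed method.

```python
# Minimal sketch of the property data fields named above; field names are assumptions.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]  # (x, y, z)

@dataclass
class PropertyData:
    class_info: str                           # type of object, portion, or situation
    center_position: Point                    # center position of the subset of point data
    size: Tuple[float, float, float]          # extents of the subset of point data
    velocity: float                           # movement information: speed ...
    direction: Tuple[float, float, float]     # ... and direction of the subset
    shape_info: Optional[List[Point]] = None  # skeleton or template, chosen per class

def compute_property_data(points: List[Point], class_info: str) -> PropertyData:
    """Derive property data from one subset of point data (single frame only)."""
    n = len(points)
    center = tuple(sum(p[i] for p in points) / n for i in range(3))
    size = tuple(max(p[i] for p in points) - min(p[i] for p in points) for i in range(3))
    # Velocity and direction require tracking across frames; zeroed in this static sketch.
    return PropertyData(class_info, center, size, 0.0, (0.0, 0.0, 0.0))

print(compute_property_data([(0.0, 0.0, 0.0), (2.0, 1.0, 0.5)], "vehicle").center_position)
```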
In some embodiments, the set of point data includes multiple pieces of point data, wherein the multiple pieces of point data are generated based on at least one of a distance to an object measured by at least one of the sensors disposed on the first device or a reflectance of the object, wherein the controller generates the first subset of point data based on the multiple pieces of point data of the first object, which is located within a predetermined distance from the first device, wherein the controller generates a second subset of point data based on the multiple pieces of point data of a second object, which is located farther than the predetermined distance from the first device, and wherein the content of the sharing data includes the second subset of point data regardless of property data of the second subset of point data. According to another embodiment, there is provided a method of sharing sensor data of a first device with a second device, the method including: obtaining, by a controller of the first device, a set of point data from at least one sensor located in the first device, wherein the set of point data includes a first subset of point data representing at least a portion of a first object; generating, by the controller, first property data of the first subset of point data based on the first subset of point data, wherein the first property data includes class information of the first subset of point data; and generating sharing data for sharing with the second device using at least one of the first subset of point data and the first property data; wherein whether the content of the sharing data for sharing with the second device includes at least one of the first subset of point data or the first property data is determined based on at least one of a movability of the first object's class and a type of the first object's class. In some embodiments, if the first object's class included in the class information of the first subset of point data is related to an immovable object, the content of the sharing data for sharing with the second device includes the first subset of point data, and the method includes transmitting the sharing data to the second device. In some embodiments, the content of the sharing data includes at least one of a plurality of pieces of information included in the first property data of the first subset of point data. In some embodiments, if the controller obtains additional information related to whether the immovable object becomes movable after a certain time, the content of the sharing data includes the additional information. In some embodiments, if the first object's class included in the class information of the first subset of point data is related to a movable object, the content of the sharing data does not include the first subset of point data. In some embodiments, the content of the sharing data includes at least one of a plurality of pieces of information included in the first property data of the first subset of point data, and the method includes transmitting the sharing data to the second device. In some embodiments, the class information of the first subset of point data includes at least one of information about a type of the first object, information about a type of a portion of the first object, or information about a situation of a region related to the first object. 
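The content decision described in this embodiment can be sketched as a small selection function. The fragment below is a hypothetical combination of the distance rule and the movability rule stated above; the example class list and the 50 m threshold are assumptions.

```python
# Sketch of the content decision described above, combining the distance rule and the
# movability rule; the class list and the 50 m threshold are illustrative assumptions.
import math

IMMOVABLE_CLASSES = {"traffic_sign", "guard_rail", "building", "pothole"}

def sharing_content(class_info, center_position, points, property_data,
                    classified_range=50.0):
    """Return the pieces of information that go into the sharing data."""
    distance = math.hypot(center_position[0], center_position[1])
    if distance > classified_range:
        # Farther than the predetermined distance: share the raw subset of point data
        # regardless of its property data.
        return {"points": points}
    if class_info in IMMOVABLE_CLASSES:
        # Immovable object: the raw subset of point data and property data may be shared.
        return {"points": points, "property": property_data}
    # Movable object: share property data only, not the subset of point data.
    return {"property": property_data}

# A pedestrian 10 m away: only property data is shared.
print(sharing_content("human", (10.0, 3.0), [(10.0, 3.0, 0.0)], {"class": "human"}))
```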
In some embodiments, the set of point data includes a second subset of point data representing at least portion of a second object, wherein the second object located in a region separated by a predetermined distance from the first object, wherein the situation of the region related to the first object is determined based on the first subset of point data and the second subset of point data, wherein if the class information of the first subset of point data and a class information of the second subset of point data include an information about the situation of the region related to the first object, the controller obtains an additional information related to an end time of the situation of the region related to the first object, and wherein the content of the sharing data includes the additional information. In some embodiments, if the first object's class included in the class information of the first subset of point data is related to an immovable object, the content of the sharing data does not include the first subset of point data, and wherein if the first object's class included in the class information of the first subset of point data is related to a movable object, the content of the sharing data includes the first subset of point data. In some embodiments, at least one of the sensors includes at least one of a LiDAR, a camera, a radar and an ultrasonic sensor. In some embodiments, each of the first device and the second device includes at least one of a moving object, a server, a mobile device, or an infrastructure device. In some embodiments, there is provided a computer-readable recording medium having a program recorded thereon to perform the above-described vehicle control method and path generation method. According to still another embodiment, there is provided a method of sharing sensor data between a first device and a second device, the method including obtaining, by a controller included in the first device, a set of point data from at least one of a sensors located in the first device, wherein the set of point data includes a plurality of subset of point data, determining, by the controller, a property data of the subset of point data based on the subset of point data, generating, by the controller, a first sharing data for sharing with the second device based on the property data, transmitting, by the controller, the sharing data to the second device, wherein a content of the sharing data includes at least one of a plurality of pieces of information included in the property data, identifying, by the controller, an occurrence of an event at a first time point and generating, by the controller, according to identifying the event, a second sharing data different from the first sharing data, and wherein a content of the second sharing data includes at least a portion of the set of point data obtained within a first time period including the first time point. In some embodiments, the method is configured to transmit the second sharing data to the second device. In some embodiments, if receiving a request information requesting to share the second sharing data from at least one of the second device or a third device, in response to receiving the request information, the method being configured to transmit the second sharing data to a device transmitting the request information. 
In some embodiments, if receiving a request information from at least one of the second device or a third device requesting to share the second sharing data to a fourth device, in response to receiving the request information, the method being configured to transmit the second sharing data to the fourth device. In some embodiments, identifying the event comprise obtaining an information indicating the occurrence of the event from at least one of the second device or a third device. In some embodiments, identifying the event comprise identifying the occurrence of the event based on at least a portion of the set of point data, the plurality of subset of point data or the property data of the subset of point data. In some embodiments, the request information includes an information indicating the occurrence of the event, and wherein identifying the event comprise identifying the occurrence of the event based on the information indicating the occurrence of the event. In some embodiments, one of the plurality of subset of point data represents at least a portion of an object related to the event. In some embodiments, the event includes at least one of a traffic-event related to at least one of accident related to the first device or accident related to another device around the first device, an environment event related to environment around the first device, and a regulatory event related to regulatory about the first device or another device around the first device. In some embodiments, the first time point includes at least one of a time point at which the event identified or a time point at which the event occurred. In some embodiments, a content of the second sharing data includes at least a portion of the content of the first sharing data. In some embodiments, the second sharing data is generated based on a plurality of set of point data obtained during the first time period, when the second sharing data is generated at regular intervals, transmitting the second sharing data to the second device whenever the second sharing data is generated, or when the second sharing data is generated after the end of the first time period, transmitting the second sharing data to the second device after the second sharing data is generated. In some embodiments, the first time period includes a time point at which the event occurred. In some embodiments, the first time period includes a second time point at which the event ends. 
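As a rough illustration of the event-triggered behavior described above, the sketch below buffers timestamped sets of point data and, once an event is identified at a first time point, collects the point data obtained within a first time period including that time point as the second sharing data. The buffer size and the window margins are assumptions.

```python
# Illustrative sketch of event-triggered sharing; the buffer size and the 5 s margins
# are assumptions, not values taken from the disclosure.
from collections import deque
import time

class PointDataBuffer:
    """Keeps recent (timestamp, set_of_point_data) pairs for later event queries."""
    def __init__(self, max_frames=1000):
        self.frames = deque(maxlen=max_frames)

    def append(self, timestamp, point_set):
        self.frames.append((timestamp, point_set))

    def second_sharing_data(self, first_time_point, before=5.0, after=5.0):
        """Return the point data obtained within a first time period including the event."""
        start, end = first_time_point - before, first_time_point + after
        return [frame for t, frame in self.frames if start <= t <= end]

# The first sharing data (property data) is transmitted continuously; when an event is
# identified, the buffered raw point data around the first time point is packaged separately.
buf = PointDataBuffer()
buf.append(time.time(), [(0.0, 1.0, 0.0)])
second_data = buf.second_sharing_data(first_time_point=time.time())
print(len(second_data))
```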
According to still another embodiment, there is provided a method of sharing sensor data between a first device and a second device, the method including obtaining, by a controller included in the first device, a set of point data included in a sensor data from at least one of a sensors, wherein the set of point data includes subset of point data representing at least a portion of an object determining, by the controller, a property data of the subset of point data based on the subset of point data, generating, by the controller, a first sharing data for sharing with the second device based on the property data, transmitting, by the controller, the first sharing data to the second device, wherein a content of the first sharing data includes at least one of a plurality of pieces of information included in the property data to the second device, identifying, by the controller, occurrence of an event at a first time point and generating, by the controller, according to identifying the event, a second sharing data different from the first sharing data, and wherein a content of the second sharing data includes at least a portion of the set of point data obtained within a first time period including the first time point. According to still another embodiment, there is provided a method of working of a server, the method including identifying an event occurred in a first region at a first time, transmitting a first message to request a sensor data to a first device located within a first range from the first region, wherein the first message includes a time information of the event, wherein the time information is related to the first time in order to obtain the sensor data obtained within a time period related to the first time, transmitting a second message to notify the event to a second device located within a second range representing a predetermined region outside the first range, wherein the second message includes a location information of the event, wherein the location information is related to the first region such that the event is identified by the second device and receiving at least a portion of set of point data obtained within a first time period including the first time in response to the first message, and wherein the set of point data is obtained from at least one of sensors located in the first device. In some embodiments, the event includes at least one of a traffic-event related to at least one of accident related to the first device or accident related to another device around the first device, an environment event related to environment around the first device, and a regulatory event related to regulatory about the first device or another device around the first device. In some embodiments, when the first device is located in a first sub range, the set of point data obtained from at least one of sensors located in the first device includes a subset of point data representing at least a portion of an object related to the event, and wherein the first sub range represents an area in which information related to the event can be obtained within the first range. In some embodiments, the first region includes a region including all of objects related to the event. In some embodiments, identifying the event comprise obtaining a first information representing that the event occurs at the first time and a second information representing that the event occurs in the first region. 
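The server-side workflow described above (requesting sensor data from devices within a first range from the event region and notifying devices within a second range) can be sketched as follows. The message fields, the distance test, and the range values are illustrative assumptions.

```python
# Hypothetical server-side sketch of the two-message workflow described above;
# range values and message fields are assumptions.
import math

def within(position, center, radius):
    return math.hypot(position[0] - center[0], position[1] - center[1]) <= radius

def handle_event(event_location, event_time, devices,
                 first_range=100.0, second_range=500.0):
    """devices: list of dicts with 'id' and 'position'. Returns the messages to send."""
    messages = []
    for dev in devices:
        if within(dev["position"], event_location, first_range):
            # First message: request sensor data, carrying time information of the event.
            messages.append({"to": dev["id"], "type": "request_sensor_data",
                             "event_time": event_time})
        elif within(dev["position"], event_location, second_range):
            # Second message: notify the event, carrying location information of the event.
            messages.append({"to": dev["id"], "type": "notify_event",
                             "event_location": event_location})
    return messages

devices = [{"id": "car_a", "position": (50.0, 0.0)}, {"id": "car_b", "position": (400.0, 0.0)}]
print(handle_event((0.0, 0.0), 1700000000.0, devices))
```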
In some embodiments, the second device is included in a vehicle, and wherein when a path of the vehicle located in the second range is related to the first region, transmitting the second message to the vehicle. In some embodiments, each of the first device and the second device includes at least one of a moving object, a server, a mobile device, or an infrastructure device. In some embodiments, at least one of the sensors includes at least one of a LiDAR, a camera, a radar and an ultrasonic sensor. In some embodiments, there is provided a computer-readable recording medium having a program recorded thereon to perform the above-described vehicle control method and path generation method. According to still another embodiment, there is provided a method of processing sensor data obtained from a first device to control a vehicle, the method including obtaining, by a controller included in the vehicle, a first set of point data included in sensor data obtained from a first sensor included in the vehicle, wherein the first set of point data includes a first subset of point data representing at least a portion of a first object; obtaining, by the controller, a first property data of the first subset of point data corresponding to a position of the first object, wherein the first property data is represented by a first coordinate system based on a first origin; generating a first standard property data on the basis of the first property data, wherein the first standard property data is represented by a second coordinate system based on a second origin; obtaining, by the controller, a second standard property data corresponding to a position of a second object not represented by the first set of point data, wherein the second standard property data is represented by the second coordinate system; and controlling, by the controller, the vehicle on the basis of the first standard property data and the second standard property data, wherein the second standard property data is generated based on a second property data of a second subset of point data included in a second set of point data, and wherein the second set of point data is obtained from a second sensor included in the first device. In some embodiments, the generating of the first standard property data may include setting the first coordinate system in which the first property data is represented as the second coordinate system. In some embodiments, the obtaining of second standard property data may include receiving the second property data represented by a third coordinate system based on a third origin from the first device and generating the second standard property data on the basis of the second property data by aligning the third coordinate system with the second coordinate system. In some embodiments, the generating of the first standard property data may include aligning the first coordinate system in which the first property data is represented with the second coordinate system, and the generating the second standard property data may include aligning the third coordinate system in which the second property data is represented with the second coordinate system. In some embodiments, the third origin may correspond to a position of an optical origin of the second sensor included in the first device. 
In some embodiments, the first origin may correspond to a position of an optical origin of the first sensor included in the vehicle, and the second origin may correspond to at least one of the first origin or a predetermined static position. In some embodiments, the first property data may include at least one of a class information of the first object, a center position information indicating a center position of the first subset of point data, a size information indicating a size information of the first subset of point data, a movement information including at least one of a movement speed or a movement direction of the first subset of point data, an identification information for distinguishing the first subset of point data from other subsets of point data, and a shape information obtained by processing a shape of the first object, and the second property data may include at least one of a class information of the second object, a center position information indicating a center position of the second subset of point data, a size information indicating a size information of the second subset of point data, a movement information including at least one of a movement speed or a movement direction of the second subset of point data, an identification information for distinguishing the second subset of point data from other subsets of point data, and a shape information obtained by processing a shape of the second object. In some embodiments, the first property data may include a first center position information of the first subset of point data represented by the first coordinate system, a first standard center position information included in the first standard property data and generated based on the first center position information is represented by the second coordinate system, the second property data includes second center position information of the second subset of point data represented by a third coordinate system, and the second standard center position information included in the second standard property data and generated based on the second center position information may be represented by the second coordinate system. In some embodiments, the controlling of the vehicle may include controlling the vehicle to travel along a preset global path on the basis of a position of the vehicle and a position of a destination and generating a local path on the basis of the first standard property data and the second standard property data. 
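The alignment of property data expressed with different origins into one standard coordinate system amounts to a rigid transformation of each center position. The two-dimensional sketch below is illustrative only; the poses of the first and third origins relative to the second origin are assumed values.

```python
# Two-dimensional sketch of aligning property data from different origins into one
# standard (second) coordinate system; the poses used here are illustrative assumptions.
import math

def to_standard(point, origin_pose):
    """Transform a point expressed in a sensor coordinate system, whose origin sits at
    origin_pose = (x, y, heading) in the standard system, into the standard system."""
    x, y = point
    ox, oy, heading = origin_pose
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    return (ox + cos_h * x - sin_h * y,
            oy + sin_h * x + cos_h * y)

# First property data: center position in the vehicle's own (first) coordinate system,
# whose origin is chosen here as the second origin.
first_standard = to_standard((12.0, -1.5), (0.0, 0.0, 0.0))
# Second property data: received from the first device in a third coordinate system whose
# origin (the other sensor's optical origin) is assumed to lie at (30, 5) facing backwards.
second_standard = to_standard((4.0, 2.0), (30.0, 5.0, math.pi))
print(first_standard, second_standard)  # both positions are now comparable for control
```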
According to still another embodiment, there is provided a method of processing sensor data obtained from a first device to generate a path of a vehicle, the method including, by a controller included in the vehicle, obtaining a first set of point data included in sensor data acquired from a first sensor included in the vehicle, wherein the first set of point data includes a first subset of point data representing at least a portion of a first object; determining, by the controller, a first property data of the first subset of point data, wherein the first property data corresponds to the first object; generating, by the controller, a local path of the vehicle on the basis of at least one of the first set of point data or the first property data, wherein the local path of the vehicle includes at least one of a speed of the vehicle, a direction of the vehicle, and a position of the vehicle; receiving, by the controller, a second property data determined based on a second set of point data included in sensor data acquired from a second sensor placed in the first device, and wherein the second property data corresponds to a second object that is not recognized based on the first set of point data; and generating, by the controller, a modified path by changing at least some of the position of the vehicle, the speed of the vehicle, or the direction of the vehicle in the local path of the vehicle on the basis of the second property data and at least one of the first set of point data, the first property data, or the local path. In some embodiments, the local path may at least partially overlap a certain region where the second object is positioned, and the modified path may not overlap the certain region where the second object is positioned. In some embodiments, the vehicle may be controlled to travel along a preset global path on the basis of a position of the vehicle and a position of a destination, and the generating of the local path may include generating a local path including at least a portion of a region corresponding to the field of view of the first sensor; and controlling the vehicle to travel along the local path. In some embodiments, the generating of the modified path may include determining whether to modify the path of the vehicle on the basis of the probability of movement of the vehicle predicted based on the local path of the vehicle and the probability of movement of the second object predicted based on the second property data. In some embodiments, the method may further include receiving a third property data determined based on the second set of point data acquired from the second sensor placed in the first device, wherein the third property data corresponds to a third object; comparing the third property data and the first property data and determining whether the third object and the first object are the same object, and generating a modified path for considering the third object by changing at least some of a position of the vehicle, a speed of the vehicle, or a direction of the vehicle on the basis of the third property data, the second property data, and at least one of the first set of point data, the first property data, or the local path. 
In some embodiments, the method may further include receiving a third property data determined based on the second set of point data acquired from the second sensor placed in the first device, wherein the third property data corresponds to a third object; and comparing the third property data and the first property data and determining whether the third object and the first object are the same object, and wherein when it is determined that the first object and the third object are the same object, the controller does not generate the modified path for reflecting the third object. In some embodiments, the modified path may include at least one of a first modified path and a second modified path, the first modified path may include a path obtained by changing at least a portion of the local path of the vehicle, and the second modified path may include a path for stopping the vehicle in the local path of the vehicle. In some embodiments, the first device may include at least one of a moving object, an infrastructure, a mobile device, or a server. In some embodiments, each of the first sensor and the second sensor may include at least one of a LiDAR, a camera, a radar, and an ultrasonic sensor. In some embodiments, there is provided a computer-readable recording medium having a program recorded thereon to perform the above-described vehicle control method and path generation method. 1. Overview of Autonomous Driving System 1.1. Advanced Driver Assistance Systems (ADAS) Advanced driver-assistance systems, which are abbreviated as “ADAS,” are systems that assist drivers in driving and may refer to systems that can reduce drivers' fatigue and help drivers to drive safely. Advanced driver-assistance systems may include various devices and systems. For example, the advanced driver-assistance systems may include an automatic vehicle navigation device, an adaptive cruise control device, a lane keeping assistance system, a lane departure prevention assistance system, a blind spot warning device, an intelligent speed adaptation system, an intelligent headlight control system, a pedestrian protection system, an automatic parking system, a traffic sign recognition system, a driver drowsiness prevention system, a vehicle communication system, a hill descent control system, an electric vehicle driving warning system, a low-beam assistance system, a high-beam assistance system, a front collision warning system, smart cruise control (SCC), navigation-based smart cruise control (NSCC), a highway driving assistance system, a rear view monitor with e-Mirror (RVM), etc., but the present invention is not limited thereto. Also, a device equipped with the driver assistance system may share data with other devices through communication. This will be described in detail below. 1.2. Autonomous Driving System (AD) Also, an autonomous driving system (e.g., autonomous driving (AD), autonomous car, driverless car, self-driving car, robotic car) may be mounted in a vehicle to enable the vehicle to automatically travel without human intervention. Also, the autonomous driving system may share data with other devices through communication. This will be described in detail below Hereinafter, for convenience of description, the above-described driver assistance system and autonomous driving system are expressed as an autonomous driving system1000. 1.3. Elements of Autonomous Driving System (AD/ADAS). The autonomous driving system1000may be mounted inside a vehicle100. 
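For illustration, the path modification described above can be sketched as an overlap test between the local path and the position reported in the second property data, followed by either a detour (first modified path) or a stop (second modified path). The clearance and offset values below are assumptions.

```python
# Illustrative path-modification sketch; the overlap test, clearance, and lateral offset
# are assumptions, not the claimed method.
import math

def overlaps(path, obstacle_center, clearance=1.5):
    """True if any waypoint of the local path comes within `clearance` of the obstacle."""
    return any(math.hypot(x - obstacle_center[0], y - obstacle_center[1]) < clearance
               for x, y, _speed in path)

def modify_path(local_path, second_object_center, lateral_offset=2.0):
    """Return a first modified path (detour) or a second modified path (stop)."""
    if not overlaps(local_path, second_object_center):
        return local_path                                # no modification needed
    detour = [(x, y + lateral_offset, speed) for x, y, speed in local_path]
    if not overlaps(detour, second_object_center):
        return detour                                    # first modified path
    return [(x, y, 0.0) for x, y, _speed in local_path]  # second modified path: stop

# Local path as (x, y, speed) waypoints; the second object is not visible to the first
# sensor and is reported by the first device through its second property data.
path = [(float(i), 0.0, 8.0) for i in range(20)]
print(modify_path(path, (10.0, 0.0))[10])  # the detoured waypoint near the obstacle
```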
Also, the autonomous driving system1000may be mounted inside an aircraft, a ship, or an unmanned aerial vehicle as well as the vehicle100, but the present invention is not limited thereto. FIG.1is a diagram illustrating elements of an autonomous driving system according to an embodiment. Referring toFIG.1, an autonomous driving system1000according to an embodiment may include various elements. For example, the autonomous driving system1000may include at least one controller1100, at least one communication module1200, at least one sensor1300, at least one infotainment system1400, etc., but the present invention is not limited thereto. Hereinafter, various examples of the elements of the autonomous driving system1000will be described in detail. 1.3.1. Controller Referring toFIG.1again, the autonomous driving system1000according to an embodiment may include at least one controller1100. Also, the controller1100may control elements of an apparatus including the controller1100. For example, the controller1100may control at least one sensor1300or at least one communication module1200included in the autonomous driving system1000, but the present invention is not limited thereto. Also, the controller1100may acquire data from the at least one sensor1300or the at least one communication module1200. For example, the controller1100may acquire data from a light detection and ranging (LiDAR) device located in a vehicle, but the present invention is not limited thereto. The controller may acquire data from various sensors and a communication module. Also, the controller1100may be used to control a vehicle. For example, the controller1100may control the speed, direction, path, or the like of the vehicle, but the present invention is not limited thereto. The controller1100may control the various operations of the vehicle. Also, the controller1100may be expressed as an ECU, a processor, or the like depending on the embodiment, but the present invention is not limited thereto. Also, in this specification, the controller1100may refer to a controller of a device where the autonomous driving system1000is placed and may also refer to a controller placed in at least one sensor. However, the present invention is not limited thereto, and the controller1100may collectively refer to at least one controller placed in the autonomous driving system1000. 1.3.2. Communication Module Referring toFIG.1again, the autonomous driving system1000according to an embodiment may include at least one communication module1200. In this case, the at least one communication module1200may be used to share at least one piece of data with other devices. As an example, the controller1100may transmit or receive data to or from the outside through the at least one communication module1200. Also, the at least one communication module1200may be used to implement at least one vehicle-to-everything (V2X) system. In detail, the communication module1200may be used to implement at least one V2X system such as a vehicle-to-vehicle (V2V) system, a vehicle-to-infra (V2I) system, a vehicle-to-network (V2N) system, a vehicle-to-pedestrian (V2P) system, and a vehicle-to-cloud (V2C) system. Also, the autonomous driving system1000may share data acquired from the at least one sensor1300and relevant property data through the at least one communication module1200, but the present invention is not limited thereto. Also, the at least one communication module1200may include at least one antenna. 
For example, the at least one communication module may include at least one of Global Positioning System (GPS), Global Navigation Satellite System (GNSS), Amplitude Modulation (AM), Frequency Modulation (FM), Fourth Generation (4G), and Fifth Generation (5G) antennas, but the present invention is not limited thereto. 1.3.3. Sensor Referring toFIG.1again, the autonomous driving system1000according to an embodiment may include at least one sensor1300. Also, the at least one sensor1300according to an embodiment may be used to acquire vehicle surrounding information. For example, the at least one sensor may be used to acquire distance information of an object near a vehicle, but the present invention is not limited thereto. The sensor may be used to acquire various pieces of information about an object near a vehicle. FIG.2is a diagram specifically illustrating at least one sensor according to an embodiment. Referring toFIG.2, the at least one sensor1300may include at least one LiDAR device1310, at least one camera device1320, at least one radar device1330, at least one ultrasonic sensor1340, at least one GPS sensor1350, at least one inertial measurement unit1360, and the like. It will be appreciated that the type of the sensor is not limited thereto, and the at least one sensor1300may include all or only some of the above-described sensors1310,1320,1330,1340,1350, and1360. Referring toFIG.2again, the at least one sensor1300may include at least one LiDAR device1310. In this case, the LiDAR device1310may be defined as a device that measures a distance to an object using laser beams. More specifically, the at least one LiDAR device1310may output a laser beam. When the output laser beam is reflected by an object, the LiDAR device1310may receive the reflected laser beam and measure the distance between the object and the LiDAR device1310. Here, the LiDAR device1310may measure the distance to the object by using various schemes such as a triangulation scheme and a Time-of-Flight (TOF) scheme. Also, the LiDAR device1310may include a laser beam output unit. In this case, the laser beam output unit may emit a laser beam. Also, the laser beam output unit may include one or more laser beam output elements. Also, the laser beam output units may include a laser diode (LD), a solid-state laser, a high power laser, a light-emitting diode (LED), a vertical-cavity surface-emitting laser (VCSEL), an external cavity diode laser (ECDL), etc., but the present invention is not limited thereto. Also, the LiDAR device1310may include a light-receiving unit. In this case, the light-receiving unit may detect a laser beam. For example, the light-receiving unit may detect a laser beam reflected by an object located in a scanning region. Also, the light-receiving unit may receive a laser beam and generate an electric signal on the basis of the received laser beam. For example, the sensor1300may include a PN photodiode, a phototransistor, a PIN photodiode, an avalanche photodiode (APD), a single-photon avalanche diode (SPAD), silicon photomultipliers (SiPM), a comparator, a complementary metal-oxide-semiconductor (CMOS), a charge-coupled device (CCD), or the like, but the present invention is not limited thereto. Also, the LiDAR device1310may include an optical system. In this case, the optical system may change a flight path of a laser beam. For example, the optical system may change a flight path of a laser beam emitted from the laser beam output unit such that the laser beam is directed to a scanning region. 
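In the time-of-flight scheme mentioned above, the distance follows directly from the round-trip time of the emitted laser pulse, since the pulse travels to the object and back: distance equals half the speed of light multiplied by the round-trip time. A one-line check:

```python
# Time-of-flight distance: the pulse travels to the object and back, so the one-way
# distance is half of the speed of light multiplied by the round-trip time.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2.0

print(tof_distance(667e-9))  # a round trip of about 667 ns corresponds to roughly 100 m
```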
Also, the optical system may change a flight path of a laser beam by reflecting the laser beam. In this case, the optical system may include a first scanner for performing a scan in a first direction and a second scanner for performing a scan in a second direction. Also, the optical system may include a rotational optic for performing a scan while rotating both of the laser beam output unit and the light-receiving unit. For example, the optic system may include a mirror, a resonance scanner, a micro-electromechanical system (MEMS) mirror, a voice coil motor (VCM), a polygonal mirror, a rotating mirror, a Galvano mirror, or the like, but the present invention is not limited thereto. Also, the optical system may change a flight path of a laser beam by refracting the laser beam. For example, the optical system may include lenses, prisms, microlenses, microfluidic lenses, or the like, but the present invention is not limited thereto. Also, the optical system may change a flight path of a laser beam by changing the phase of the laser beam. For example, the optical system may include an optical phased array (OPA), a metalens, a metasurface, or the like, but the present invention is not limited thereto. Also, the at least one LiDAR device1310may be placed in various positions of a vehicle so as to secure a field of view of the surroundings of the vehicle. For example, the LiDAR device1310may include a plurality of LiDARs1311to1314. The plurality of LiDARs1311to1314may include one or multiple LiDARs placed in each of various positions, e.g., the front, the rear, the side, and the roof of the vehicle. In detail, when the first LiDAR1311is placed on the front of the vehicle, the first LiDAR1311may detect distance information regarding an object located in front of the vehicle, and the first LiDAR1311may be placed on a headlamp, a front bumper, a grille, or the like of the vehicle, but the present invention is not limited thereto. Also, when the second LiDAR1312is placed on the side of the vehicle, the second LiDAR1312may detect distance information of an object located to the side of the vehicle, and the second LiDAR1312may be placed on a side mirror, a side garnish, or the like of the vehicle, but the present invention is not limited thereto. Also, when the third LiDAR1313is placed on the rear of the vehicle, the third LiDAR1313may detect distance information of an object located behind the vehicle, and the third LiDAR1313may be placed on a rear bumper, a brake light, or the like of the vehicle, but the present invention is not limited thereto. Also, when the fourth LiDAR1314is placed on the roof of the vehicle, the fourth LiDAR1314may detect distance information of an object located in front of, behind, and to the side of the vehicle, and the fourth LiDAR1314may be placed on a sunroof, roof, or the like of the vehicle, but the present invention is not limited thereto. Referring toFIG.2again, the at least one sensor1300according to an embodiment may include at least one camera device1320. In this case, the at least one camera device1320may acquire shape and/or color information regarding an object located near a vehicle equipped with the autonomous driving system1000. Also, the at least one camera device1320may be placed in various positions of a vehicle so as to secure shape and/or color information regarding the surroundings of the vehicle and the interior of the vehicle. For example, the camera device1320may include a plurality of cameras1321to1323. 
The plurality of cameras1321to1323may include one or multiple cameras placed in each of various positions, e.g., the front, the side, the rear, and the inside of the vehicle. In detail, when the first camera1321is placed on the front of the vehicle, the first camera1321may detect shape and/or color information regarding an environment in front of the vehicle, and the first camera1321may be placed on a black box, a headlight, a grille, or the like of the vehicle, but the present invention is not limited thereto. Also, when the second camera1322is placed on the rear of the vehicle, the second camera1322may detect shape and/or color information regarding an environment behind the vehicle, and the second camera1322may be placed on a rear bumper, a brake light, or the like of the vehicle, but the present invention is not limited thereto. Also, when the third camera1323is placed inside the vehicle, the third camera1323may detect shape and/or color information regarding an environment inside the vehicle, and the third camera1323may be placed on a black box, a room mirror, or the like of the vehicle, but the present invention is not limited thereto. Also, the camera device1320may include a stereo camera. Here, the stereo camera may refer to a camera for determining a distance to an object as well as the shape of the object using a plurality of cameras. Also, the camera device1320may include a time-of-flight (ToF) camera. Here, a ToF camera may refer to a camera capable of determining a distance to an object by employing time-of-flight techniques. Referring toFIG.2again, the at least one sensor1300according to an embodiment may include at least one radar device1330. In this case, the at least one radar device1330may be a device for detecting a distance to an object and a position of an object using electromagnetic waves. Also, the at least one radar device1330may include various types of radar devices in order to acquire accurate distance information of objects located at long distances from the vehicle, objects located at medium distances, and objects located at short distances. For example, the at least one radar device1330may include a first radar1331for acquiring distance information of objects located at long distances, a second radar1332for acquiring distance information of objects located at medium or short distances, etc., but the present invention is not limited thereto. Also, the at least one radar device1330may be placed in various positions of a vehicle so as to secure a field of view of the surroundings of the vehicle. For example, the at least one radar device1330may be placed on the front, the rear, or the side of the vehicle, but the present invention is not limited thereto. Referring toFIG.2again, the at least one sensor1300according to an embodiment may include at least one ultrasonic sensor1340. In this case, the at least one ultrasonic sensor1340may be a device for detecting whether an object is present near a vehicle. Also, the at least one ultrasonic sensor1340may be placed in various positions of a vehicle so as to detect whether an object is present near the vehicle. For example, the at least one ultrasonic sensor1340may be placed on the front, the rear, or the side of the vehicle, but the present invention is not limited thereto. Referring toFIG.2again, the at least one sensor1300according to an embodiment may include at least one GPS sensor1350. In this case, the at least one GPS sensor1350may be a device for finding the global position of a vehicle. 
In detail, the at least one GPS sensor1350may forward global position information of the GPS sensor1350to the controller1100through the Global Positioning System. Referring toFIG.2again, the at least one sensor1300according to an embodiment may include at least one inertial measurement unit (IMU)1360. In this case, the at least one IMU1360is an electronic device that measures and reports a specific force and an angular ratio of a vehicle and a magnetic field surrounding a vehicle by using a combination of an accelerometer, a tachometer, and a magnetometer. In detail, the at least one IMU1360may be activated by detecting a linear acceleration using at least one accelerometer and by detecting a rotational speed using at least one gyroscope. 1.3.4. Infotainment System Referring toFIG.1again, the autonomous driving system1000according to an embodiment may include at least one infotainment system1400. In this case, the at least one infotainment system1400according to an embodiment may display at least one piece of information to an occupant. FIG.3is a diagram showing a display scheme through an infotainment system according to an embodiment. Referring toFIG.3, the infotainment system1400according to an embodiment may include a high-definition map1420, a message window1430, a screen1410for showing the high-definition map1420and the message window1430to an occupant, an information field1440for providing object information acquired from a sensor, etc., but the present invention is not limited thereto. Referring toFIG.3again, the infotainment system1400according to an embodiment may include a high-definition map that shows position information of a host vehicle and position information of a nearby object. In this case, the high-definition map1420may be downloaded by the controller1100. In detail, the high-definition map1420may be generated by and stored in an external server, and the controller1100may download the high-definition map1420and display the high-definition map1420to an occupant through the infotainment system1400. Also, the high-definition map1420may be generated based on sensor data acquired from the at least one sensor1300included in the autonomous driving system1000. In detail, the LiDAR device1310included in the at least one sensor1300may acquire distance information of an object outside the vehicle. In this case, the controller1100may generate a high-definition map1420including the position information of the object outside the vehicle on the basis of the distance information and may display the high-definition map1420to an occupant through the infotainment system1400. Also, the controller1100may generate the high-definition map using the sensor data on the basis of a downloaded map. In detail, the controller1100may implement the high-definition map1420by generating position information of the object using the sensor data and by showing the position information of the object in the downloaded map and then may display the high-definition map1420to an occupant through the infotainment system1400. Referring toFIG.3again, the infotainment system1400according to an embodiment may include a message window1430for displaying, to a user, a message transmitted from the outside. 
For example, the message window1430may include a message received from the outside, information to be forwarded to an occupant, an interface for receiving an input from an occupant, information indicating whether data transmission is approved by an external server, etc., but the present invention is not limited thereto. More specifically, when a request message for sensor data is received from an external server, the controller1100may display the request message through the message window1430. In this case, an occupant may enter an input for transmitting the sensor data in response to the request message. Also, when a notification message indicating that a traffic event has occurred is received from an external server, the controller1100may display the notification message through the message window1430. Also, the message window1430may be displayed on a separate screen different from that of the high-definition map1420. Also, the message window1430may be displayed on the same screen as the high-definition map1420. In detail, the message window1430may be displayed so as not to overlap the high-definition map1420, but the present invention is not limited thereto. Referring toFIG.3again, the infotainment system1400according to an embodiment may include a screen1410for showing the high-definition map1420and the message window1430. Also, the screen1410may include a touch sensor, an input button, etc., but the present invention is not limited thereto. In this case, when a touch input is received from an occupant, the screen1410may transmit the content of the touch input of the occupant to the controller1100. For example, when the controller1100forwards, to the occupant through the message window1430, a request message for sensor data received from an external server, the occupant may enter a response to the request message by touching the screen1410. Also, when the controller1100displays, through the message window1430, a notification message for a traffic event received from an external server, the occupant may enter an input indicating whether the notification message is confirmed. Referring toFIG.3again, the infotainment system1400according to an embodiment may include an information field1440for showing information acquired from the at least one sensor1300in the windshield of a vehicle. In this case, the windshield may include an electronic screen to show the information field1440. More specifically, in order to forward information acquired through the at least one sensor1300to an occupant, the controller1100may show the information field1440in the windshield of the vehicle through the infotainment system1400. Also, the information field1440may show class information, speed, movement direction, etc. that are acquired when a LiDAR device included in the at least one sensor1300scans an object, but the present invention is not limited thereto. The information field1440may further include a plurality of pieces of information acquired by various sensors. Also, the information field1440may be displayed on the screen1410or the windshield in an augmented reality (AR) scheme or a virtual reality (VR) scheme. 1.4. Autonomous Driving System 1.4.1. Autonomous Driving System Using Sensor 1.4.1.1. Overview An autonomous driving system1000may drive a vehicle with no or minimum driver intervention on the basis of sensor data acquired using at least one sensor1300. 
For example, the autonomous driving system1000may autonomously drive a vehicle on the basis of data acquired using at least one of at least one LiDAR device1310, at least one camera device1320, at least one radar device1330, and at least one ultrasonic sensor1340which are placed inside the vehicle. Also, the autonomous driving system1000may perform simultaneous localization and mapping (SLAM)-based autonomous driving and high-definition-map-based autonomous driving on the basis of the sensor data, but the present invention is not limited thereto. In detail, a vehicle that performs the SLAM-based autonomous driving may travel autonomously by recognizing a surrounding environment through the at least one sensor1300, creating a map of a corresponding space, and accurately determining its own position. In addition, a vehicle that performs high-definition-map-based autonomous driving may travel autonomously by recognizing an object near the vehicle on the basis of a high-precision map acquired from the controller1100. Also, the autonomous driving system1000may perform pedestrian detection, collision avoidance, traffic information recognition, parking assistance, surround view, proximity collision risk detection, etc. through the at least one sensor1300, but the present invention is not limited thereto. Hereinafter, specific examples of the autonomous driving system using at least one sensor will be described in detail. 1.4.1.2. Autonomous Driving System for Safety. The autonomous driving system1000may include a system for the safety of pedestrians and occupants of a vehicle equipped with the autonomous driving system1000. Also, the safety system may operate based on sensor data acquired from at least one sensor1300included in the autonomous driving system1000. The description of the autonomous driving system for safety is about various examples controlled by an autonomous vehicle and may be implemented with the following descriptions in Sections 2 to 5. The autonomous driving system1000may detect a driving pattern of a nearby moving object and then detect an abnormal driving pattern of the moving object. FIG.4is a diagram showing a situation in which an autonomous driving system detects a moving object showing an abnormal driving pattern according to an embodiment. Referring toFIG.4, a first vehicle101equipped with the autonomous driving system1000may detect a driving pattern of a nearby object through at least one sensor1300included in the autonomous driving system1000. More specifically, the controller1100included in the autonomous driving system1000may detect a driving pattern of a second vehicle102located near the first vehicle101on the basis of sensor data acquired from the at least one sensor1300. Also, the controller1100may track the movement of the second vehicle102in order to detect an abnormal driving pattern of the second vehicle102. In detail, when the speed and direction of the second vehicle102irregularly change, the controller1100may control the at least one sensor1300to track the movement of the second vehicle102. Also, the controller1100may determine whether the driving pattern of the second vehicle102is abnormal on the basis of the sensor data. In detail, the controller1100may acquire movement information including the speed and direction of the second vehicle102through the at least one sensor1300. In this case, the controller1100may determine that the change in the speed and direction of the second vehicle102is abnormal on the basis of the movement information. 
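The tracking-and-decision logic described here, in which changes in the observed vehicle's speed and heading are quantified and the pattern is treated as abnormal when the quantity exceeds a threshold (the threshold comparison is described in more detail in the next paragraph), can be sketched as follows. The score and the threshold value are illustrative assumptions.

```python
# Sketch of quantifying a tracked vehicle's movement and comparing it to a threshold;
# the score and the threshold value are illustrative assumptions.
import math

def movement_score(track):
    """track: list of (speed_mps, heading_rad) samples for the observed vehicle."""
    speeds = [s for s, _ in track]
    headings = [h for _, h in track]
    speed_changes = [abs(b - a) for a, b in zip(speeds, speeds[1:])]
    heading_changes = [abs(b - a) for a, b in zip(headings, headings[1:])]
    return (sum(speed_changes) / len(speed_changes)
            + sum(heading_changes) / len(heading_changes))

def is_abnormal(track, threshold=1.0):
    """Flag the driving pattern as abnormal when the quantified movement exceeds the threshold."""
    return movement_score(track) > threshold

steady = [(15.0, 0.0)] * 10
erratic = [(15.0 + 5.0 * math.sin(i), 0.3 * (-1) ** i) for i in range(10)]
print(is_abnormal(steady), is_abnormal(erratic))  # False True
```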
Also, the controller1100may set a driving-related threshold to detect an abnormal driving pattern of the second vehicle102. In detail, the controller1100may quantify the movement of the second vehicle102acquired through the at least one sensor1300and compare the quantified movement to the threshold. In this case, when the movement of the second vehicle102exceeds the threshold, the controller1100may determine that the second vehicle102has an abnormal driving pattern. Also, when the abnormal driving pattern of the second vehicle102is detected, the controller1100may control the first vehicle101to avoid a collision with the second vehicle102. For example, the controller1100may decelerate the first vehicle101, accelerate the first vehicle101, or re-route the first vehicle101, but the present invention is not limited thereto. Also, when the at least one sensor1300is the LiDAR device1310, the controller1100may detect a moving object having an abnormal driving pattern by utilizing distance information acquired through the LiDAR device1310. In this case, the controller1100may generate information regarding the position and speed of an object present in the field of view of the LiDAR device1310on the basis of distance information of the object. In detail, the autonomous driving system1000may generate a vector map of a nearby object using data acquired from the LiDAR device1310. In more detail, the controller1100may acquire a vector map including the speed and the like of the second vehicle102on the basis of distance information of the second vehicle102acquired by the LiDAR device1310. Also, the autonomous driving system1000may determine whether the second vehicle102has an abnormal driving pattern using the vector map. Also, the controller1100may control the first vehicle on the basis of the vector map. Also, the autonomous driving system1000may compute a space where a vehicle can move in case an emergency occurs in the vicinity. FIG.5is a diagram showing a situation in which an autonomous driving system recognizes an accident of a vehicle in front while driving according to an embodiment. Referring toFIG.5, a first vehicle103equipped with the autonomous driving system1000may detect a space where the first vehicle103can move through at least one sensor1300included in the autonomous driving system1000. In detail, a controller1100included in the autonomous driving system1000may pre-compute a space200where the first vehicle103can move on the basis of sensor data acquired from the outside or the at least one sensor1300. In detail, the controller1100may compute spaces where no object is detected and which has a predetermined volume in a space indicated by the sensor data. Also, the controller1100may select a space in which the first vehicle103can travel from among the computed spaces and store the selected space. For example, when the available space200is in a diagonal direction of the first vehicle103, the controller1100may store information related to the available space200. However, the present invention is not limited thereto, and the controller1100may store information related to the space200where the first vehicle103can move without risk of collision with a nearby object among spaces which are not set as the driving path of the first vehicle103. Also, when an emergency accident occurs in front of the first vehicle103, the controller1100may move the first vehicle103to the available space200using previously stored space information. 
Also, when the controller1100recognizes the occurrence of the emergency near the first vehicle103, the controller1100may compute the space200where the first vehicle103can move. In detail, when the controller recognizes an accident between a second vehicle104and a third vehicle105on the basis of the sensor data, the controller1100may compute the space200where the first vehicle103can move. In this case, the controller1100may recognize the accident through a relative position between a set of data corresponding to the second vehicle104and a set of data corresponding to the third vehicle105, which are included in the sensor data, but the present invention is not limited thereto. Also, when the controller1100computes the space200where the first vehicle103can move, the controller1100may control the first vehicle103to move the first vehicle103to the available space200. For example, the controller1100may control the steering of the first vehicle103to move the first vehicle103to the available space200, but the present invention is not limited thereto. Also, when the at least one sensor1300is the LiDAR device1310, the controller1100may acquire empty-space data using data acquired from the LiDAR device1310. In this case, the controller1100may generate information regarding the position and speed of an object placed in the field of view of the LiDAR device1310on the basis of distance information of the object. In detail, the controller1100may generate a three-dimensional (3D) map using position information of an object near the first vehicle103which is acquired by the LiDAR device1310. In this case, the controller1100may store a space of the 3D map where there is no object data as data regarding the available space200. Also, when an emergency occurs near the first vehicle103, the controller1100may move the first vehicle103to the available space200using the stored space data. Also, when the autonomous driving system1000recognizes that a second vehicle107located in front of a first vehicle106is suddenly moving backward, the autonomous driving system1000may control the first vehicle106to avoid a collision with the second vehicle107. FIG.6is a diagram showing a situation in which an autonomous driving system recognizes a sudden backward movement of a vehicle in front according to an embodiment. Referring toFIG.6, the first vehicle106equipped with the autonomous driving system1000may detect the movement of the second vehicle107through the at least one sensor1300included in the autonomous driving system1000. For example, the controller1100included in the autonomous driving system1000may detect a movement direction of the second vehicle107located in front of the first vehicle106on the basis of the sensor data acquired from the at least one sensor1300. More specifically, the controller1100may acquire movement information including the movement speed and movement direction of the second vehicle107through the at least one sensor1300. In this case, when the controller1100determines that the second vehicle107moves backward on the basis of the movement information, the controller1100may transmit a notification for warning the second vehicle107to the second vehicle107. Also, the controller1100may sound a horn to warn the second vehicle107. Also, when the controller1100determines that there is a space where the first vehicle106can move behind the first vehicle106, the controller1100may move the first vehicle106to the space to which movement is possible. 
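The backward-movement determination described above can be illustrated with a minimal sketch in which the range to the vehicle in front is monitored over successive frames while the host vehicle is stationary; the function name and the approach threshold are assumptions for explanation.

    def is_moving_backward(distances, min_approach=0.3):
        # distances: range readings (m) to the front vehicle from successive frames,
        # taken while the host vehicle is stopped. Returns True when the range
        # keeps decreasing by more than min_approach per frame.
        steps = [d0 - d1 for d0, d1 in zip(distances, distances[1:])]
        return len(steps) > 0 and all(step > min_approach for step in steps)

    print(is_moving_backward([8.0, 7.4, 6.9, 6.1]))  # True: the gap closes every frame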
Also, when the at least one sensor1300is the LiDAR device1310, the controller1100may detect whether the second vehicle107moves backward using data acquired from the LiDAR device1310. In this case, the controller1100may generate movement information indicating the movement direction and movement speed of the second vehicle107on the basis of position information of the second vehicle107located in the field of view of the LiDAR device1310. More specifically, the controller1100may determine whether the second vehicle107moves backward on the basis of the movement information of the second vehicle107. For example, when the second vehicle107approaches the first vehicle106, the controller1100may determine that the second vehicle107is moving backward. Also, when the distance between the first vehicle106, which is stopped, and the second vehicle107decreases, the controller1100may determine that the second vehicle107is moving backward. Also, the autonomous driving system1000may detect a change in the direction of a second vehicle109located near a first vehicle108. FIG.7is a diagram showing a situation in which an autonomous driving system tracks the movement of a vehicle's wheel according to an embodiment. Referring toFIG.7, the first vehicle108equipped with the autonomous driving system1000may detect a change in the direction of the second vehicle109through at least one sensor1300included in the autonomous driving system1000. For example, the controller1100included in the autonomous driving system1000may detect a change in the direction of the second vehicle109by detecting a wheel109aof the second vehicle109located near the first vehicle108using sensor data acquired through the at least one sensor1300. In this case, when an object included in the sensor data is determined as the wheel109aof the second vehicle, the controller1100may track the wheel109aof the second vehicle. Also, the controller1100may control a scan pattern of the at least one sensor1300to continuously acquire sensor data regarding the wheel109aof the second vehicle. Also, when the wheel109aof the second vehicle is directed to the front of the first vehicle108, the controller1100may control the first vehicle108to prevent the first vehicle108from colliding with the second vehicle109. For example, the controller1100may decelerate the first vehicle108or re-route the first vehicle108, but the present invention is not limited thereto. Also, when the direction of the wheel109aof the second vehicle changes suddenly, the controller1100may decelerate the first vehicle108or re-route the first vehicle108regardless of the current direction of the wheel109aof the second vehicle. Also, when the at least one sensor1300is the LiDAR device1310, the controller1100may detect a change in the direction of the wheel109aof the second vehicle using data acquired from the LiDAR device1310. In this case, the controller1100may detect the movement of the wheel109aof the second vehicle using temporal position information of the wheel109aof the second vehicle located in the field of view of the LiDAR device1310. More specifically, the controller1100may generate a 3D map including data on the wheel109aof the second vehicle or predicted movement information of the second vehicle109which is predicted through the data on the wheel109aof the second vehicle by using the LiDAR device1310. In this case, the 3D map may include position information of the wheel109aof the second vehicle that changes over time.
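As one possible, illustrative interpretation of detecting a change in the direction of the wheel109afrom temporal position information, the sketch below estimates the wheel orientation in each frame as the principal axis of the wheel's (x, y) point coordinates and reports whether that orientation changes by more than a limit between frames. The names, the limit, and the use of a principal-axis fit are assumptions for explanation.

    import numpy as np

    def wheel_heading(points_xy):
        # Estimates the orientation of the wheel as the direction of largest
        # variance (principal axis) of its (x, y) point coordinates, in radians.
        pts = np.asarray(points_xy, dtype=float)
        centered = pts - pts.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
        major = eigvecs[:, np.argmax(eigvals)]
        return float(np.arctan2(major[1], major[0]))

    def wheel_direction_changed(frames, limit=np.radians(15)):
        # frames: list of per-frame wheel point sets, e.g. [[(x, y), ...], ...].
        # The principal axis is only defined up to 180 degrees, so differences
        # are folded into the range of 0 to 90 degrees before comparison.
        angles = [wheel_heading(f) % np.pi for f in frames]
        for a0, a1 in zip(angles, angles[1:]):
            d = abs(a1 - a0)
            if min(d, np.pi - d) > limit:
                return True
        return False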
Also, the controller1100may detect the change in the direction of the wheel109aof the second vehicle using the 3D map to control the first vehicle108. Also, the autonomous driving system1000may detect a risk factor of a road on which a first vehicle110is traveling (e.g., a crack in the road or black ice present on the road). FIG.8is a diagram illustrating a method of detecting, by an autonomous driving system, black ice present on a road according to an embodiment. Referring toFIG.8, the first vehicle110equipped with the autonomous driving system1000may detect a surface condition of the road on which the first vehicle110is traveling through the at least one sensor1300included in the autonomous driving system1000. For example, the controller1100included in the autonomous driving system1000may detect a crack in the road on which the first vehicle is traveling on the basis of the sensor data acquired from the at least one sensor1300. Also, the controller1100may detect black ice present on the road on the basis of the sensor data, but the present invention is not limited thereto. Also, the LiDAR device1310included in the at least one sensor1300may acquire sensor data including intensity information associated with the reflectance of an object. In detail, the sensor data may include intensity information of a first region300included in the field of view of the at least one sensor1300. In this case, the intensity information may include an intensity value311, which is a value corresponding to the reflectance of the object. Also, a mean, a deviation, and a standard deviation may be used as the intensity value included in the intensity information, and at least one piece of data may be amplified, but the present invention is not limited thereto. Also, the controller1100may determine the risk of the road on the basis of an intensity distribution chart310representing a space-specific distribution of the intensity values included in the intensity information. In this case, the intensity distribution chart310may include an intensity value311for each point of the first region300. Also, when the intensity value311changes rapidly with respect to a predetermined boundary312in the intensity distribution chart310of the first region300, the controller1100may determine that a region within the predetermined boundary312is a dangerous region. Also, the controller1100may set an intensity threshold using the average of intensity values for each region of the road. In detail, the controller1100may compute the average of intensity values of each point on the road on which the vehicle is traveling and may set an intensity threshold on the basis of the average. In this case, the controller1100may compare the intensity threshold to the average of the intensity values of each point of the first region300. Also, when the comparison result is that the average of the intensity values of the first region300is greater than or equal to the intensity threshold, the controller1100may determine that the first region300is a dangerous region. Also, the controller1100may adjust sensor activation energy in order to detect a road risk using the sensor data acquired through the at least one sensor1300. For example, the controller1100may adjust the sensor activation energy, detect a corresponding pattern, and detect a road risk, but the present invention is not limited thereto. Also, the dangerous region may include a region that may become dangerous to the driving of the first vehicle110.
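The intensity-threshold comparison described above reduces to a simple check. In the sketch below, the threshold is derived from the road-wide average intensity by an assumed scaling factor, and a region whose mean intensity meets or exceeds that threshold is flagged as a dangerous region; the names and the factor are illustrative assumptions.

    def is_dangerous_region(region_intensities, road_intensities, factor=1.5):
        # Both arguments are lists of per-point intensity values.
        # The threshold is set from the average intensity of the whole road,
        # and the region is flagged when its own average meets or exceeds it.
        threshold = factor * (sum(road_intensities) / len(road_intensities))
        region_mean = sum(region_intensities) / len(region_intensities)
        return region_mean >= threshold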
For example, the dangerous region may include a region having black ice and a region having a road crack, but the present invention is not limited thereto. Also, the autonomous driving system1000may detect an illegally parked vehicle through sensor data. More specifically, when a vehicle is stopped on a road, the autonomous driving system1000may determine whether a space associated with the stopped vehicle is an available parking space and may determine that the vehicle is an illegally parked vehicle when the vehicle is stopped for a predetermined time or more even though the space is not an available parking space. In this case, the autonomous driving system1000may detect a parking line through at least one sensor1300and determine whether parking is available on the basis of the detected parking line. Also, the autonomous driving system1000may determine an available parking region using a prestored map. Also, the autonomous driving system1000may compare the width of a road to the width of a first vehicle111equipped with the autonomous driving system1000and may determine whether the first vehicle111can travel on the road. FIG.9is a diagram showing a situation in which a vehicle equipped with an autonomous driving system detects an illegally parked vehicle according to an embodiment. Referring toFIG.9, the controller1100included in the autonomous driving system1000may determine whether the first vehicle111can move while avoiding a second vehicle112which is illegally parked on a road. In detail, the controller1100included in the autonomous driving system1000may compute a space in which the first vehicle111can travel on the basis of sensor data acquired from the at least one sensor1300. For example, when the second vehicle112is stopped on a road on which the first vehicle111is traveling, the controller1100may compare the width pa of the travelable road to the width pb of the first vehicle. In this case, the width pb of the first vehicle may be prestored in the controller1100. Also, when the width pa of the road is greater than the width pb of the first vehicle by a predetermined length or more, the controller1100may control the first vehicle111such that the first vehicle111can travel on the traveling road while avoiding the second vehicle112. Also, the controller1100may determine a space between a center line and the second vehicle112on the basis of the sensor data. In this case, the controller1100may determine whether the space is a space through which the first vehicle111can pass and then may control the first vehicle. Also, when the at least one sensor1300is the LiDAR device1310, the controller1100may detect a space in which the first vehicle111can travel on the basis of distance information acquired from the LiDAR device1310. In this case, the controller1100may generate position information of the center line and the second vehicle112on the basis of distance information of the centerline and the second vehicle112. More specifically, the controller1100may generate a 3D map on the basis of the sensor data acquired from the LiDAR device1310. In this case, the controller1100may determine a space in which the first vehicle111can travel on the basis of the 3D map. Also, the autonomous driving system1000may detect an object approaching a vehicle equipped with the autonomous driving system1000within a dangerous radius. 
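The comparison between the width pa of the travelable road and the width pb of the first vehicle111described above amounts to a single inequality; in the sketch below, the safety margin is an assumed value used only for illustration.

    def can_pass(road_width_pa, vehicle_width_pb, margin=0.5):
        # Returns True when the free road width exceeds the stored vehicle width
        # by at least the assumed safety margin (all values in meters).
        return road_width_pa >= vehicle_width_pb + margin

    print(can_pass(3.1, 1.9))  # True: 3.1 m of free road for a 1.9 m wide vehicle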
In detail, the autonomous driving system1000may determine the speed, direction, and the like of a two-wheeled vehicle approaching in the vicinity on the basis of the sensor data acquired from the at least one sensor1300. In this case, the controller1100may display the speed and direction of the two-wheeled vehicle to an occupant through the infotainment system. Also, when the controller1100determines that the two-wheeled vehicle is located within a dangerous radius on the basis of the speed and direction, the controller may inform an occupant of the presence of the two-wheeled vehicle. For example, the controller1100may perform an operation of locking the doors of the vehicle, an operation of notifying of danger through the infotainment system1400, an operation of displaying the presence of the two-wheeled vehicle to the side mirror of the vehicle, and the like, but the present invention is not limited thereto. Also, the autonomous driving system1000may further include a short-range LiDAR device in order to clearly determine the presence of the two-wheeled vehicle. In this case, the short-range LiDAR device may acquire distance information of an object close to the vehicle and provide the distance information to the controller1100. However, the present invention is not limited thereto, and the autonomous driving system1000may further include at least one sensor for detecting a nearby object. Also, a first vehicle equipped with the autonomous driving system1000may detect a situation in which an oncoming vehicle makes a sudden U-turn through the sensor data. In detail, the controller1100included in the autonomous driving system1000may form a vector map including the speed and direction of a second vehicle, which is oncoming, through sensor data acquired from a LiDAR device included in the at least one sensor1300. Also, the autonomous driving system1000may detect whether the second vehicle is making a U-turn using the vector map. Also, when the second vehicle makes a sudden U-turn, the controller1100may control the speed of the first vehicle. Also, before the first vehicle equipped with the autonomous driving system1000departs, the autonomous driving system1000may detect whether there is an object near the first vehicle. More specifically, the controller1100included in the autonomous driving system1000may control at least one sensor1300to determine whether there is an object near the first vehicle before moving the first vehicle. For example, when a cat is present under the first vehicle, the at least one sensor1300may detect the presence of the cat and transmit the presence of the cat to the controller1100. In this case, the controller1100may stop the first vehicle until the cat leaves. Also, the autonomous driving system1000may track a pedestrian near the first vehicle equipped with the autonomous driving system1000and prepare for a dangerous situation. Here, the pedestrian may include various people such as men, women, children, and the elderly. According to an embodiment, the autonomous driving system1000may identify the type of the pedestrian. In detail, the controller1100included in the autonomous driving system1000may detect the movement of a pedestrian within a predetermined distance from the vehicle through at least one sensor1300. Also, when the pedestrian disappears from the field of view of the at least one sensor, the controller1100may generate tracking data for predicting the movement direction of the pedestrian by using already acquired position information of the pedestrian. 
Also, the controller1100may prestore a control method to prepare for a situation in which the pedestrian suddenly enters a road on the basis of the tracking data. For example, the control method may include stopping the vehicle or changing a path of the vehicle, but the present invention is not limited thereto. Also, the autonomous driving system1000may determine a region related to legal regulations such as a child protection zone and control the vehicle. In detail, the autonomous driving system1000may determine a child protection zone by scanning a sign indicating the child protection zone through at least one sensor1300. Also, the autonomous driving system1000may determine a child protection zone using prestored information related to the child protection zone. In this case, when the vehicle equipped with the autonomous driving system1000enters a child protection zone, the controller1100may control the vehicle to travel at a predetermined speed or less. 1.4.1.3. Autonomous Driving System for Convenience. The autonomous driving system1000may include a system for the convenience of occupants of a vehicle equipped with the autonomous driving system1000. Also, the system for the convenience may operate based on sensor data acquired from at least one sensor1300included in the autonomous driving system1000. The description of the autonomous driving system for the convenience is about various examples controlled by an autonomous vehicle and may be implemented with the following descriptions in Sections 2 to 6. The autonomous driving system1000may detect an available parking space to assist an occupant in parking the vehicle. FIG.10is a diagram showing a situation in which an autonomous driving system detects an available parking space according to an embodiment. Referring toFIG.10, a first vehicle113equipped with the autonomous driving system1000may detect an available parking space through the at least one sensor1300. Also, the controller1100included in the autonomous driving system1000may detect a parking line10on the basis of sensor data acquired from the at least one sensor1300. For example, the controller1100may acquire intensity information associated with the reflectance of an object through the LiDAR device1310included in the at least one sensor1300. In this case, the controller1100may determine that the object is the parking line10on the basis of the intensity information. Also, the controller1100may detect whether an obstacle is present in a space formed in the detected parking line10. In this case, when no obstacle is present in the space formed in the parking line10, the controller1100may determine that the space is an available parking space. Also, the controller1100may detect an available parking space by detecting a second vehicle114, which has been parked, on the basis of the sensor data. In detail, when data20corresponding to an exterior of the parked second vehicle is included in the sensor data, the controller1100may not determine whether the second vehicle114is present in an available parking position. Also, the controller1100may detect an available parking space on the basis of parked vehicles. In detail, when the space between the parked vehicles is larger than or equal to a certain area, the controller1100may recognize that the space is an available parking space on the basis of the sensor data. 
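As an illustrative sketch only, the availability check for the space formed by the detected parking line10can be expressed as follows: once the boundary of the candidate space has been found (for example, from high-intensity returns of the painted line), the space is treated as available when no point inside it rises above the ground plane. The bounds, the ground-height tolerance, and the function name are assumptions for explanation.

    def is_parking_space_available(points, x_bounds=(0.0, 2.5), y_bounds=(0.0, 5.0), ground_height=0.1):
        # points: iterable of (x, y, z, intensity) tuples near the candidate space.
        # The space is treated as available when no point inside its boundary
        # lies above the assumed ground-height tolerance.
        for x, y, z, _intensity in points:
            inside = x_bounds[0] < x < x_bounds[1] and y_bounds[0] < y < y_bounds[1]
            if inside and z > ground_height:
                return False  # an obstacle occupies the space formed by the parking line
        return True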
Also, when the space between the parked vehicles is larger than or equal to a certain area even though the parking line10is not detected, the controller1100may recognize that the space is an available parking space. Also, the autonomous driving system1000is not limited to the above-described method and may detect an available parking space on the basis of the parking line10and the parked vehicle. Also, the autonomous driving system1000may generate a map each time the first vehicle115is parked and pulled out. FIG.11is a diagram showing a process of generating, by an autonomous driving system, a map for pulling out a vehicle according to an embodiment. Referring toFIG.11, the autonomous driving system1000may form a map each time the first vehicle115is parked and pulled out on the basis of sensor data acquired through at least one sensor placed in the first vehicle115. In this case, the autonomous driving system1000may acquire sensor data regarding surroundings during a first drive and may generate a path for a second drive on the basis of the sensor data acquired during the first driving. In detail, the controller1100included in the autonomous driving system1000may generate a map of the surroundings of the first vehicle115on the basis of the sensor data acquired during the first driving. Also, the controller1100may generate a path for the second driving on the basis of the map. Also, when the at least one sensor1300is the LiDAR device1310, the controller1100may generate a 3D map on the basis of data acquired through the LiDAR device1310. In detail, the controller1100may generate the 3D map on the basis of surrounding position information acquired from the LiDAR device1310during the first driving of the first vehicle115. Also, the controller1100may generate a path for the second driving on the basis of the 3D map. Also, the autonomous driving system1000may include an autonomous parking system. The autonomous parking system may utilize the sensor data. Also, the autonomous parking system may be activated by an input from an occupant. Also, the autonomous parking system may be activated when a parking situation is recognized. In an embodiment, the autonomous driving system1000may implement an autonomous parking system when a vehicle is located in a specific space. For example, when a vehicle is located in a specific space and an occupant gets out of the vehicle, the autonomous driving system1000may recognize a situation in which the vehicle is being parked and thus implement an autonomous parking system. 1.4.2. Autonomous Driving System Using Sensor and Communication 1.4.2.1. Overview The autonomous driving system1000may be implemented using sensor data acquired from the at least one sensor1300and sharing data received from other devices. The autonomous driving system1000may communicate with other devices through the at least one communication module1200to share data. Also, the autonomous driving system1000may use a communication system to predetermine a risk factor associated with the driving of the vehicle equipped with the autonomous driving system1000. FIG.12is a diagram illustrating the type of a communication system according to an embodiment. Referring toFIG.12, an autonomous driving system1000may be implemented through various communication systems. 
For example, the communication system may implement at least one V2X system such as a vehicle-to-vehicle (V2V) system, a vehicle-to-infra (V2I) system, a vehicle-to-network (V2N) system, a vehicle-to-pedestrian (V2P) system, a vehicle-to-cloud (V2C) system, and a vehicle-to-device (V2D) system. Also, the autonomous driving system1000may use at least one standardized communication system to communicate with other devices. For example, the autonomous driving system1000may use cellular vehicle-to-everything (C-V2X) and dedicated short-range communication (DSRC) to communicate with other devices, but the present invention is not limited thereto. In this case, the C-V2X may refer to a 3rd Generation Partnership Project (3GPP) standard indicating a technology for performing V2X communication. Also, the DSRC may refer to a one-way or two-way short-range wireless communication channel designed for a set of protocols and standards corresponding to vehicles. 1.4.2.1.1. V2V A first vehicle equipped with the autonomous driving system1000may communicate with other devices using at least one communication module1200. Referring toFIG.12again, the first vehicle may communicate with other vehicles through a V2V system to share data. Also, the V2V system may be implemented to transmit or receive sensor data acquired from at least one sensor1300included in the first vehicle to or from other vehicles. Also, the V2V system may be implemented to transmit or receive information other than the sensor data. For example, the V2V system may be implemented to transmit a destination of the first vehicle, the number of passengers in the first vehicle, the speed of the first vehicle, and the like, but the present invention is not limited thereto. Also, for the safety of occupants and passengers, the first vehicle may use the V2V system. For example, the first vehicle may receive information regarding a dangerous object present on the path of the first vehicle from other vehicles through the V2V system. 1.4.2.1.2. V2I A first vehicle equipped with the autonomous driving system1000may communicate with an infrastructure device through at least one communication module1200. In this case, the infrastructure device may refer to basic facilities and systems that form an industrial or transportation base. For example, the infrastructure device may include traffic lights, speed cameras, road signs, etc., but the present invention is not limited thereto. Also, the infrastructure device may include at least one sensor. In detail, the infrastructure device may include the at least one sensor in order to detect a dangerous situation that may happen to vehicles and pedestrians on roads. For example, the at least one sensor may include a LiDAR device, a camera device, etc., but the present invention is not limited thereto. Referring toFIG.12again, the first vehicle may communicate with the infrastructure device through a V2I system to share data. Here, the infrastructure device may be controlled by an external server or may perform communication to share data without the control of an external server. Also, the V2I system may be implemented to transmit sensor data acquired from at least one sensor included in the first vehicle to the infrastructure device. Also, the V2I system may be implemented to transmit sensor data acquired from at least one sensor included in the infrastructure device to the first vehicle. Also, the V2I system may be implemented to transmit information other than the sensor data. 
In detail, the infrastructure device may transmit regulation information for a space where the infrastructure device is placed to the first vehicle. For example, the infrastructure device may transmit information indicating that the space where the infrastructure device is placed is a child protection zone to the first vehicle. Also, when the first vehicle enters a specific zone, the first vehicle may receive sensor data from the infrastructure device. For example, when the first vehicle enters a child protection zone, the first vehicle may receive sensor data acquired from an infrastructure device installed in the child protection zone through the V2I system. 1.4.2.1.3. V2C The first vehicle equipped with the autonomous driving system1000may communicate with a server through the communication module1200. In this case, the server may be included in a computer of an institution for controlling road conditions. For example, the server may include a cloud of a road control system, but the present invention is not limited thereto. Also, the server may include a local server associated with a predetermined region, a global server for controlling a plurality of local servers, etc., but the present invention is not limited thereto. Referring toFIG.12again, the first vehicle may communicate with the server through the V2C system to share data. Also, the V2C system may be implemented to transmit sensor data acquired from at least one sensor included in the first vehicle to the server. Also, the V2C system may be implemented to transmit information other than the sensor data. Also, the first vehicle may receive information regarding an accident from the server. For example, the server may transmit information indicating that a traffic accident occurred on a path of the first vehicle to the first vehicle through the V2C system, but the present invention is not limited thereto. Hereinafter, specific embodiments of the autonomous driving system using sensors and communication will be described in detail. 1.4.2.2. Autonomous Driving System for Safety—Based on Sensors and Communication The autonomous driving system1000may use sensor data and communication-based sharing data in order to protect the safety of pedestrians and occupants of a vehicle equipped with the autonomous driving system1000. In this case, it will be appreciated that various embodiments described in Section 1.4.1.2 in which sensor data is used may be applied to an autonomous driving system using sensor data and communication-based sharing data. The autonomous driving system1000may detect the occurrence of a traffic event through sensors and communication. FIG.13is a diagram showing a situation in which a traffic event has occurred in front of a vehicle equipped with an autonomous driving system according to an embodiment. Referring toFIG.13, when a first vehicle116acquires sensor data regarding a traffic event having occurred during driving, the first vehicle116may transmit the sensor data to a server400or vehicles117and118associated with the traffic event. Also, when a traffic event has occurred due to a collision between the second vehicle117and the third vehicle118, the vehicles117and118associated with the traffic event may transmit information indicating that the traffic event has occurred to the server400. In this case, the server400may transmit the information indicating that the traffic event has occurred to the first vehicle116located near where the traffic event has occurred. 
Also, the autonomous driving system1000may recognize that a vehicle stopped in front of a vehicle equipped with the autonomous driving system is a shared vehicle through communication and may acquire information regarding the shared vehicle through communication with the shared vehicle. For example, a taxi may interfere with the passage of the vehicle while a passenger gets out of the taxi, and thus the taxi may transmit information related to the current situation to the vehicle. For example, the taxi may transmit a message indicating that a passenger is getting out of the vehicle. In this case, the vehicle may determine that the taxi is not an illegally parked vehicle through sensor data acquired from at least one sensor and sharing data transmitted from the taxi. Also, the communication entity is not limited to taxis and may include various types of shared vehicles such as buses. 1.4.2.3. Autonomous Driving System for Convenience—Based on Sensors and Communication The autonomous driving system1000may use sensor data and communication-based sharing data in order to provide convenience to pedestrians and occupants of a vehicle equipped with the autonomous driving system1000. In this case, it will be appreciated that various embodiments described in Section 1.4.1.3 in which sensor data is used may be applied to an autonomous driving system using sensor data and communication-based sharing data. Also, the autonomous driving system may acquire information regarding an available parking space in a parking lot through sensors and communication. FIG.14is a diagram showing a situation in which a vehicle equipped with an autonomous driving system recognizes an available parking space through communication with an infrastructure device in a parking lot according to an embodiment. Referring toFIG.14, at least one infrastructure device700may be placed in a parking lot. The at least one infrastructure device700may include at least one sensor in order to acquire information regarding an available parking space in the parking lot. Also, the infrastructure device700may store information regarding an available parking space included in sensor data acquired through a sensor. Also, when a first vehicle119enters the parking lot, the infrastructure device700may transmit the stored information regarding the available parking space to the first vehicle119. In this case, a controller of the first vehicle119may move the first vehicle to the available parking space on the basis of the information regarding the available parking space. In this process, the controller may additionally detect a parking space using sensor data obtained through a sensor placed in the first vehicle119. Also, in order to determine whether the first vehicle119can actually park in the available parking space at which the first vehicle119has arrived according to the information regarding the available parking space, the autonomous driving system1000may acquire sensor data regarding the available parking space using at least one sensor placed in the first vehicle119. Also, when a second vehicle120, which has been parked, exits the parking lot, the second vehicle120may transmit information regarding the space where the second vehicle120was parked to the infrastructure device700. In this case, the infrastructure device700may recognize the available parking space by receiving the information regarding the space where the second vehicle120had been parked and storing the received information. 
Hereinafter, the sensor data and the sharing data will be described. 2. Sensor Data Used by Autonomous Driving System 2.1. Type of Sensor The autonomous driving system1000may include at least one sensor1300. Referring toFIG.2again, the at least one sensor1300may include various types of sensors. For example, the at least one sensor1300may include at least one LiDAR device1310, at least one camera device1320, at least one radar device1330, at least one ultrasonic sensor1340, etc., but the present invention is not limited thereto. 2.2. Sensor Data The autonomous driving system1000may acquire sensor data through the at least one sensor1300. In this case, the sensor data may include raw data acquirable from the at least one sensor1300or data obtained by processing the raw data. Also, the sensor data may include information related to an object detected by the at least one sensor1300. For example, the sensor data may include position information of the object, distance information of the object, shape and/or color information of the object, property information of the object, etc., but the present invention is not limited thereto. Also, the sensor data may include data regarding a single point or data regarding a plurality of points, which is acquired from the at least one sensor1300, or processed data which is obtained by processing the data regarding the single point or the data regarding the plurality of points. Hereinafter, as a specific example, the sensor data may include a set of point data, point data, a subset of point data, property data, etc. However, the present invention is not limited thereto, and this will be described in detail. FIG.15is a diagram showing a situation in which a vehicle equipped with an autonomous driving system acquires sensor data regarding an environment around the vehicle through at least one sensor according to an embodiment. For example, when the sensor is the LiDAR device, the sensor data may include point data of each point scanned by the LiDAR device, a set of point data, a subset of point data, property data obtained by processing the subset of point data, or the like, but the present invention is not limited thereto. In this case, the vehicle may detect buildings, vehicles, pedestrians, and the like around the vehicle by using at least one of the point data, the set of point data, the subset of point data, or the property data. For convenience of description, the following description with reference toFIGS.15to84will focus on sensor data of the LiDAR device, but the present invention is not limited thereto. It will be appreciated that sensor data of sensors other than the LiDAR device is applicable toFIGS.15to84. FIG.16is a diagram showing, on a 3D map, sensor data acquired by the LiDAR device placed in the vehicle ofFIG.15. Referring toFIGS.15and16, the controller included in the autonomous driving system may form a 3D point data map on the basis of data acquired from the LiDAR device. In this case, the 3D point data map may refer to a 3D point cloud. Also, the sensor data may include data included in the 3D point data map. Also, the position of the origin of the 3D point data map may correspond to the optical origin of the LiDAR device, but the present invention is not limited thereto. The position of the origin of the 3D point data map may correspond to the position of the center of gravity of the LiDAR device or the position of the center of gravity of the vehicle where the LiDAR device is placed.
FIG.17is a diagram schematically showing sensor data included in the 3D map ofFIG.16in a 2D plane. Referring toFIG.17, sensor data2000may be expressed in a 2D plane. For example, the sensor data may be expressed in the x-z plane, but the present invention is not limited thereto. Also, in the specification, the sensor data may be expressed in the 2D plane, but this is for schematically representing data on a 3D map. Also, the sensor data2000may be expressed in the form of a data sheet. A plurality of pieces of information included in the sensor data2000may be expressed as values in the data sheet. Hereinafter, the sensor data and the meanings of various forms of data included in the sensor data will be described. 2.2.1. Point Data The sensor data2000may include point data. In this case, the point data may refer to data that can be primarily acquired when the at least one sensor1300detects an object. Also, the point data may refer to raw data which is original information acquired from the at least one sensor and which is not processed. For example, when the sensor is a LiDAR device, the point data may correspond to one point included in a point cloud acquired from the LiDAR device. FIG.18is a diagram illustrating point data acquired from at least one LiDAR device included in an autonomous driving system according to an embodiment. Referring toFIG.18, the LiDAR device may acquire point data2001by scanning at least a portion of an object, and the point data2001may include position coordinates (x, y, z). Also, in some embodiments, the point data2001may further include an intensity value I. In this case, the position coordinates (x, y, z) may be generated based on information regarding a distance to at least a portion of the object, and the information is acquired by the LiDAR device. In detail, the LiDAR device may compute a distance to at least a portion of the object on the basis of a time point at which a laser beam is emitted and a time point at which a reflected laser beam is received. Also, based on the distance, the LiDAR device may generate position coordinates of at least a portion of the object in a Cartesian coordinate system based on the optical origin of the LiDAR device. Also, the intensity value I may be generated on the basis of the reflectance of at least a portion of the object acquired by the LiDAR device. In detail, the magnitude (or strength) of a signal received from the LiDAR device varies depending on the reflectance even if the object is at the same distance. Thus, the LiDAR device may generate an intensity value of at least a portion of the object on the basis of the magnitude (or strength) of the received signal. Also, the number of pieces of point data2001may correspond to the number of laser beams emitted from the LiDAR device, scattered by an object, and then received by the LiDAR device. More specifically, it is assumed that a laser beam emitted from the LiDAR device is scattered by at least a portion of the object and is received by the LiDAR device. Each time the laser beam is received, the LiDAR device may process a signal corresponding to the received laser beam to generate the point data2001. However, the present invention is not limited thereto, and when the sensor is a camera device, the sensor data2000may include the point data2001. In this case, the point data2001may correspond to one pixel acquired from the camera device. In detail, the point data2001may correspond to one pixel acquired through an RGB sensor included in the camera device. 
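To make the description of the point data2001concrete, the sketch below shows one possible representation of a single piece of LiDAR point data together with the time-of-flight relation used to obtain its position coordinates; the class name, the field names, and the helper function are assumptions for explanation and are not defined by the embodiments.

    from dataclasses import dataclass

    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    @dataclass
    class PointData:
        x: float          # position coordinates generated from the measured distance
        y: float
        z: float
        intensity: float  # derived from the magnitude of the received signal

    def distance_from_time_of_flight(t_emit, t_receive):
        # Half of the round-trip time of flight multiplied by the speed of light
        # gives the distance to the portion of the object that scattered the beam.
        return (t_receive - t_emit) * SPEED_OF_LIGHT / 2.0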
For example, when a plurality of pixels are present in a light-receiving unit of a camera, the point data2001may be generated for each pixel, and the point data2001may include pixel values (e.g., RGB color values in the case of an RGB sensor) of the pixels and position information of an object corresponding to the position of the pixels. Also, the point data2001may include shape and/or color information acquired from the camera device. However, the present invention is not limited thereto, and when the sensor is a radar device, the point data2001may correspond to one point acquired from the radar device. In detail, the point data2001may include position coordinates acquired from the radar device. For example, in the case of a radar, a plurality of Tx antennas transmit a plurality of radio waves, and a plurality of Rx antennas receive a plurality of radio waves which are scattered by an object and then returned. In this case, the radar may acquire position information of the object with respect to the plurality of received radio waves, and the point data2001may indicate the position information of the object with respect to one of the plurality of radio waves. 2.2.2. Set of Point Data The sensor data may include a set of point data2100. In this case, the set of point data2100may include multiple pieces of point data2001. Also, the set of point data2100may be included in one frame. In some embodiments, the set of point data2100may be included in multiple frames. For example, when the sensor is a LiDAR device, the sensor data may include the set of point data2100, and the set of point data2100may correspond to a point cloud of one frame acquired from the LiDAR device. FIG.19is a diagram illustrating a set of point data acquired from the LiDAR device included in the vehicle ofFIG.16. Referring toFIG.19, the set of point data2100shown inFIG.19may be acquired from the LiDAR device. Also, the set of point data2100may refer to a plurality of pieces of point data that are generated when the LiDAR device scans the field of view of the LiDAR device once. For example, when the horizontal field of view of the LiDAR device is 180 degrees, the set of point data2100may refer to all point data acquired when the LiDAR device scans 180 degrees once. Also, the set of point data2100may include the position coordinates (x, y, z) and intensity value I of an object present in the field of view of the LiDAR device. Also, the position coordinates (x, y, z) and intensity value I of the point data2001included in the set of point data2100may be expressed in a data sheet. Also, the set of point data2100may include noise data. The noise data may be generated by an external environment regardless of the object located in the field of view of the LiDAR device. For example, the noise data may include noise due to interference between LiDARs, noise due to ambient light such as sunlight, noise due to an object outside a measurable range, etc., but the present invention is not limited thereto. Also, the set of point data2100may include background information. The background information may refer to at least one piece of point data not related to an object among a plurality of pieces of point data included in the set of point data2100. Also, the background information may be prestored in the autonomous driving system including the LiDAR device. 
For example, the background information may include information on an immovable object such as a building (or a stationary object located at a fixed position) and may be prestored in the autonomous driving system including the LiDAR device in the form of a map. However, the present invention is not limited thereto, and even when the sensor is a camera device, the sensor data2000may include the set of point data2100. In this case, the set of point data2100may correspond to one frame acquired from the camera device. Also, the set of point data2100may correspond to all pixels which are acquired from the camera device and which are in the field of view of the camera device. In detail, the camera device may generate a set of point data2100of one frame representing shape and/or color information of objects present in the field of view of the camera device by photographing the surroundings. For example, when a plurality of pixels are present in a light-receiving unit of a camera, the set of point data2100may include a plurality of pieces of point data2001generated for each of the plurality of pixels. However, the present invention is not limited thereto, and even when the sensor is a radar device, the sensor data2000may include the set of point data2100. In this case, the set of point data2100may include the position coordinates of all the objects which are acquired from the radar device and which are in the field of view of the radar device. For example, the set of point data2100may include a plurality of pieces of point data corresponding to a plurality of received radio waves. 2.2.3. Subset of Point Data Referring toFIG.19again, the sensor data2000may include a subset of point data2110. In this case, the subset of point data2110may refer to a plurality of pieces of point data that represent the same object. For example, when the set of point data2100includes a plurality of pieces of point data that represent a vehicle, the plurality of pieces of point data may constitute one subset of point data2110. Also, the subset of point data2110may be included in the set of point data2100. Also, the subset of point data2110may refer to at least one object included in the set of point data2100or at least a portion of the object. In detail, the subset of point data2110may refer to a plurality of pieces of point data that represent a first object among the plurality of pieces of point data included in the set of point data2100. Also, the subset of point data2110may be acquired by clustering at least one piece of point data related to a dynamic object among the plurality of pieces of point data included in the set of point data2100. In detail, the subset of point data2110may be acquired by detecting an immovable object and a dynamic object (or a movable object) included in the set of point data2100using the background information and then by grouping data related to one object into a certain cluster. Also, the subset of point data2110may be generated using machine learning. For example, the controller1100may determine that at least some of the plurality of pieces of data included in the sensor data2000represent the same object on the basis of machine learning performed on various objects. Also, the subset of point data2110may be generated by segmenting the set of point data2100. In this case, the controller1100may segment the set of point data2100in units of a predetermined segment.
Also, at least one segment unit of the segmented set of point data may refer to at least a portion of the first object included in the set of point data2100. Also, a plurality of segment units representing the first object may correspond to the subset of point data2110. For example, when the sensor is a LiDAR device, the subset of point data2110may correspond to a plurality of pieces of point data related to the first object included in the set of point data2100acquired from the LiDAR device. FIG.20is a diagram illustrating a subset of point data acquired from at least one LiDAR device included in an autonomous driving system according to an embodiment. Referring toFIG.20, the set of point data2100may include a plurality of subsets of point data2110,2120,2130,2140,2150, and2160. The plurality of subsets of point data2110,2120,2130,2140,2150, and2160may include a plurality of pieces of point data representing at least a portion of an object. Here, the controller1100may determine that the plurality of pieces of point data2001represent at least a portion of the same object on the basis of the position coordinates (x, y, z) and the intensity value I of the plurality of pieces of point data2001. Accordingly, the controller1100may define the plurality of pieces of point data2001as a subset of point data and generate property data of the object on the basis of the subset of point data. For example, a first subset of point data2110may represent at least a portion of “human,” a second subset of point data2120may represent at least a portion of “vehicle,” a third subset of point data2130may represent at least a portion of “center line,” a fourth subset of point data2140may represent at least a portion of “road shoulder line,” a fifth subset of point data2150may represent at least a portion of “lane line,” and a sixth subset of point data2160may represent at least a portion of “building,” but the present invention is not limited thereto. In this case, the first subset of point data2110may refer to at least a portion of the same “human.” In detail, the first subset of point data2110may include the position coordinates (x, y, z) and the intensity values I of the plurality of pieces of point data included in the first subset of point data2110. In this case, the plurality of pieces of point data may constitute one subset of point data representing at least a portion of “human.” 2.2.4. Property Data The sensor data2000may include property data2200. In this case, the property data2200may be determined based on at least one subset of point data2110. In detail, the property data2200may include information regarding various properties, such as type, size, speed, and direction, of an object which are represented by the at least one subset of point data2110. Also, the property data2200may be data obtained by processing at least a portion of the at least one subset of point data2110. For example, when the sensor is a LiDAR device, the sensor data2000may include property data (see reference number2200ofFIG.21), and the property data may be generated based on the subset of point data2110included in the set of point data2100acquired from the LiDAR device. Also, a process of generating the property data2200on the basis of the subset of point data2110included in the set of point data2100may use a point cloud library (PCL) algorithm.
As an example, a first process related to the generation of the property data2200using the PCL algorithm may include operations of preprocessing a set of point data, removing background information, detecting feature (key) points, defining a descriptor, matching the feature points, and estimating the property of an object, but the present invention is not limited thereto. In this case, the operation of preprocessing a set of point data may refer to the processing of the set of point data into a form suitable for the PCL algorithm. In the first process, point data that is included in the set of point data2100and that is not related to the extraction of property data may be removed. For example, the operation of preprocessing data may include operations of removing noise data included in the set of point data2100and re-sampling a plurality of pieces of point data included in the set of point data2100, but the present invention is not limited thereto. Also, through the operation of removing background information, in the first process, the subset of point data2110related to the object may be extracted by removing the background information included in the set of point data2100. Also, through the operation of detecting feature points, in the first process, a feature point suitably representing the shape characteristics of the object may be detected among a plurality of pieces of point data included in the subset of point data2110related to the object, which remains after the background information is removed. Also, through the operation of defining the descriptor, in the first process, a descriptor for describing a characteristic unique to each of the detected feature points may be defined. Also, through the operation of matching the feature points, in the first process, corresponding feature points may be chosen by comparing a descriptor of feature points included in prestored template data related to the object and a descriptor of feature points of the subset of point data2110. Also, through the operation of estimating the property of an object, in the first process, the property data2200may be generated by detecting an object represented by the subset of point data2110using a geometric relationship of the chosen feature points. As another example, a second process related to the generation of the property data2200may include operations of preprocessing data, detecting data regarding an object, clustering the data regarding the object, classifying the clustered data, tracking the object, etc., but the present invention is not limited thereto. In this case, through the operation of detecting data regarding an object, in the second process, a plurality of pieces of point data representing an object among a plurality of pieces of point data included in the set of point data2100may be extracted using prestored background data. Also, through the operation of clustering the data regarding the object, in the second process, a subset of point data2110may be extracted by clustering at least one piece of point data representing one object among the plurality of pieces of point data. Also, through the process of classifying the clustered data, in the second process, the class information of the subset of point data2110may be classified or determined using a machine learning model or a deep learning module which is pre-learned. Also, through the operation of tracking the object, in the second process, the property data2200may be generated based on the subset of point data2110. 
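Taken together, the operations of the second process can be summarized in a short skeleton. In the sketch below each stage is passed in as a callable so that the fragment stays independent of any particular algorithm; the function and parameter names are assumptions for explanation only.

    def generate_property_data(set_of_point_data, preprocess, remove_background, cluster, classify, track):
        # Skeleton of the second process: preprocess the set of point data,
        # keep only data representing objects, cluster it into subsets of point
        # data, classify each subset, and track the objects over frames.
        points = preprocess(set_of_point_data)        # e.g., noise removal, re-sampling
        object_points = remove_background(points)     # remove prestored background data
        property_data = []
        for subset in cluster(object_points):         # one subset of point data per object
            class_info = classify(subset)             # pre-learned classification model
            property_data.append(track(subset, class_info))
        return property_data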
For example, a controller that performs the second process may display the position of the object using the center position coordinates and volume of the plurality of subsets of point data2110. Accordingly, the controller may estimate the movement direction and speed of the object by defining a correspondence relationship based on information on the similarity in distance and shape between a plurality of subsets of point data acquired from successive frames and then by tracking the object. FIG.21is a diagram illustrating property data generated from a subset of point data acquired from a LiDAR device included in an autonomous driving system according to an embodiment. Referring toFIG.21, the property data2200may be generated for each piece of point data2001included in the subset of point data2110. In detail, the property data2200may be assigned to each piece of point data2001included in the subset of point data representing at least a portion of one object. For example, the property data2200of the subset of point data may be generated for each piece of point data2001included in the subset of point data2110representing at least a portion of a human. In this case, the property data2200may include a class information, a center position information, a size information, or the like of the human, but the present invention is not limited thereto. A plurality of pieces of information included in the property data will be described in detail below. FIG.22is a diagram showing another example of property data ofFIG.21. Referring toFIG.22, the property data2200may be generated in common for a plurality of pieces of point data included in the subset of point data2110. That is, one piece of property data2200may be generated for one subset of point data2110representing at least a portion of one object. For example, when the object is a human, one piece of property data may be generated for a plurality of pieces of point data included in a subset of point data representing at least a portion of the human. FIG.23is a diagram illustrating a plurality of pieces of information included in property data according to an embodiment. Referring toFIG.23, the property data2200may include a class information2210, a center position information2220, a size information2230, a shape information2240, a movement information2250, an identification information2260, or the like of the object which are represented by the subset of point data2110, but the present invention is not limited thereto. Hereinafter, a plurality of pieces of information included in the property data2200will be described in detail. The property data2200may include a class information2210indicating the class of the object represented by the subset of point data2110. FIG.24is a diagram illustrating a class information included in property data according to an embodiment. Referring toFIG.24, the class information2210may include a class related to the type of the object, a class related to the type of a portion of the object, a class related to a situation of a region including the object, etc., but the present invention is not limited thereto. In this case, the class information2210may be associated with the type of the object represented by the subset of point data2110. In this case, the class information related to the type of the object may be determined depending on the kind of the object. For example, when the object is a human, the class information of the subset of point data may be determined as “human,” but the present invention is not limited thereto.
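The pieces of information listed with reference to FIG.23could be grouped into a single record. The following dataclass is only one assumed way of organizing property data for illustration; the field names and types are not defined by the embodiments.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class PropertyData:
        class_info: str                                            # e.g., "human", "vehicle"
        center_position: Tuple[float, float, float]                # (xo, yo, zo)
        size: Tuple[float, float, float]                           # extents of the subset of point data
        shape: Optional[List[Tuple[float, float, float]]] = None   # skeleton or template points
        movement: Optional[Tuple[float, float]] = None             # (speed, heading), if tracked
        identification: Optional[str] = None                       # identifier assigned while tracking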
The class information2210may be determined as a lower class of the human. As a specific example, when the object is a male, the class information2210of the subset of point data may be determined as “male.” Also, the lower class of the human may include “female,” “child,” “the elderly,” “pedestrian,” etc., but the present invention is not limited thereto. Also, the class information2210may be associated with the type of a portion of the object. In detail, for a class related to the type of a portion of the object, when the set of point data2100includes the subset of point data2110representing a portion of the object, the controller1100may determine that the subset of point data2110represents a portion of the object. For example, when the subset of point data2110represents a human arm, the class information2210of the subset of point data may be determined as “human” or may be determined as “human arm.” Also, the class information2210may be associated with the situation of the region including the object. In this case, the class related to the situation of the region including the object may be determined based on a plurality of subsets of point data. In detail, the controller1100may determine the class information2210of the object on the basis of the subset of point data representing at least a portion of the object, and the controller1100may determine class information related to the situation of the region including the object in consideration of both of the subset of point data2110and another plurality of subsets of point data. As a specific example, when a LiDAR device acquires a plurality of subsets of point data representing at least a portion of a worker and an excavator that are working at a construction site, the controller1100may determine that the class information of the worker and the excavator is “construction site” on the basis of the plurality of subsets of point data. Also, the class information2210may be determined based on a lookup table prestored in the autonomous driving system1000. More specifically, the autonomous driving system1000may generate and store a lookup table that matches objects to the class information2210of the objects. In this case, the controller1100may determine the class information2210of the subset of point data on the basis of the lookup table. In this case, the lookup table may be used to determine a class related to the situation of the region including the object. For example, the lookup table may match the class information of a plurality of objects to a class related to a situation of a region including the plurality of objects. As a specific example, when the class information of the plurality of objects includes at least some of “worker,” “excavator,” and “construction sign,” the lookup table may match the plurality of pieces of class information to “construction site,” which is a class related to the situation of the region including the plurality of objects. In this case, the controller may determine that the class of the plurality of objects is “construction site” using the lookup table. Also, the class information2210may be determined using machine learning. In detail, the autonomous driving system1000may pre-learn a correspondence relationship by repeatedly matching the subset of point data2110to an object represented by the subset of point data and may determine the class information2210of the object on the basis of the correspondence relationship. Also, the class information2210may include at least one class.
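As a non-limiting sketch of the lookup-table matching described above, a set of object classes observed in a region may be matched to a class describing the situation of that region. The table entries, the function name, and the matching rule shown here are illustrative assumptions only.

    SITUATION_LOOKUP = {
        frozenset({"worker", "excavator"}): "construction site",
        frozenset({"worker", "construction sign"}): "construction site",
    }

    def situation_class(object_classes):
        # Return a region-situation class when all classes of one lookup entry are present.
        observed = set(object_classes)
        for required, situation in SITUATION_LOOKUP.items():
            if required <= observed:
                return situation
        return None

    print(situation_class(["worker", "excavator", "human"]))  # -> "construction site"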
As an example, the controller may determine the class information2210of the subset of point data as one class (e.g., “human”). Also, as another example, a plurality of classes (e.g., “human” and “construction site”) instead of one class may be included in the class information2210. Also, the class information2210may include a class group including at least one class. Here, the class group may refer to a collection of classes having similar or common characteristics. In this case, the class group may be preset and stored by a controller or a user, but the present invention is not limited thereto. As an example, classes such as “human,” “vehicle registration plate,” and “identity document” have a common characteristic in that the classes are related to personal information and thus may constitute a class group related to the personal information. As another example, classes such as “human” and “vehicle” have a common characteristic in that the classes are related to a movable object and thus may constitute a class group related to the movable object. Also, referring toFIG.23, the property data2200may include a center position information2220of the subset of point data. FIG.25is a diagram illustrating a center position information included in property data according to an embodiment. Referring toFIG.25, the center position information2220may be computed based on a subset of point data2110representing at least a portion of an object included in the set of point data2100. For example, the center position information2220may refer to the position coordinates (x, y, z) and center position coordinates (xo, yo, zo) of each of a plurality of pieces of point data included in the subset of point data2110. In this case, the center position coordinates (xo, yo, zo) may be coordinates indicating the average of the position coordinates (x, y, z) of the plurality of pieces of point data, but a method of computing the center position coordinates (xo, yo, zo) is not limited thereto, and various methods may be used. Also, the center position information2220may be expressed in a coordinate system with at least one reference position as the origin. For example, the reference position may include the position of a LiDAR device configured to acquire point data, the position of an apparatus including the LiDAR device, and the like, and the center position information2220may be expressed in a coordinate system with the reference position as the origin, but the present invention is not limited thereto. The coordinate system and the origin, which serves as a reference, will be described in detail below. Also, referring toFIG.23, the property data2200may include a size information2230of the subset of point data. FIG.26is a diagram illustrating a size information included in property data according to an embodiment. Referring toFIG.26, the size information2230may correspond to the size of an object represented by the subset of point data2110. In this case, the size information2230may be computed based on the subset of point data2110indicating at least a portion of an object included in the set of point data2100. For example, the size information2230may be computed based on a volume that the subset of point data2110occupies in the set of point data2100. In detail, the controller1100may extract a space that the subset of point data2110occupies in the set of point data2100and may compute size information2230of the object by computing the volume of the extracted space.
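The averaging of position coordinates described above may be sketched, for illustration only, as a simple mean over the points of one subset; the numerical values are arbitrary examples and other averaging rules may equally be used.

    import numpy as np

    subset = np.array([[1.0, 2.0, 0.5],
                       [1.2, 2.1, 0.7],
                       [0.9, 1.9, 0.6]])   # (x, y, z) of the points in one subset
    center = subset.mean(axis=0)           # (xo, yo, zo) as the average position
    print(center)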
Also, the size information2230may be computed based on position information of the plurality of pieces of point data included in the subset of point data2110. In detail, since the plurality of pieces of point data represent the surface of the object, the size information2230may be acquired by computing the volume of the object using the position information of the point data representing the surface of the object. Also, the size information2230may be computed based on the center position information2220and the subset of point data2110. For example, the size information2230may be generated by computing the volume of a rectangular parallelepiped shape having a center at the center position coordinates (xo, yo, zo) included in the center position information2220and having a width, a length, and a height corresponding to the width, length and height of the subset of point data2110. It will be appreciated that the size information2230may be computed by computing the volume of various shapes such as not only a rectangular parallelepiped but also a cube, a polyhedron, a sphere, and an ellipse. Also, referring toFIG.23, the property data2200may include a shape information2240of the subset of point data. In this case, the shape information2240may indicate the shape of the object represented by the subset of point data2110. Here, the shape of the object may include the actual shape of the object and may also include a processed shape that is expressed by processing the shape of the object. Here, the processed shape may include a similar shape that is expressed as being similar to the actual shape of the object and an arbitrary shape that is different from the actual shape of the object but indicates the presence of the object. For example, the shape information2240may include a template information2241in which the object is represented using a predetermined shape when representing the arbitrary shape and may include a skeleton information2242in which the object is represented using a predetermined number of points or less when representing the similar shape, but the present invention is not limited thereto. FIG.27is a diagram illustrating a template information of shape information included in property data according to an embodiment. Referring toFIG.27, the template information2241may represent an object represented by the subset of point data2110using a predetermined shape. In detail, the template information2241may indicate a predetermined shape corresponding to the class information2210on the basis of the class information of the subset of point data. For example, when the class information2210of the subset of point data is related to a human, the template information2241may correspond to a predetermined shape having a human shape, but the present invention is not limited thereto. Also, the template information2241may be prestored in the autonomous driving system1000. In detail, the autonomous driving system1000may prestore the template information2241corresponding to the class information2210of the object or acquire from an external server. FIG.28is a diagram illustrating a skeleton information of shape information included in property data according to an embodiment. Referring toFIG.28, the skeleton information2242may represent an object represented by the subset of point data2110using a predetermined number or less of points. 
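As one illustrative, non-limiting way of computing the size information described above, the volume of an axis-aligned rectangular parallelepiped enclosing the subset of point data may be used; the helper name and the example point cloud are assumptions for illustration.

    import numpy as np

    def bounding_box_volume(subset):
        # Width, length, and height of an axis-aligned box enclosing the subset.
        extents = subset.max(axis=0) - subset.min(axis=0)
        return float(np.prod(extents))

    subset = np.random.rand(200, 3) * [0.6, 0.6, 1.8]   # roughly human-sized point cloud
    print(bounding_box_volume(subset))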
In detail, the skeleton information2242may represent the shape of the object using the minimum number of points capable of expressing the shape of the object on the basis of the class information2210of the subset of point data. For example, when the class information of the subset of point data is related to a human, the skeleton information may correspond to a plurality of points corresponding to a human joint, but the present invention is not limited thereto. Also, referring toFIG.23, the property data2200may include a movement information2250of the subset of point data. In this case, the movement information2250may include the movement direction, speed, tracking information, and the like of the object represented by the subset of point data2110, but the present invention is not limited thereto. Also, the movement information2250may be generated by defining a correspondence relationship between the positions of the same object in successive frames. Here, defining the correspondence relationship between the positions of the same object in successive frames means specifying the same object in each of the successive frames, acquiring position information of the specified object, and associating the acquired position information with a position of the specified object with time. For example, the movement information2250may be generated by the controller1100through a predetermined algorithm. The algorithm may include acquiring a first set of point data corresponding to a first frame of at least one sensor, acquiring a second set of point data corresponding to a second frame following the first frame, extracting a first subset of point data representing a first object included in the first set of point data, extracting a second subset of point data representing the first object included in the second set of point data, defining a correspondence relationship between the subsets of point data on the basis of similarity in distance or shape between the first subset of point data and the second subset of point data, and generating a movement direction, speed, and the like of the first object on the basis of position information of the subsets of point data, but the present invention is not limited thereto. Also, by accumulating the movement directions and speeds of the first object which are generated for a plurality of frames, the controller1100may generate tracking information of the first object. Also, referring toFIG.23, the property data2200may include an identification information2260of the subset of point data. In this case, the identification information2260may be generated to distinguish the subset of point data2110from other sets of point data. Also, the identification information2260may be generated to express that a plurality of pieces of point data included in the subset of point data2110represent the same object. In detail, the identification information2260may include a common ID of the subset of point data2110. Also, the ID may be generated for each of a plurality of pieces of point data included in the subset of point data2110. In this case, the ID may be expressed with at least one serial number, but the present invention is not limited thereto. Hereinafter, a method of the autonomous driving system1000controlling a vehicle using the sensor data2000will be described. 2.3. Vehicle Control Using Sensor Data The controller1100included in the vehicle equipped with the autonomous driving system1000may control the vehicle using sensor data acquired from the at least one sensor1300. 
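The frame-to-frame association used to generate the movement information described above may be sketched roughly as follows; the nearest-center matching rule, the time step, and the matching threshold are illustrative assumptions rather than the actual tracking algorithm of the embodiments.

    import numpy as np

    def estimate_movement(centers_t0, centers_t1, dt=0.1, max_match=2.0):
        # Associate each center in frame t0 with the nearest center in frame t1,
        # then derive direction and speed from the displacement over dt seconds.
        movements = []
        for c0 in centers_t0:
            dists = np.linalg.norm(centers_t1 - c0, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= max_match:
                velocity = (centers_t1[j] - c0) / dt
                speed = float(np.linalg.norm(velocity))
                movements.append({"direction": velocity / (speed + 1e-9), "speed": speed})
        return movements

    frame0 = np.array([[0.0, 0.0, 0.0]])
    frame1 = np.array([[0.5, 0.0, 0.0]])
    print(estimate_movement(frame0, frame1))   # about 5 m/s along the +x direction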
For example, the controller1100may match the sensor data to a high-precision map (or a high-definition (HD) map), control the direction and speed of the vehicle, or control the path of the vehicle, but the present invention is not limited thereto. Here, the high-definition map refers to a map in which an immovable object or a dynamic object is shown with high precision (e.g., precision at the level of a centimeter) for driving a vehicle and may be expressed in 2D or 3D. Hereinafter, a specific embodiment of vehicle control using the sensor data will be described. 2.3.1. Matching of Sensor Data to High-Definition Map The controller1100included in the autonomous driving system1000may update a high-definition map by matching sensor data2000to the map. In detail, the controller1100may match position information of at least one object acquired from the at least one sensor1300to a high-definition map1420downloaded from the outside. Here, details on how to generate the high-definition map1420have been described in Section 1.3.5, and thus will be omitted here. FIG.29is a diagram showing that an autonomous driving system matches a subset of point data acquired from a sensor to a high-definition map according to an embodiment. Referring toFIG.29, the controller1100may match a plurality of subsets of point data2110and2120acquired from the at least one sensor1300to the high-definition map1420and then display the matching result. In detail, the controller1100may compare position information included in the plurality of subsets of point data2110and2120to position information of environments surrounding the plurality of subsets of point data in the high-definition map1420, match the plurality of subsets of point data2110and2120to the high-definition map1420, and display the matching result. For example, the controller1100may match a first subset of point data representing at least a portion of a human and a second subset of point data representing at least a portion of a vehicle to the high-definition map1420. FIG.30is a diagram showing that an autonomous driving system matches property data of an object to a high-definition map according to an embodiment. Referring toFIG.30, the controller1100may match a plurality of pieces of property data2201and2202generated based on the plurality of subsets of point data2110and2120to the high-definition map1420and display the matching result. More specifically, the controller1100may generate the plurality of pieces of property data2201and2202without matching the plurality of subsets of point data2110and2120acquired from the at least one sensor1300to the high-definition map1420. In this case, the controller1100may match the plurality of pieces of property data2201and2202to the high-definition map1420and display the matching result. For example, the controller1100may generate first property data2201on the basis of the first subset of point data2110representing at least a portion of a human and generate second property data2202on the basis of the second subset of point data2120representing at least a portion of a vehicle. Here, the first property data2201includes shape information of the human, and the second property data2202includes shape information of the vehicle. Thus, the controller1100may match the plurality of pieces of shape information to the high-definition map1420and display the matching result.
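A minimal, illustrative sketch of matching property data to a high-definition map is given below: an object's center position in the sensor frame is transformed into map coordinates using the vehicle pose (assumed to be known here) and recorded in a dynamic layer of the map. The names and values are assumptions, not the embodiments' actual map interface.

    import numpy as np

    def to_map_frame(center_sensor, vehicle_position_xy, vehicle_yaw):
        # Rotate and translate a center position from the sensor frame into map coordinates.
        c, s = np.cos(vehicle_yaw), np.sin(vehicle_yaw)
        rotation = np.array([[c, -s], [s, c]])
        return rotation @ np.asarray(center_sensor[:2]) + np.asarray(vehicle_position_xy)

    dynamic_layer = []                                  # objects matched onto the map
    center_in_sensor = (5.0, 1.0, 0.0)                  # from property data
    dynamic_layer.append({"class": "human",
                          "position": to_map_frame(center_in_sensor, (100.0, 50.0), 0.3)})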
Also, the plurality of pieces of property data2201and2202are not limited to the shape information and may refer to various pieces of information included in the property data such as center position information and size information. Also, the controller may control a vehicle using the high-definition map1420to which the plurality of subsets of point data2110and2120or the plurality of pieces of property data2201and2202are matched. For example, the controller may determine whether an obstacle is present on the path of the vehicle on the basis of the high-definition map1420and may control the speed, direction, or path of the vehicle according to the determination. 2.3.2. Control of Direction and Speed of Vehicle Also, the controller1100included in the autonomous driving system1000may control the direction and speed of the vehicle equipped with the autonomous driving system1000using the sensor data2000. In detail, when an obstacle is found on the path of the vehicle through the at least one sensor, the controller1100may control the direction and speed of the vehicle in order to avoid the corresponding obstacle. For example, when a pedestrian is detected on the path of the vehicle, the controller1100may stop the vehicle or control a steering device to change the direction of the vehicle in order to avoid the pedestrian. 2.3.3. Path Control for Vehicle Also, the controller1100included in the autonomous driving system1000may control the path of the vehicle using the sensor data2000. FIG.31is a diagram showing a situation in which an autonomous driving system changes a path to avoid an obstacle obstructing the driving of a vehicle according to an embodiment. Referring toFIG.31, when the movement of a pedestrian is detected on the driving path of a vehicle121equipped with the autonomous driving system1000, the controller1100may change the path of the vehicle121in order to avoid the pedestrian. In detail, the controller1100may stop the vehicle in order to avoid a collision between the vehicle121and the pedestrian. However, the present invention is not limited thereto, and the controller1100may modify the path of the vehicle so that the vehicle can travel away from the pedestrian. Vehicle path planning will be described in detail below (in Section 5.2.2.2). 3. Data Sharing System A data sharing system according to an embodiment may include a first device and a second device, each of which includes a communication module. Also, the first device may share data with the second device. In this case, the type of sharing data is not limited and may include sensor data. For example, a vehicle equipped with an autonomous driving system may share data with other devices using the data sharing system in order to avoid a risk that may occur during the driving of the vehicle. 3.1. Data Sharing Entity A device including at least one communication module may be a data sharing entity. In detail, the data sharing entity may be a transmission entity that transmits data or a reception entity that receives data. Also, the data sharing entity may include a vehicle, an infrastructure device, a server, etc., but the present invention is not limited thereto. Also, the data sharing entity may include a plurality of sensors included in one device or a plurality of sensors included in different devices. FIG.32is a diagram showing a situation in which data is shared between a plurality of devices according to an embodiment. Referring toFIG.32, a plurality of devices100,400, and700may share data with each other. 
In this case, the plurality of devices100,400, and700may include at least one communication module1200to perform communication. In this case, the plurality of devices may include a vehicle100, an infrastructure device700, a server (cloud)400, a mobile device, etc., but the present invention is not limited thereto. For example, the vehicle100may share data with other devices through a V2V system. Also, the vehicle100may share data with the infrastructure device700through a V2I system. Also, the vehicle100may share data with the server400through a V2C system. In this case, the vehicle100may transmit sensor data2000acquired from at least one sensor1300included in the vehicle100to another vehicle, the infrastructure device700, or the server400. Also, the vehicle100may receive sensor data from the other vehicle, the infrastructure device700, or the server400. 3.2. Data Sharing Time Also, data sharing between a plurality of devices each including at least one communication module may be performed at different times depending on the situation. For example, the time of data sharing between the plurality of devices may include a communication start time point, a specific-event occurrence time point, or the like, but the present invention is not limited thereto. As a specific example, the time of data sharing between a first device and a second device may correspond to a start time point of communication between the first device and the second device. In this case, when the distance between the first device and the second device reaches an available communication distance, the first device and the second device may start communication and may share data when the communication is started. As another example, the data sharing between the first device and the second device may be performed when the first device is located within a certain range from the second device. In this case, the certain range may be different from the available communication distance and may be preset by controllers of the first device and the second device or an external server. As still another example, the data sharing between the first device and the second device may be performed when an event related to the first device occurs. In detail, assuming that an accident occurs in relation to the first device, the second device may transmit data related to the accident to the first device upon the occurrence of the accident. As yet another example, the data sharing between the first device and the second device may be performed when the first device receives a data request message from the second device. In detail, the second device may transmit a data request message to the first device, and the first device may transmit data to the second device in response to the request message. As yet another example, the data sharing between the first device and the second device may be performed when the first device gains approval for data transmission from an external server. In detail, the first device may obtain permission for transmission of data related to personal information from an external server before transmitting the data related to the personal information, and the first device may transmit the data to the second device when the external server approves data transmission. As yet another example, the data sharing between the first device and the second device may be performed when the first device enters a specific region.
In detail, when the first device enters a specific regulation region such as a child protection zone, the second device may transmit data related to the specific region to the first device. As yet another example, the data sharing between the first device and the second device may be performed when a user of the first device enters an input related to data sharing. In detail, when the first device receives an input for sharing data with the second device from a user who is in the first device, the first device and the second device may transmit or receive data. Hereinafter, sharing data transmitted or received when data is shared will be described in detail. 3.3. Sharing Data 3.3.1. Definition of Sharing Data In the specification, sharing data3000may be defined as a concept including all sharing data when the data is shared between two or more devices. In this case, a first device may transmit the sharing data3000to a second device. Also, the first device may receive the sharing data3000from the second device. For example, the sharing data3000may include sensor data acquired through a sensor placed in the first device, but the present invention is not limited thereto. 3.3.2. Content of Sharing Data. The content of the sharing data may be understood as a concept including the content or type of data included in the sharing data3000. In other words, the content of the sharing data forms the sharing data3000, and the sharing data3000is determined according to the type of the data included in the content of the sharing data. FIG.33is a diagram showing the content types of sharing data which may be included in the sharing data according to an embodiment. FIG.34is a diagram specifically showing the content of the sharing data ofFIG.33. Referring toFIGS.33and34, the sharing data3000may include various types of data as content. For example, the content of the sharing data may include sensor data2000acquired from at least one sensor. In other words, a controller included in the first device may generate sharing data3000on the basis of the sensor data2000. In this case, the content of the sharing data may include a set of point data3100, point data3101, a subset of point data3110, property data3200, privacy protection data3300, or the like, but the present invention is not limited thereto. In this case, the privacy protection data3300will be described in detail below. Also, the content of the sharing data may include other data including information regarding a data sharing entity. For example, a vehicle including the at least one communication module1200may share the sharing data3000including information regarding the vehicle with other devices. For example, the content of the sharing data may include the other data3400in addition to the sensor data2000, and the other data3400may include the destination, speed, and size of the vehicle, the number of occupants in the vehicle, etc., but the present invention is not limited thereto. 3.4. Processing of Received Sharing Data A device which has received the sharing data3000may generate various pieces of information using the sensor data2000and the sharing data3000. For example, a device which has received the sharing data3000may recognize an object represented by the sensor data2000and the sharing data3000using the sensor data2000and the sharing data3000. FIG.35is a diagram showing a situation in which sensor data is shared between a vehicle and an infrastructure device. 
Referring toFIG.35, a first vehicle122and an infrastructure device700may share sensor data acquired through at least one sensor (e.g., a LiDAR device) each included in the first vehicle122and the infrastructure device700. 3.4.1. Method of Processing Received Sharing Data According to Type Referring toFIG.35again, the infrastructure device700may transmit sharing data3000including sensor data acquired through at least one sensor to the first vehicle122. For example, the infrastructure device700may transmit sharing data including a set of point data or transmit sharing data3000including property data. However, the present invention is not limited thereto, and the content of the sharing data may or may not include both of the set of point data and the property data. In this case, the first vehicle122may process the sharing data3000in different manners depending on the type of content of the sharing data. Hereinafter, embodiments in which the first vehicle122processes the sharing data3000when the infrastructure device700transmits a set of point data and when the infrastructure device700transmits property data will be described. 3.4.1.1. Case of Transmitting Set of Point Data FIG.36is a diagram illustrating a situation in which a set of point data is included in the content of sharing data according to an embodiment. Referring toFIG.36, the infrastructure device700may transmit sharing data3000including a first set of point data3100acquired from a sensor to the first vehicle122. In this case, the first set of point data3100may include a first subset of point data3110representing at least a portion of a second vehicle123and a second subset of point data3120representing at least a portion of a pedestrian800. Also, the first vehicle122may acquire a second set of point data2100through at least one sensor. In this case, the second set of point data2100may include a third subset of point data2110representing at least a portion of the second vehicle123. Also, the pedestrian800who is located in the field of view of the sensor of the first vehicle122is covered by the second vehicle123, and thus the second set of point data2100may not include a subset of point data representing at least a portion of the pedestrian800. Also, through the data sharing system according to an embodiment, the first vehicle122may acquire information regarding an object that is not included in the sensor data. For example, when the first vehicle122cannot acquire sensor data regarding the pedestrian800through at least one sensor, the first vehicle122cannot recognize the pedestrian800, which may cause an unexpected accident related to the first vehicle122. In order to prevent the above situation, the infrastructure device700may share sensor data related to the pedestrian800, which cannot be acquired by the first vehicle122, with the first vehicle122. FIG.37is a diagram illustrating a method of processing, by a first vehicle, a shared first set of point data and a second set of point data according to an embodiment. Referring toFIG.37, a controller of the first vehicle122may recognize at least one object included in the field of view of a sensor of the first vehicle122using a second set of point data2100and a shared first set of point data3100. In detail, a controller1100included in the first vehicle122may generate third property data2201on the basis of a third subset of point data2110included in the second set of point data2100. 
Here, the property data2201may include a class information, a center position information, a size information, etc. of the second vehicle123which is represented by the third subset of point data2110, but the present invention is not limited thereto. Also, the controller1100may generate a first property data3201and a second property data3202on the basis of the first subset of point data3110and the second subset of point data3120included in the first set of point data received from the infrastructure device700. In this case, the first property data3201may include class information, center position information, size information, etc. of the second vehicle123which is represented by the first subset of point data3110, but the present invention is not limited thereto. Also, the second property data3202may include class information, center position information, size information, etc. of the pedestrian800which is represented by the second subset of point data3120, but the present invention is not limited thereto. FIG.38is a diagram illustrating a method of processing, by a first vehicle, a shared set of point data and a second set of point data according to another embodiment. Referring toFIG.38, the controller of the first vehicle122may generate a third set of point data4100using the second set of point data2100and the shared first set of point data3100to recognize at least one object included in the field of view of the sensor. In this case, the third set of point data4100may be generated by aligning the coordinate system of the shared first set of point data3100with the coordinate system of the second set of point data2100. The coordinate system alignment will be described in detail below (in Section 3.4.2). Also, the third set of point data4100may include a fourth subset of point data4110representing the second vehicle123and a fifth subset of point data4120representing the pedestrian800. Also, the controller1100may generate fourth property data4201on the basis of the fourth subset of point data4110and may generate fifth property data4202on the basis of the fifth subset of point data4120. In this case, the fourth property data4201may include class information, center position information, size information, etc. of the second vehicle123which is represented by the fourth subset of point data4110, but the present invention is not limited thereto. Also, the fifth property data4202may include class information, center position information, size information, etc. of the pedestrian800which is represented by the fifth subset of point data4120, but the present invention is not limited thereto. 3.4.1.2. Case of Receiving Property Data FIG.39is a diagram illustrating a situation in which property data is included in the content of sharing data according to an embodiment. Referring toFIG.39, the infrastructure device700may transmit, to the first vehicle122, sharing data3000including a plurality of pieces of property data3200generated based on a plurality of subsets of point data included in a set of point data acquired from a sensor. When the sharing data3000is received, the controller1100of the first vehicle122may control the first vehicle122using the sharing data3000. A method in which the first vehicle122that has received the sharing data3000including the plurality of pieces of property data3200processes the sharing data3000will be described in detail below (in Section 5). 3.4.1.3.
Case of Receiving Event Occurrence-Related Information Referring toFIG.35again, the server400, the vehicles122and123, and the infrastructure device700, each of which includes a communication module, may share sharing data3000including event occurrence-related information. For example, the server400may transmit event-related information including information indicating that a traffic event has occurred on the path of the first vehicle122to the first vehicle122. A method in which the first vehicle122that has received the sharing data3000including the event occurrence-related information processes the sharing data3000will be described in detail below (in Section 4.2). 3.4.2. Coordinate System Alignment for Shared-Data Matching When a first device receives sharing data from a second device, a controller1100of the first device may match the coordinate system of sensor data acquired from a sensor placed in the first device to the coordinate system of the sharing data in order to match the sensor data to the sharing data (data registration). In this case, the coordinate system may include a Cartesian coordinate system, a polar coordinate system, a cylindrical coordinate system, a homogeneous coordinate system, a curved coordinate system, an inclined coordinate system, a log-polar coordinate system, or the like, but the present invention is not limited thereto. For example, a first device including a first LiDAR device may acquire first sensor data through the first LiDAR device. Also, a second device including a second LiDAR device may acquire second sensor data through the second LiDAR device. In this case, the first LiDAR device may include a first local coordinate system having a first LiDAR-optical origin as the origin. Also, the second LiDAR device may include a second local coordinate system having a second LiDAR-optical origin as the origin. Here, when a controller of the second device transmits sharing data including the second sensor data to the first device, the controller of the first device may set the first local coordinate system as a global coordinate system. Also, after receiving the shared second sensor data shown in the second local coordinate system, the controller may align the second local coordinate system with the global coordinate system in order to perform matching on the second sensor data. Also, in some embodiments, the controller may align the second local coordinate system with the first local coordinate system or align the first local coordinate system with the second local coordinate system. It will be appreciated that the first local coordinate system is the same as the second local coordinate system. Also, in order to align the second local coordinate system with the global coordinate system in a 3D space, the controller may compute a 4×4 transformation matrix with a total of six degrees of freedom (6DOF) by summing a 3D vector for translation and a 3D vector for rotation. Also, the controller may transform the second sensor data shown in the second local coordinate system to the global coordinate system using the transformation matrix. As an example, when the first device is fixed, the alignment between the second local coordinate system and the local coordinate system may be performed by computing a transformation relationship between the coordinate systems. 
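The coordinate alignment described above may be illustrated, without limitation, by applying a 4x4 homogeneous transformation (a rotation combined with a translation, i.e., six degrees of freedom) to points expressed in the second local coordinate system so that they are expressed in the global coordinate system. The rotation and translation values below are arbitrary example inputs, not values defined by the embodiments.

    import numpy as np

    def make_transform(rotation_3x3, translation_3):
        # Assemble a 4x4 homogeneous transform from a rotation and a translation.
        t = np.eye(4)
        t[:3, :3] = rotation_3x3
        t[:3, 3] = translation_3
        return t

    def apply_transform(transform_4x4, points_xyz):
        # Transform (N x 3) points by appending a homogeneous coordinate of 1.
        homogeneous = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
        return (transform_4x4 @ homogeneous.T).T[:, :3]

    yaw = np.deg2rad(10.0)                              # example rotation about the z-axis
    rotation = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                         [np.sin(yaw),  np.cos(yaw), 0.0],
                         [0.0,          0.0,         1.0]])
    second_to_global = make_transform(rotation, np.array([2.0, -1.0, 0.0]))
    second_sensor_points = np.random.rand(100, 3) * 10.0
    aligned_points = apply_transform(second_to_global, second_sensor_points)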
That is, the controller may transform the sensor data shown in the second coordinate system into the global coordinate system using the transformation matrix to show the sensor data in a unified coordinate system. As another example, in order to align the second local coordinate system with the global coordinate system in a 3D space, the controller may use a first object having a unique shape as a criterion for the alignment. For example, the unique shape may include a shape in which three planes meet in the first object, but the present invention is not limited thereto. In detail, the controller may align the second local coordinate system with the global coordinate system on the basis of the position of the unique shape of the first object included in the second sensor data shown in the second local coordinate system and the position of the unique shape of the first object included in the first sensor data shown in the global coordinate system. Specifically, the controller may generate an initial position by matching the position of the first object shown in the global coordinate system and the position of the first object shown in the second local coordinate system. In this case, the initial position may be acquired by initially aligning the positions of the unique shape of the first object included in different pieces of sensor data with the global coordinate system. That is, the initial position alignment process may be understood as the initial coordinate system alignment. Also, when position information (e.g., an initial position) of the first object acquired from different devices and shown in each local coordinate system is incorrect, the controller can improve the position information of the first object through optimization. In this case, the controller may use an iterative closest point (ICP) algorithm to optimize the initial position, but the present invention is not limited thereto. 3.5. Vehicle Control Using Sharing Data A controller included in a vehicle that has received sharing data may control the vehicle using the sharing data and sensor data acquired from a sensor of the vehicle. In this case, it will be appreciated that the embodiment of vehicle control using sensor data, which has been described in Sections 2.3.1 to 2.3.3, can also be implemented using sharing data. In detail, the controller may match the sharing data, which is received from another device, to a high-definition map included in the vehicle and display the matching result. Also, the controller may control the direction and speed of the vehicle using the sharing data received from another device. Also, the controller may control the path of the vehicle using the sharing data received from another device. 4. Selective Sharing of Sensor Data 4.1. Selective Sharing of Sensor Data According to Property Data A data sharing system according to an embodiment may include a first device and a second device. Also, the first device may transmit sharing data to the second device. In this case, the content of the sharing data transmitted by the first device may differ depending on an object recognition result included in sensor data acquired by the first device. Here, the object recognition result may refer to a class information of the object. For example, when the class of the object included in the class information is related to a building, the content of the sharing data may include a subset of point data representing the building.
Also, when the class of the object included in the class information is a class in which personal information needs to be protected, the content of the sharing data may include property data of a subset of point data representing the object. Here, the class in which personal information needs to be protected refers to a class in which personal information may be exposed, such as a human, a vehicle number plate, and an identity document, and the class in which personal information needs to be protected may be predetermined by the controller. That is, the controller of the first device may selectively generate sharing data according to the class information of the object included in the sensor data. 4.1.1. Necessity of Selective Sharing of Sensor Data According to Property Data In the data sharing system according to an embodiment, privacy may be unjustly invaded when data related to personal information is randomly shared between a plurality of devices. For example, when a photo including a person's face is transmitted to another device without any processing, his or her privacy may be invaded because shape and color information related to his or her face is shared. Also, even when the sensor is a LiDAR device, privacy invasion can be an issue. In detail, sensor data acquired from the LiDAR device may include intensity information of an object. Here, since the intensity information includes an intensity value that differs depending on the reflectance of the object, a controller connected to the LiDAR device may identify a human face using the intensity information. Thus, even when sensor data acquired from the LiDAR device is shared between a plurality of devices without being processed, privacy invasion can be an issue. Accordingly, a method of selectively sharing sensor data according to an object class may be required when the sensor data is shared. In a data sharing system according to another embodiment, a device including at least one sensor may selectively share sensor data in order for a device for generating a high-definition map to efficiently update the high-definition map. In an embodiment, a high-definition map that is initially generated may require sensor data for immovable objects such as buildings rather than sensor data for movable objects such as people. Accordingly, a device for transmitting the sensor data may select only data related to immovable objects from the sensor data and transmit the data to the device for generating the high-definition map. In the data sharing system according to still another embodiment, information on immovable objects may be prestored in a high-definition map. In this case, the device for transmitting the sensor data may select only data related to movable objects from the sensor data and transmit the data to the device for generating the high-definition map. In this case, the device for generating the high-definition map may generate a high-definition map including both of an immovable object and a movable object by additionally acquiring data related to the movable objects in addition to the prestored information on the immovable objects. 4.1.2. Various Embodiments of Selective Sharing Method of Sharing Data Including Privacy Protection Data. In order to solve the above-described privacy invasion issue, the sharing data may include privacy protection data. Here, the privacy protection data may be data obtained by processing a personal information identification-related part in a plurality of subsets of point data included in a set of point data.
The privacy protection data will be described in detail below (in Section 4.1.2.1.3). 4.1.2.1. Selective Sharing Method According to an Embodiment A data sharing system according to an embodiment may include a first device and a second device, each of which includes at least one communication module for performing communication. In this case, the first device and the second device may include a vehicle, a server, an infrastructure device, a mobile device, or the like, but the present invention is not limited thereto. FIG.40is a flowchart illustrating a selective sharing method of sensor data according to an embodiment. Referring toFIG.40, a controller of a first device may obtain a set of point data2100through at least one sensor (S5001). In this case, the set of point data2100may correspond to a point cloud acquired through a LiDAR device. Also, the first device may include a vehicle, an infrastructure device, a server, a mobile device, etc., but the present invention is not limited thereto. Also, the controller may determine property data of a plurality of subsets of point data included in the set of point data (S5002). Also, the controller may determine class information of an object represented by each of the plurality of subsets of point data (S5003). Also, the controller may change the content of the sharing data according to whether the class of the object included in the class information is a class in which personal information needs to be protected (S5004). Also, the controller may generate sharing data including privacy protection data when the class of the object included in the class information is a class in which personal information needs to be protected (S5005) and may generate sharing data not including privacy protection data when the class of the object included in the class information is not a class in which personal information needs to be protected (S5006). Also, the controller may transmit the generated sharing data to a second device (S5007). The operations described inFIG.40will be described in detail below on the assumption that the first device is a first vehicle124. 4.1.2.1.1. Acquisition of Sensor Data Referring toFIG.40again, a controller of the first vehicle124may obtain a set of point data through at least one sensor (S5001). In this case, the set of point data may include a plurality of pieces of point data. Also, the set of point data may include a plurality of subsets of point data representing at least a portion of an object. Also, the at least one sensor may include a LiDAR device, a camera device, a radar device, an ultrasonic sensor, or the like, but the present invention is not limited thereto. FIG.41is a diagram showing a situation in which a first vehicle acquires sensor data to selectively share the sensor data according to an embodiment. FIG.42is a diagram schematically representing the sensor data acquired by a first vehicle through a LiDAR device inFIG.41in a 2D plane. Referring toFIGS.41and42, the controller of the first vehicle124may acquire a set of point data2101including a plurality of subsets of point data2111and2112through at least one sensor. In this case, the controller of the first vehicle124may extract the plurality of subsets of point data2111and2112included in the set of point data2101and may determine property data including class information of the plurality of subsets of point data2111and2112(S5002, S5003).
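The branching between S5004 and S5007 described above may be sketched, purely for illustration, as follows; the protected-class set, the data structures, and the simplified privacy-protection step are assumptions and do not define the actual content of the sharing data.

    PROTECTED_CLASSES = {"human", "vehicle number plate", "identity document"}

    def make_privacy_protection_data(subset, prop):
        # Keep only non-identifying property data and drop the raw points (cf. S5005).
        return {"property": {k: v for k, v in prop.items() if k != "intensity"}}

    def build_sharing_data(subsets_with_properties):
        content = []
        for subset, prop in subsets_with_properties:
            if prop["class"] in PROTECTED_CLASSES:
                content.append(make_privacy_protection_data(subset, prop))   # cf. S5005
            else:
                content.append({"points": subset, "property": prop})         # cf. S5006
        return content

    sharing_data = build_sharing_data([
        ([[0.0, 0.0, 0.0]], {"class": "human", "center": (0.0, 0.0, 0.0)}),
        ([[5.0, 1.0, 0.0]], {"class": "vehicle", "center": (5.0, 1.0, 0.0)}),
    ])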
In detail, the controller may extract a first subset of point data2111representing at least a portion of a third vehicle126and a second subset of point data2112representing at least a portion of a pedestrian800from the set of point data2101. Also, the controller may acquire the first subset of point data2111and the second subset of point data2112in the scheme described in Section 2.2.3. FIG.43is a diagram showing class information and property data of a plurality of subsets of point data included in sensor data according to an embodiment. Referring toFIG.43, the controller may determine a plurality of pieces of property data2201and2202corresponding to the plurality of subsets of point data2111and2112on the basis of the plurality of subsets of point data2111and2112, respectively. More specifically, the controller may determine first property data2201including first class information2211on the basis of the first subset of point data2111. In this case, the first class information2211may represent “vehicle.” However, the present invention is not limited thereto, and the first class information2211may be determined as “passenger car,” which is a subclass of “vehicle.” Also, the controller may determine second property data2202including second class information2212on the basis of the second subset of point data2120. In this case, the second class information2212may represent “human.” However, the present invention is not limited thereto, and the second class information2212may be determined as “pedestrian,” which is a subclass of “human.” Also, the controller may acquire a plurality of pieces of property data2201and2202including a plurality of pieces of class information2211and2212in the scheme described in Section 2.2.4. 4.1.2.1.2. Selective Generation and Sharing of Sharing Data Also, the controller may generate sharing data3000in order to transmit the sensor data2000to a second device. In this case, in order not to share data related to privacy, a criterion for determining the content of the sharing data may be required. For example, the sharing data3000may be generated differently depending on class information of a plurality of subsets of point data2111and2112included in the sensor data2000. Here, the controller may determine the content of the sharing data according to whether the class information is related to personal information identification. However, the present invention is not limited thereto, and the controller may determine the content of the sharing data on the basis of the plurality of pieces of property data2201and2202. FIG.44is a diagram showing the content of sharing data transmitted by a first vehicle according to an embodiment. Referring toFIG.44, the controller of the first vehicle124may determine the content of sharing data on the basis of class information of a plurality of objects included in the set of point data2101. Also, the controller may determine the content of the sharing data according to whether the property data is related to personal information identification. In detail, the controller may determine the content of the sharing data according to whether the class of an object included in the class information is a class in which personal information needs to be protected. As an example, the controller may determine the content of sharing data according to whether the class information is related to a human. In this case, the controller may generate sharing data that does not include at least one piece of point data representing a human face. 
Also, the controller may generate sharing data including data obtained by processing the at least one piece of point data representing the human face. As another example, the controller may not add data related to a vehicle number plate among sensor data related to a vehicle to the content of the sharing data. Also, the controller may generate sharing data including data obtained by processing at least one piece of point data representing the number plate of the vehicle. Also, the controller may determine the content of the sharing data according to whether the class information of the object matches at least one class included in a class group related to personal information. In this case, the class group may be a collection of classes including at least one class that satisfies a preset criterion. For example, the class group related to personal information may include a class related to a human, a class related to a number plate, a class related to an identity document, or the like, but the present invention is not limited thereto. For example, when class information of an object acquired through at least one sensor is determined as “human,” the controller may not add a subset of point data representing at least a portion of the object to the content of the sharing data for sharing information on the object. However, the present invention is not limited thereto, and the controller may generate sharing data including data obtained by processing a part related to a human face in the subset of point data. Also, the first vehicle124may transmit sharing data to the second device (S5007). In this case, the second device may include vehicles125and126, a server400, an infrastructure device700, a mobile device, etc., but the present invention is not limited thereto. For example, referring toFIG.44again, when the second device is a second vehicle125, the first vehicle124may transmit the sharing data3000to the second vehicle125. In this case, the content of the sharing data may include the privacy protection data3300, the first subset of point data2111, etc., but the present invention is not limited thereto. In this case, the content of the sharing data may be determined based on class information of the plurality of subsets of point data2111and2112. In detail, since the class information2211of the first subset of point data representing at least a portion of the third vehicle126is related to a vehicle, the content of the sharing data may include the first subset of point data2111. However, the present invention is not limited thereto. Since the number plate of the vehicle may be related to personal information identification, the content of the sharing data may include privacy protection data obtained by processing at least one piece of point data representing the number plate of the vehicle. Also, since class information of the second subset of point data representing at least a portion of the pedestrian800is related to a human, which is a class in which personal information needs to be protected, the content of the sharing data may include the privacy protection data3300. 4.1.2.1.3. Privacy Protection Data. Also, when a class included in class information of at least one subset of point data included in the set of point data2101is a class in which personal information needs to be protected, the controller may generate sharing data3000including privacy protection data3300(S5005).
In this case, the controller may generate the privacy protection data3300in order not to share data related to personal information identification. In other words, the privacy protection data3300may be generated to protect privacy. Also, the privacy protection data3300may not include data related to personal information identification. In detail, since the subset of point data includes intensity information of an object, the subset of point data may be data related to personal information identification. Thus, the privacy protection data3300may not include a personal information identification-related part of the subset of point data. Also, the privacy protection data3300may include property data of the subset of point data. Also, the privacy protection data3300may include data obtained by processing the personal information identification-related part of the subset of point data. FIG.45is a diagram illustrating privacy protection data included in the content of sharing data according to an embodiment. Referring toFIG.45, the privacy protection data3300may include the second property data2202generated based on the second subset of point data2112. For example, the privacy protection data3300may include center position information2221representing the center position of the pedestrian800. In detail, the controller may generate privacy protection data3300including the center position information representing the center coordinates of a plurality of pieces of point data included in the second subset of point data2112. Also, the privacy protection data3300may include size information2231representing the size of the pedestrian800. In detail, the controller may generate privacy protection data3300including the size information2231representing a volume value of the pedestrian800represented by the second subset of point data2112. Also, the privacy protection data3300may include shape information2240represented by processing the shape of the pedestrian800. In detail, the controller may generate privacy protection data3300in which the second subset of point data2112is replaced with predetermined template information2241according to the class information of the second subset of point data2112. Also, the controller may generate privacy protection data3300including skeleton information2242representing the second subset of point data2112as at least one point. However, the present invention is not limited thereto, and the privacy protection data3300may include at least some of a plurality of pieces of information included in the second property data. For example, the privacy protection data3300may include at least some of center position information, size information, movement information, shape information, identification information, and class information of the second subset of point data, but the present invention is not limited thereto. Referring toFIG.45again, the privacy protection data3300may include data3310obtained by processing at least a portion of the second subset of point data2112. For example, the privacy protection data3300may include data obtained by pixelating at least some of the plurality of pieces of point data included in the second subset of point data2112. In detail, the controller may generate privacy protection data3300obtained by pixelating at least one piece of point data related to the face of the pedestrian in the second subset of point data2112representing at least a portion of the pedestrian800. 
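As a rough, non-limiting sketch of the generation of privacy protection data described above (further variants such as blurring, noise addition, point removal, and zeroed intensity follow below), points in a region assumed to correspond to the face may be removed, intensity values set to zero, and only coarse property data retained. The face-mask heuristic and the field names are illustrative assumptions only.

    import numpy as np

    def make_privacy_protection_data(subset_xyz, face_mask):
        kept = subset_xyz[~face_mask]                       # remove face-related points
        return {
            "points": kept,
            "intensity": np.zeros(len(kept)),               # intensity values set to zero
            "center_position": subset_xyz.mean(axis=0),     # coarse, non-identifying properties
            "size": float(np.prod(subset_xyz.max(axis=0) - subset_xyz.min(axis=0))),
        }

    subset = np.random.rand(300, 3) * [0.6, 0.6, 1.8]       # pedestrian-like point cloud
    face_mask = subset[:, 2] > 1.5                          # assume the top region is the face
    protected = make_privacy_protection_data(subset, face_mask)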
Also, the privacy protection data3300may include data obtained by blurring out at least a portion of the second subset of point data2112. In detail, the controller may generate privacy protection data3300obtained by blurring out at least one piece of point data related to the face of the pedestrian800in the second subset of point data2112representing at least a portion of the pedestrian800. Also, the privacy protection data3300may include data obtained by adding noise data to at least a portion of the second subset of point data2112. In detail, the controller may generate privacy protection data3300obtained by adding the noise data to a part related to the face of the pedestrian800in the second subset of point data2112representing at least a portion of the pedestrian800. Also, the privacy protection data3300may include data obtained by removing at least a portion of the second subset of point data2112. In detail, the controller may generate privacy protection data3300obtained by removing at least some of the plurality of pieces of point data related to the face of the pedestrian800from the second subset of point data2112representing at least a portion of the pedestrian800. Also, the privacy protection data3300may include data obtained by removing a subset of point data representing an object with a class in which personal information needs to be protected. For example, the controller may generate privacy protection data3300obtained by removing the second subset of point data2112representing at least a portion of the pedestrian800. Also, the privacy protection data3300may include data obtained by deleting intensity information of at least a portion of the second subset of point data2112. In detail, the controller may generate privacy protection data3300in which intensity values of a plurality of pieces of point data related to a human face in the second subset of point data2112are set to zero. Also, when the sensor is a camera device, the privacy protection data3300may include data in which a pixel value of the camera device is set to an arbitrary value. For example, the controller may generate privacy protection data3300in which a pixel value of a part representing the face of the pedestrian800in the second subset of point data2112is adjusted to an arbitrary value. However, the present invention is not limited thereto, and the privacy protection data3300may include data obtained by processing at least a portion of the second subset of point data2112using a predetermined data processing technique. The predetermined data processing technique can be used by those skilled in the art, and thus a detailed description thereof will be omitted. 4.1.2.2. Selective Sharing According to Other Embodiment A data sharing system according to another embodiment may require approval from a server placed in an external institution before transmitting sharing data. For example, the data sharing system may require approval for sharing sensor data itself from an external institution or may require approval for sharing data related to personal information identification included in sensor data from an external institution. In this case, the external institution may include a government institution, a data management institution, etc. However, the present invention is not limited thereto, and the external institution may perform communication through a server.
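As a non-limiting sketch of such approval-gated sharing, the example below shows one way a controller might decide the content of sharing data depending on whether the external server grants approval; the request_approval interface, the class names, and the data layout are hypothetical assumptions rather than features of any described server.

```python
# Assumed class group related to personal information; an actual system may differ.
PROTECTED_CLASSES = {"human", "number_plate", "identity_document"}


def build_sharing_data(subsets, request_approval):
    """Build the content of sharing data, consulting a hypothetical external server.

    subsets          : list of dicts, each holding "points" and "property" (with
                       "property"["class"]) for one subset of point data
    request_approval : callable taking property data and returning True when the
                       external server approves sharing of the raw point data
    """
    content = []
    for subset in subsets:
        cls = subset["property"]["class"]
        if request_approval(subset["property"]) or cls not in PROTECTED_CLASSES:
            # With approval, or for a non-protected class, the subset of point
            # data may be included regardless of its class information.
            content.append({"points": subset["points"], "property": subset["property"]})
        else:
            # Otherwise only privacy protection data (here, the property data
            # alone) is placed in the content of the sharing data.
            content.append({"privacy_protection": subset["property"]})
    return {"content": content}
```

In this sketch, an approved subset is shared as acquired regardless of its class, while an unapproved subset of a protected class is replaced with privacy protection data.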
FIG.46is a flowchart illustrating a method of selectively sharing data depending on whether approval for data sharing is gained from an external server in a data sharing system according to an embodiment. Referring toFIG.46, a controller of a first device may acquire a set of point data2100through at least one sensor (S5008). Also, the controller may determine property data of a plurality of subsets of point data included in the set of point data (S5009). Also, the controller may determine class information of an object represented by each of the plurality of subsets of point data (S5010). Also, the controller may determine whether approval for transmitting the plurality of subsets of point data to another device is gained from an external server (S5011). In this case, the external server may determine whether there is a need to share the plurality of subsets of point data despite a privacy invasion issue that may arise by transmitting the plurality of subsets of point data. For example, when at least one of the plurality of subsets of point data represents at least a portion of a criminal involved in at least one crime situation, the external server may approve the sharing of a subset of point data representing at least a portion of the criminal. Also, the controller may request approval for transmitting the sharing data from the external server. In this case, the controller may request the approval while transmitting a subset of point data related to personal information identification to the external server. However, the present invention is not limited thereto, and the controller may request the approval while transmitting property data (e.g., class information) of the subset of point data to the external server. Also, when the approval request is received, the external server may determine whether to approve the transmission of the sharing data. Also, even when there is no approval request from the controller, the external server may determine whether to approve of the controller transmitting the sharing data. Also, once the external server approves the transmission of the sharing data, the approval from the external server is no longer needed to share data related to an object represented by a subset of point data included in the content of sharing data. However, the present invention is not limited thereto, and the controller may gain approval from the external server each time the sharing data is transmitted. Also, when there is approval from the external server, the controller may generate sharing data including the plurality of subsets of point data regardless of the class information of the plurality of subsets of point data (S5013). For example, even when a subset of point data representing at least a portion of a human is included in the plurality of subsets of point data, the controller may generate sharing data including a subset of point data representing at least a portion of the human without generating privacy protection data. Also, when there is no approval from the external server, the controller may determine whether the class of an object included in the class information is a class in which personal information needs to be protected (S5012). Also, when the class information is related to personal information identification, the controller may generate sharing data including privacy protection data (S5013). 
Also, when the class information is not related to personal information identification, the controller may generate sharing data including no privacy protection data (S5014). Here, the content of the sharing data may include a subset of point data. Also, the controller may transmit the sharing data to a second device (S5015). In this case, the second device may include a vehicle, a server, an infrastructure device, a mobile device, etc., but the present invention is not limited thereto. 4.1.2.3. Whether to Generate Privacy Protection Data According to Position of Sensor Whether to generate privacy protection data according to an embodiment may be determined depending on the position of at least one sensor that acquires sensor data. For example, the at least one sensor may be placed in a vehicle, but the present invention is not limited thereto. As a specific example, for a vehicle including an autonomous driving system according to an embodiment, at least one sensor1300included in the autonomous driving system1000may be placed in the vehicle. In this case, the at least one sensor1300may acquire sensor data including position information and shape and/or color information of an occupant of the vehicle. In this case, a controller of the autonomous driving system may generate privacy protection data regardless of class information of an object included in the sensor data. In detail, when the vehicle is not an unmanned vehicle, it is essential that an occupant gets in the vehicle, and thus the controller may always generate privacy protection data on the basis of the sensor data. Also, the controller may generate privacy protection data according to whether a subset of point data representing at least a portion of a human is included in the sensor data. In this case, the controller may determine whether a subset of point data with a class related to a human is included in the sensor data by determining class information of the subset of point data as described above. Also, the controller may acquire information regarding whether an occupant is in the vehicle from any device placed in the vehicle. For example, the controller may determine whether an occupant is in the vehicle by acquiring vehicle riding information through a weight detection sensor placed in the vehicle. Also, the controller1100of the vehicle may generate sharing data3000for transmitting the sensor data2000to another device through at least one communication module1200. In this case, the content of the sharing data may include privacy protection data3300. In detail, the controller1100may generate privacy protection data3300for personal information protection regardless of the class information of an object included in the sensor data. 4.1.2.4. Whether to Generate Privacy Protection Data According to Distance and Intensity Information At least one sensor included in an autonomous driving system using a data sharing system according to an embodiment may include a LiDAR device. In this case, the LiDAR device may acquire intensity information according to the reflectance and distance information of an object located within a field of view. In this case, a controller included in the autonomous driving system may determine whether to generate privacy protection data according to the distance information and the intensity information. 
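One non-limiting way to express this distance- and intensity-based decision is sketched below; the threshold values, the field names, and the class group are illustrative assumptions only.

```python
# Assumed values: beyond PRIVACY_DISTANCE_M, or at or below MIN_INTENSITY, personal
# information is taken to be unidentifiable from the subset of point data.
PRIVACY_DISTANCE_M = 30.0
MIN_INTENSITY = 0.05

PROTECTED_CLASSES = {"human", "number_plate", "identity_document"}  # assumed class group


def needs_privacy_protection(subset):
    """Decide whether privacy protection data should be generated for a subset.

    The subset is assumed to carry its class, its distance from the LiDAR
    device, and a representative intensity value.
    """
    far_away = subset["distance_m"] >= PRIVACY_DISTANCE_M
    low_reflectance = subset["intensity"] <= MIN_INTENSITY

    # Regardless of class, no privacy protection data is generated when the
    # object is far enough away or its intensity is at or below the threshold.
    if far_away or low_reflectance:
        return False
    return subset["class"] in PROTECTED_CLASSES
```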
As an example, when an object is spaced a certain distance from the LiDAR device, the controller cannot identify personal information of the object on the basis of sensor data acquired from the LiDAR device. In this case, when a distance between a first device including the controller and a first object included in the sensor data is greater than or equal to a predetermined distance, the controller may not generate privacy protection data regardless of the class of the first object. The predetermined distance may refer to a distance at which the personal information of the first object is not identified through the subset of point data regardless of the reflectance of the first object. Also, the controller may preset and store the predetermined distance or set the predetermined distance on the basis of sensor data. As another example, when the reflectance of an object is low, the controller cannot identify personal information of the object through the LiDAR device. In this case, when an intensity value of a second object is less than or equal to a threshold, the controller may not generate privacy protection data regardless of the class of the second object. In this case, the threshold may refer to an intensity value at which the personal information of the second object is not identified through the subset of point data regardless of distance information of the second object. Also, the controller may preset and store the threshold or set the threshold on the basis of sensor data. Also, the controller may generate sharing data including at least one of a plurality of subsets of point data representing at least a portion of the first object or the second object and property data of the plurality of subsets of point data. 4.1.2.5. Selective Storing of Sensor Data for Privacy Protection The above-described embodiments of selectively sharing sensor data to protect privacy may also be applied to a case of selectively storing the sensor data. For example, when the class of an object included in the class information of the subset of point data is a class in which personal information needs to be protected, a device that acquires the subset of point data may not store the subset of point data. In this case, the device may generate and store privacy protection data obtained by processing at least a portion of the subset of point data. However, the present invention is not limited thereto, and the device may always store the subset of point data regardless of the class information of the subset of point data. 4.1.3. Selective Sharing of Sharing Data to Generate High-Definition Map 4.1.3.1. Selective Sharing Method According to Embodiment A data sharing system according to an embodiment may include a first device and a second device, each of which includes at least one communication module for performing communication. In this case, the first device and the second device may include a vehicle, a server, an infrastructure device, a mobile device, or the like, but the present invention is not limited thereto. FIG.47is a flowchart illustrating a detailed method of selectively sharing sensor data according to another embodiment. Referring toFIG.47, a controller of a first device may obtain a set of point data through at least one sensor (S5017). Also, the controller may determine class information of a subset of point data included in the set of point data (S5018). Also, the controller may determine whether an object represented by the subset of point data is movable on the basis of the class information (S5019).
Also, when the object cannot move, the controller may generate sharing data including the subset of point data (S5020). Also, the controller may transmit the sharing data to a second device (S5021). Hereinafter, each operation will be described in detail. 4.1.3.1.1. Acquisition of Sensor Data Referring toFIG.47again, a controller of a first device may obtain a set of point data through at least one sensor (S5017). Also, the controller may determine class information of a plurality of subsets of point data included in the set of point data (S5018). In this case, the first device may include a vehicle, an infrastructure device, etc., but the present invention is not limited thereto. FIG.48is a diagram showing a situation in which a first vehicle acquires sensor data to selectively share the sensor data according to an embodiment. FIG.49is a diagram schematically representing sensor data acquired by the first vehicle through a LiDAR device according toFIG.48in a 2D plane. Referring toFIGS.48and49, a controller of a first vehicle127may obtain a set of point data2102including a plurality of subsets of point data2113,2114, and2115through at least one sensor. For example, the controller may extract a first subset of point data2113representing at least a portion of a pedestrian800, a second subset of point data2114representing at least a portion of a third vehicle129, and a third subset of point data2115representing at least a portion of a building500in the set of point data. Also, the controller may determine class information of the plurality of subsets of point data2113,2114, and2115. For example, the controller may determine that the class information of the first subset of point data2113is “human.” However, the present invention is not limited thereto, and the controller may determine that the class information is a sub class of “human.” Also, the controller may determine that the class information of the second subset of point data2114is “vehicle.” However, the present invention is not limited thereto, and the controller may determine that the class information is a sub class of “vehicle.” Also, the controller may determine that the class information of the third subset of point data2115is “building.” However, the present invention is not limited thereto, and the controller may determine the class information as a sub class of “building.” 4.1.3.1.2. Criterion for Selecting Sharing Data Also, the controller may determine whether an object represented by the subset of point data is movable on the basis of the class information (S5019). In detail, in order to selectively share sensor data according to an embodiment, the controller may determine the movability of objects represented by the plurality of subsets of point data2113,2114, and2115. In this case, whether the objects are movable may be determined based on class information of the objects. More specifically, referring toFIG.49, the controller may determine that a pedestrian800and a third vehicle129are movable objects on the basis of class information of the first subset of point data2113representing at least a portion of the pedestrian800and the second subset of point data2114representing at least a portion of the vehicle129. Also, the controller may determine that a building500is an immovable object on the basis of class information of the third subset of point data2115representing at least a portion of the building500.
As an example, the controller may determine the movability of an object on the basis of whether class information of a subset of point data representing the object is related to an immovable object or is related to a movable object. For example, when the controller determines that the class information of the third subset of point data2115is “building,” the class information is related to an immovable object. Thus, the controller may determine that the building500represented by the third subset of point data2115is immovable. As another example, the controller may pre-classify class information into movable and immovable objects and may determine whether the class information of a subset of point data representing the object corresponds to a movable object or an immovable object. For example, the controller may determine that the class information of the third subset of point data2115is “immovable object.” In this case, the controller may determine that the building500represented by the third subset of point data2115is immovable. Also, the controller may determine the content of sharing data according to a class type of an object on the basis of class information of a subset of point data without determining the movability of the object on the basis of the class information of the subset of point data. In detail, the controller may determine the content of the sharing data according to a predetermined criterion on the basis of the class type of the object included in the class information of the subset of point data. That is, a predetermined criterion for determining the content of the sharing data may be predetermined for each class type of the object. As an example, the content of the sharing data may not include the first subset of point data when the class type of the object included in the class information of the first subset of point data is “human” or “vehicle” and may include the second subset of point data when the class type of the object included in the class information of the second subset of point data is a class other than “human” or “vehicle.” As another example, the content of the sharing data may include the first subset of point data when the class type of the object included in the class information of the first subset of point data is an immovable object such as “building” and may not include the second subset of point data when the class type of the object included in the class information of the second subset of point data is a class other than an immovable object such as “building.” It will be appreciated that the predetermined criterion for the class type may vary depending on the embodiment. For example, the content of the sharing data may be determined according to a criterion contrary to the above-described predetermined criterion, but the present invention is not limited thereto. Also, a user may set the predetermined criterion while designing the data sharing system according to an embodiment and may also set the predetermined criterion while using the data sharing system. 4.1.3.1.3. Generation and Transmission of Sharing Data Also, when the object is immovable, the controller may generate sharing data including the subset of point data (S5020). In detail, in order to selectively share sensor data according to an embodiment, the controller may generate sharing data on the basis of the movability of a plurality of objects represented by the plurality of subsets of point data2113,2114, and2115.
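Purely as a non-limiting sketch of the movability determination and the corresponding selection of sharing data described above (S5019 and S5020), the following example assumes a hypothetical class-to-movability grouping and data layout.

```python
# Assumed pre-classification of class information into movable and immovable
# objects; an actual system may use a different or finer-grained grouping.
IMMOVABLE_CLASSES = {"building", "sign", "traffic light"}
MOVABLE_CLASSES = {"human", "vehicle", "cyclist"}


def is_movable(class_info):
    """Return True when the class information is related to a movable object."""
    if class_info in IMMOVABLE_CLASSES:
        return False
    if class_info in MOVABLE_CLASSES:
        return True
    # Unknown classes are conservatively treated as movable in this sketch.
    return True


def build_map_sharing_data(subsets):
    """Keep subsets of point data for immovable objects; keep only property
    data (for example, center position information) for movable ones."""
    content = []
    for subset in subsets:
        if is_movable(subset["property"]["class"]):
            content.append({"property": {"center_position": subset["property"]["center_position"]}})
        else:
            content.append({"points": subset["points"], "property": subset["property"]})
    return {"content": content}
```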
As an example, when class information of a subset of point data is related to an immovable object, the controller may generate sharing data including at least a portion of the subset of point data or the property data of the subset of point data. FIG.50is a diagram illustrating the content of sharing data according to an embodiment. Referring toFIG.50, the content of the sharing data3000may include a third subset of point data2115having class information related to an immovable object. In this case, since the third subset of point data2115represents at least a portion of the building500, which is an immovable object, the controller may generate the sharing data3000including the third subset of point data2115. However, the present invention is not limited thereto, and when the class information of the third subset of point data2115is related to an immovable object, the controller may generate sharing data3000including third property data2205of the third subset of point data. In this case, the third property data2205may include at least some of class information, center position information, size information, shape information, movement information, or identification information which is acquired based on the third subset of point data2115. However, the present invention is not limited thereto, and even when class information of a subset of point data is related to a movable object, the controller may generate sharing data including property data of the subset of point data. In detail, when the class information of the subset of point data is related to a movable object, the controller may generate sharing data including center position information of the subset of point data. For example, the content of the sharing data may further include first and second property data2203and2204of the first and second subsets of point data2113and2114having class information related to the movable object. In this case, the first and second property data2203and2204may include center position information acquired based on the first and second subsets of point data2113and2114, but the present invention is not limited thereto. Also, the first vehicle127may transmit the sharing data to a second device. In this case, the second device may include vehicles128and129, an infrastructure device700, a server400, a mobile device, etc., but the present invention is not limited thereto. For example, when the second device is a server400, the first vehicle127may transmit the sharing data3000to the server400. In this case, the server400may generate a high-definition map on the basis of the sharing data. 4.1.3.2. Sharing Data Including Additional Information Also, the content of the sharing data may include additional information related to a stop time of a stationary object. In detail, when class information of an object included in the sensor data is related to a stationary object and additional information related to a stop time of the object is included in the sensor data, the controller may generate sharing data including the additional information. FIG.51is a flowchart illustrating a method of selectively sharing sensor data including additional information according to an embodiment. Referring toFIG.51, a controller of a first device may obtain a set of point data through at least one sensor and determine class information of a plurality of subsets of point data included in the set of point data (S5022). 
In this case, the first device may include a vehicle, an infrastructure device, etc., but the present invention is not limited thereto. Also, the class information may be related to a stationary object or may be related to a movable object. Also, the controller may determine the movability of a plurality of objects represented by the plurality of subsets of point data on the basis of the class information (S5023). For example, when an object is related to a stationary object, the controller may determine that the object is immovable. Also, for an object determined to be immovable, the controller may obtain additional information related to movability (S5024). In this case, the additional information may include a stop time of the stationary object. Also, when the additional information is acquired, the controller may generate sharing data including the additional information and the subset of point data (S5025). Also, when the additional information is not acquired, the controller may generate sharing data including the subset of point data (S5026). Also, the controller may transmit the sharing data to a second device (S5027). In this case, the second device may include a vehicle, an infrastructure device, a server, etc., but the present invention is not limited thereto. FIG.52is a diagram showing a situation in which a first vehicle acquires additional information through at least one sensor according to an embodiment. FIG.53is a diagram schematically showing, in a 2D plane, the sensor data acquired by the first vehicle according toFIG.52. Referring toFIGS.52and53, a first vehicle130may acquire a set of point data2103including a plurality of subsets of point data2116,2117, and2118through at least one sensor. In this case, the plurality of subsets of point data2116,2117, and2118may include a first subset of point data2116representing at least a portion of a construction sign900, a second subset of point data2117representing at least a portion of a third vehicle132, and a third subset of point data2118representing at least a portion of a building500, but the present invention is not limited thereto. Also, a controller of the first vehicle may determine class information of the plurality of subsets of point data. For example, the controller may determine that the class information of the first subset of point data2116is “sign,” determine that the class information of the second subset of point data2117is “vehicle,” and determine that the class information of the third subset of point data2118is “building.” Also, the controller may determine whether class information of a plurality of objects is related to an immovable object to determine the movability of the plurality of objects. For example, since the class information of the first subset of point data2116and the third subset of point data2118is related to an immovable object, the controller may determine that the construction sign900and the building are immovable. Also, the controller may generate sharing data including a subset of point data representing an object that cannot move. In detail, when additional information related to a stop time of an object is included in the subset of point data representing the immovable object, the controller may generate sharing data further including the additional information. For example, the controller may add additional information related to the stop time of the construction sign (e.g., information regarding a construction period) to the first subset of point data2116. 
In this case, the additional information may be acquired based on intensity information of the construction sign900acquired from at least one LiDAR device. In detail, the controller may recognize additional information representing a construction completion time shown in the construction sign900on the basis of an intensity value included in the first subset of point data2116representing at least a portion of the construction sign900acquired from a LiDAR device. Also, when the controller recognizes the additional information, the controller may generate sharing data including the first subset of point data2116and the additional information. Also, the additional information may be acquired from the outside. For example, the controller may acquire additional information related to the stop time of the construction sign900from an external server and may generate sharing data including the additional information. Also, the controller may transmit the sharing data to a second device. In this case, the second device may include vehicles131and132, a server400, an infrastructure device700, etc., but the present invention is not limited thereto. FIG.54is a diagram illustrating a subset of point data and additional information included in the content of sharing data according to an embodiment. Referring toFIG.54, the first vehicle130may transmit sharing data3000to the server400. In this case, the content of the sharing data may include the first subset of point data2116and the third subset of point data2118which are related to stationary objects. Also, the content of the sharing data may include additional information2300representing a stop time of the construction sign900represented by the first subset of point data2116. Also, even when a controller of the first vehicle130does not acquire additional information from the first subset of point data2116, the controller may acquire additional information related to a stop time point of a construction site near the construction sign900when the controller acquires sensor data related to the construction site. In detail, when class information of a plurality of subsets of point data representing a worker and an excavator included in the construction site is determined as “construction site,” the controller may acquire additional information including a construction completion time point of the construction site. In this case, the construction completion time point may refer to stop time points of a plurality of objects related to the construction site. Thus, the controller may generate sharing data including the additional information and transmit the generated sharing data to a second device. 4.1.3.3. Selective Sharing of Sensor Data According to Other Embodiment Information regarding an immovable object may be prestored in a device for generating a high-definition map. In this case, the device for transmitting the sensor data may select only data related to movable objects from the sensor data and transmit the data to the device for generating the high-definition map. FIG.55is a flowchart illustrating a method of sharing sensor data related to a movable object according to an embodiment. Referring toFIG.55, a controller included in a first device may obtain a set of point data through at least one sensor (S5028). In this case, the first device may include a vehicle, a server, an infrastructure device, a mobile device, etc., but the present invention is not limited thereto.
Also, the controller may determine class information of a plurality of subsets of point data included in the set of point data (S5029). Also, the controller may determine the movability of a plurality of objects represented by the plurality of subsets of point data on the basis of the class information (S5030). Also, when the controller determines that a first object may move because class information of the first object is related to a movable object, the controller may generate sharing data including a subset of point data representing at least a portion of the first object (S5031). In this case, the content of the sharing data may include property data of the subset of point data and may further include property data of a subset of point data representing at least a portion of a second object related to an immovable object. Also, the controller may transmit the sharing data to a second device (S5032). In this case, the second device may include a vehicle, a server, an infrastructure device, a mobile device, etc., but the present invention is not limited thereto. 4.1.3.4. Selective Sharing of Sensor Data According to Still Other Embodiment Also, a controller of a second device, which receives sharing data from a first device, may determine whether to store the sharing data according to class information of a subset of point data included in the sharing data. FIG.56is a diagram illustrating a method of selectively storing sharing data according to an embodiment. Referring toFIG.56, a first device may acquire a set of point data through at least one sensor (S5033). Also, a controller included in the first device may transmit sharing data including the set of point data to a second device through at least one communication module. In this case, the content of the sharing data may further include additional information for the second device to facilitate coordinate system alignment. For example, the additional information may include sampling rate-related information, resolution information, etc. of a sensor of the first device, but the present invention is not limited thereto. Also, when the sharing data is received, a controller of the second device may determine class information of a plurality of subsets of point data included in the set of point data (S5035). Also, the controller of the second device may determine whether to store data included in the sharing data on the basis of the class information (S5036). As an example, when the class of an object included in the class information is a class in which personal information needs to be protected, the controller of the second device may generate and store privacy protection data obtained by processing at least a portion of the set of point data. In this case, the controller of the second device may delete rather than store a subset of point data representing the object having the class in which personal information needs to be protected. As another example, the controller of the second device may determine the movability of an object on the basis of class information and may store sensor data representing an object that cannot move. In detail, the controller of the second device may store a subset of point data having class information related to an immovable object or property data of this subset of point data among the plurality of subsets of point data included in the set of point data. 
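A non-limiting sketch of such a storing decision on the second device is given below; the class groups and the make_privacy_protection_data callable are hypothetical assumptions, and further variants are described hereafter.

```python
PROTECTED_CLASSES = {"human", "number_plate", "identity_document"}  # assumed class group
IMMOVABLE_CLASSES = {"building", "sign"}                            # assumed grouping


def decide_storage(subset, make_privacy_protection_data):
    """Decide what, if anything, the second device stores for one received subset.

    make_privacy_protection_data is a hypothetical callable that processes at
    least a portion of the subset of point data, as described above.
    """
    cls = subset["property"]["class"]
    if cls in PROTECTED_CLASSES:
        # Store only privacy protection data; the raw subset of point data is deleted.
        return {"privacy_protection": make_privacy_protection_data(subset)}
    if cls in IMMOVABLE_CLASSES:
        # Store the subset of point data (or its property data) for map generation.
        return {"points": subset["points"], "property": subset["property"]}
    # Other subsets are not stored in this variant of the sketch.
    return None
```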
As another example, the controller of the second device may determine the movability of an object on the basis of class information and may store sensor data representing an object that may move. In detail, the controller of the second device may store a subset of point data having class information related to a movable object or property data of this subset of point data among the plurality of subsets of point data included in the set of point data. Also, the second device may determine whether to store data included in the sharing data according to whether information regarding an object represented by a subset of point data included in the content of the received sharing data is stored in the second device. Also, the second device may receive the sharing data and generate a high-definition map. In this case, when information related to immovable objects is stored in the high-definition map of the second device, the second device may receive sensor data related to the movable object and update the high-definition map. However, the present invention is not limited thereto. In order to update the high-definition map with the information related to immovable objects, the second device may receive the sensor data related to immovable objects. In this case, the sensor data may include a set of point data, a plurality of subsets of point data, and property data of the plurality of subsets of point data, but the present invention is not limited thereto. Also, the second device may receive sharing data including privacy protection data and match the privacy protection data to the high-definition map. 4.2. Selective Sharing of Sensor Data According to Occurrence of Event 4.2.1. Necessity of Selective Sharing According to Occurrence of Event A data sharing system according to an embodiment may include a first device and a second device as data sharing entities. Here, the first device may transmit sharing data to the second device or a server, but the present invention is not limited thereto. In this case, when the first device shares all acquired sensor data with the second device or the server, various problems such as poor data sharing efficiency may occur. For example, when a set of point data included in the sensor data is shared without any processing, a data storage capacity problem, a communication server overload problem, or the like may occur, but the present invention is not limited thereto. In order to solve the above problems, a controller of the first device may generate the content of the sharing data at least partially differently depending on whether an event has occurred. For example, the controller may generate and transmit first sharing data including property data before the event occurs. In this case, the event may include a traffic event related to vehicle driving, an environmental event such as rain and snow, and a regulatory event such as entry into a child protection zone, but the present invention is not limited thereto. The event will be described in detail below. Also, the controller may generate and transmit second sharing data including a set of point data or a plurality of subsets of point data in order to transmit accurate information related to the event after the event occurs. In this case, the second sharing data may include a set of point data or a plurality of subsets of point data which have been acquired for a predetermined time before and after the event occurs. 4.2.2. Selective Sharing Method (1) of Sensor Data According to Embodiment. 
FIG.57is a flowchart illustrating a selective sharing method for sensor data according to another embodiment. Referring toFIG.57, a controller of a first device may acquire a set of point data through at least one sensor (S5037). In this case, the first device may include a vehicle, an infrastructure device, etc., but the present invention is not limited thereto. Also, the controller may determine property data of a plurality of subsets of point data included in the set of point data (S5038). In this case, the property data may include class information, center position information, size information, movement information, shape information, identification information, and the like of the subsets of point data, but the present invention is not limited thereto. Also, the controller may generate first sharing data including the property data and transmit the first sharing data to a second device (S5039, S5040). In this case, the second device may include a vehicle, a server, an infrastructure device, a mobile device, etc., but the present invention is not limited thereto. Also, the controller may determine the occurrence of an event (S5041). In this case, the event may include a traffic event related to vehicle driving and accidents, but the present invention is not limited thereto. Also, the controller may generate and transmit second sharing data including a plurality of sets of point data acquired for a first time period before and after the event occurs (S5042). Hereinafter, a method of determining the occurrence of an event according to an embodiment will be described in detail. 4.2.2.1. Method of Generating Sharing Data and Determining Occurrence of Event FIG.58is a diagram showing a situation in which a first vehicle acquires sensor data before an event occurs according to an embodiment. FIG.59is a diagram schematically showing a set of point data included in the sensor data acquired according toFIG.58in a 2D plane. Referring toFIGS.58and59, a first vehicle133may acquire a set of point data2104including a first subset of point data2119representing at least a portion of a second vehicle134and a second subset of point data2120representing at least a portion of a third vehicle135. Also, the controller may determine a plurality of pieces of property data of a plurality of subsets of point data included in the set of point data. In this case, the plurality of pieces of property data may include at least one of center position information, size information, class information, shape information, movement information, or identification information of the plurality of subsets of point data2119and2120, but the present invention is not limited thereto. Also, the first device may generate first sharing data and transmit the generated first sharing data to the second device (S5039, S5040). FIG.60is a diagram illustrating first sharing data transmitted by a first vehicle before an event occurs according to an embodiment. Referring toFIG.60, a controller of the first vehicle may generate first sharing data3000aincluding first property data2206of the first subset of point data and second property data2207of the second subset of point data and transmit the first sharing data3000ato the second vehicle134. Also, the controller may determine the occurrence of an event. For example, the controller may determine that a traffic event6100has occurred between the second vehicle134and the third vehicle135.
In this case, the traffic event6100may be related to at least one of an accident situation related to the first vehicle133or an accident situation related to other vehicles134and135near the first vehicle. FIG.61is a diagram showing a situation in which a first vehicle acquires sensor data when an event occurs according to an embodiment. FIG.62is a diagram schematically showing a set of point data included in the sensor data acquired according toFIG.61in a 2D plane. Referring toFIGS.61and62, a controller of the first vehicle133may acquire a second set of point data2105including the vehicles134and135related to the traffic event6100through at least one sensor. In this case, the set of point data2105may include a third subset of point data2121representing at least a portion of the second vehicle134and a fourth subset of point data2122representing at least a portion of the third vehicle135. As an example, the controller may determine the occurrence of the event on the basis of at least a portion of a set of point data or property data of the subset of point data (S5041). In detail, the controller may determine the occurrence of the event on the basis of at least a portion of a plurality of pieces of information included in a plurality of pieces of property data or location information of objects included in a plurality of subsets of point data. As a specific example, the controller of the first vehicle133may determine that the traffic event6100has occurred between the second vehicle134and the third vehicle135when point data included in the third subset of point data2121representing at least a portion of the second vehicle134at least partially overlaps point data included in the fourth subset of point data2122representing at least a portion of the third vehicle135and also when a distance between the third subset of point data2121and the fourth subset of point data2122is determined to be less than or equal to a predetermined distance on the basis of distance information determined through the controller. In this case, at least one of a plurality of subsets of point data2121and2122included in a set of point data acquired by the first vehicle may represent at least a portion of the vehicles134and135related to the event. Also, when the plurality of subsets of point data or a plurality of pieces of property data partially overlap each other in a 3D point data map generated based on the set of point data, the controller may determine that the traffic event6100has occurred between the second vehicle134and the third vehicle135. However, the present invention is not limited thereto, and the controller may determine the occurrence of an event even when a subset of point data representing an object related to the event is not included in the set of point data. As an example, when information for determining the occurrence of the event is included in the set of point data, the controller may determine the occurrence of the event on the basis of the information for determining the occurrence of the event. As a specific example, when a subset of point data representing an object for indicating an accident site is included in the set of point data, the controller may determine the occurrence of an event on the basis of the subset of point data representing the object for indicating the accident site. However, the present invention is not limited thereto, and the controller may determine the occurrence of an event by acquiring information including the occurrence of the event from the second device or the third device.
In this case, the third device may include vehicles134and135, a server400, an infrastructure device700, etc., but the present invention is not limited thereto. For example, when the server400determines the occurrence of the event, the server400may transmit the information including the occurrence of the event to a device near where the event has occurred. As a specific example, when the server400determines that the traffic event6100has occurred, the server400may transmit information including the occurrence of the traffic event6100to the first vehicle133which is located near where the traffic event6100has occurred. In this case, when the information including the occurrence of the traffic event is received, the first vehicle133may determine that the traffic event6100has occurred. However, the present invention is not limited thereto, and the controller may determine the occurrence of an event by acquiring data request information from at least one of the second device or the third device. In this case, the request information may include information indicating the occurrence of the event. For example, when the server400transmits request information for requesting data related to the traffic event6100to the first vehicle133, the request information includes the information indicating the occurrence of the traffic event6100, and thus the first vehicle133may determine that the traffic event6100has occurred when the request information is received. Also, the controller may generate second sharing data3000bincluding the second set of point data3100(S5042). FIG.63is a diagram illustrating second sharing data transmitted by a first vehicle after an event occurs according to an embodiment. Referring toFIG.63, a controller of the first vehicle may generate and transmit second sharing data3000bincluding the second set of point data to the second vehicle134. In this case, the second set of point data may include a third subset of point data2121representing at least a portion of the second vehicle134and a fourth subset of point data2122representing at least a portion of the third vehicle135. In this case, the content of the second sharing data may be at least partially different from the content of the first sharing data. As an example, when the sharing data3000aand3000bare received, the second device needs more accurate data related to the traffic event6100, and thus the second sharing data3000bmay include a plurality of subsets of point data2121and2122acquired after the traffic event6100occurs. As another example, the resolution of a sensor for acquiring sensor data included in the content of the second sharing data may be different from the resolution of a sensor for acquiring sensor data included in the content of the first sharing data. For example, the resolution of the sensor for acquiring sensor data included in the content of the second sharing data may be higher than the resolution of the sensor for acquiring sensor data included in the content of the first sharing data, but the present invention is not limited thereto. Also, the content of the second sharing data may include a plurality of sets of point data acquired for a first time period before and after the traffic event6100occurs. In detail, the plurality of sets of point data may include a set of point data acquired before the traffic event6100occurs as well as a set of point data acquired from a sensor of the first vehicle133after the traffic event6100occurs.
This may be to obtain accurate information related to the cause of the traffic event6100through the data acquired before and after the traffic event6100occurs. However, the present invention is not limited thereto, and the content of the second sharing data may further include property data related to the event. It will be appreciated that the selective sharing method for sensor data according to an embodiment is not limited to the operations shown inFIG.57. For example, the first device may not generate sharing data before the first device determines that an event has occurred. Thus, the first device may not share data with a second device before the first device determines that an event has occurred. 4.2.2.2. Data Sharing Entity Also, the first device may transmit the second sharing data (S5042). In this case, an entity receiving the second sharing data may include a vehicle, an infrastructure device, a mobile device, etc., but the present invention is not limited thereto. Also, the first device may transmit the second sharing data to the second device to which the first sharing data has been transmitted. Referring toFIG.61again, the controller of the first vehicle133may transmit the second sharing data to the second vehicle134related to the traffic event6100. However, the present invention is not limited thereto, and when request information for requesting data related to the traffic event6100is acquired from the server400, the controller may transmit the second sharing data3000bto the server400. It will be appreciated that when information related to a sharing data receiving entity is included in the request information, the controller may transmit the second sharing data on the basis of the information related to the sharing data receiving entity. For example, when information regarding a sharing data receiving entity that instructs the controller to transmit the sharing data to the third vehicle135is included in the request information received from the server400, the controller may transmit the second sharing data3000bto the third vehicle135. 4.2.2.3. Generation Time of Sharing Data When the controller of the first device determines the occurrence of an event, the controller may generate sharing data at certain intervals after the event occurs. In this case, the content of the sharing data may include at least one set of point data acquired before the event occurs. However, the present invention is not limited thereto, and the content of the sharing data may include at least one set of point data acquired after the event occurs. In this case, the controller may transmit the sharing data to the second device each time the sharing data is generated. Also, the controller may generate the sharing data after the completion of a first time period including a time point at which the event occurs. In this case, the content of the sharing data may include a plurality of sets of point data acquired for a first time period before and after the event occurs. In this case, the controller may transmit the sharing data to the second device after the sharing data is generated. For example, referring toFIG.63again, the first vehicle133may transmit second sharing data3000bto the second vehicle134. In this case, the second sharing data3000bmay be generated at regular intervals after the traffic event6100occurs. In this case, the content of the second sharing data may include a set of point data or a plurality of subsets of point data2121and2122which are acquired when the traffic event occurs.
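Purely as a non-limiting illustration of generating the second sharing data for a first time period before and after the event, the following sketch buffers recently acquired sets of point data; the window lengths and the data layout are assumptions introduced for illustration only.

```python
import time
from collections import deque


class PointDataBuffer:
    """Buffer recently acquired sets of point data so that, once an event is
    recognized, the sets acquired for a first time period before and after the
    event can be placed in the content of the second sharing data.

    The window lengths are illustrative assumptions.
    """

    def __init__(self, before_s=5.0, after_s=5.0):
        self.before_s = before_s
        self.after_s = after_s
        self.frames = deque()  # (timestamp, set_of_point_data) pairs

    def add(self, set_of_point_data, timestamp=None):
        """Record one acquired set of point data with its acquisition time."""
        timestamp = time.time() if timestamp is None else timestamp
        self.frames.append((timestamp, set_of_point_data))
        # Discard frames older than the largest window that could be needed.
        while self.frames and timestamp - self.frames[0][0] > self.before_s + self.after_s:
            self.frames.popleft()

    def second_sharing_data(self, event_time):
        """Collect the sets of point data within the first time period around
        the time at which the event occurred."""
        start, end = event_time - self.before_s, event_time + self.after_s
        return {"sets_of_point_data": [frame for t, frame in self.frames if start <= t <= end]}
```

In such a sketch, a controller might call add each time a set of point data is acquired and call second_sharing_data once the first time period including the event has elapsed.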
Also, the content of the second sharing data may include a plurality of sets of point data acquired before the traffic event occurs and may include a plurality of sets of point data acquired after the traffic event occurs. However, the present invention is not limited thereto, and the second sharing data3000bmay be generated after the completion of the first time period before and after the traffic event6100occurs. In this case, the content of the second sharing data may include a plurality of sets of point data acquired for a first time period including a predetermined time before and after the traffic event. However, the present invention is not limited thereto, and the content of the second sharing data may include a set of point data and a plurality of subsets of point data2121and2122which are acquired when the traffic event occurs. However, the present invention is not limited thereto, and the sharing data may be generated at the same time as the sensor data is acquired. It will be appreciated that the sharing data may be generated at any time regardless of when the sensor data is acquired. 4.2.2.4. Various Examples of Event. The event6000may refer to all situational conditions related to the inside and outside of the first device. For example, the event may include a traffic event, an environmental event, a regulatory event, a blind spot discovery, a user input reception, etc., but the present invention is not limited thereto. For example, the event may be a traffic event related to at least one of an accident situation related to the first device or an accident situation related to another device near the first device, an environmental event related to the surrounding environment of the first device, a regulatory event related to regulations on the first device or another device near the first device, etc., but the present invention is not limited thereto. Also, it will be appreciated that the above-described embodiments of the selective sharing method for sensor data are applicable to various types of events. In this case, the traffic event may be related to at least one of an accident situation related to the first vehicle or accident situations related to other vehicles near the first vehicle. For example, the traffic event may include a vehicle accident, an accident between a vehicle and a pedestrian, a traffic jam, etc., but the present invention is not limited thereto. FIG.64is a diagram illustrating a situation in which a traffic event has occurred according to an embodiment. Referring toFIG.64, a chain collision accident may be included in the traffic event6100. In this case, the content of sharing data that is shared between data sharing entities may vary before and after the traffic event6100occurs. For example, before the traffic event6100occurs, sharing data including property data of a subset of point data may be shared, but after the traffic event6100occurs, sharing data including at least one of a set of point data or a subset of point data may be shared. Also, the environmental event may be related to the surrounding environment of the first device. For example, the environmental event may include occurrence of bad weather, deterioration of road conditions, sudden rain or snow, occurrence of fog or sea fog, etc., but the present invention is not limited thereto. FIG.65is a diagram illustrating a situation in which an environmental event has occurred according to an embodiment. 
Referring toFIG.65, rain that suddenly falls in an area where a vehicle is traveling may be included in an environmental event6200. In this case, the content of sharing data that is shared between data sharing entities may vary before and after the environmental event6200occurs. For example, before the environmental event6200occurs, sharing data including property data of a subset of point data may be shared, but after the environmental event6200occurs, sharing data including at least one of a set of point data or a subset of point data may be shared. For example, when it suddenly rains while a vehicle is traveling, it may be difficult for at least one sensor placed in the vehicle to acquire accurate sensor data for a plurality of objects located near the vehicle. Accordingly, in order to share more accurate sensor data, the vehicle and other devices may generate sharing data including at least a portion of the set of point data or the subset of point data and share the generated sharing data. As another example, the regulatory event may be related to regulations on the first device or other devices near the first device. For example, the regulatory event may include entry into a child protection zone, entry into a speed enforcement zone, approval for data sharing by an external server, entry into an available communication zone, etc., but the present invention is not limited thereto. FIG.66is a diagram illustrating a situation in which a regulatory event has occurred according to an embodiment. Referring toFIG.66, a situation in which a traveling vehicle enters a child protection zone may be included in a regulatory event6300. In this case, the content of sharing data shared between data sharing entities may vary before and after the regulatory event6300occurs. For example, before the regulatory event6300occurs, sharing data including property data of a subset of point data may be shared, but after the regulatory event6300occurs, sharing data including at least one of a set of point data or a subset of point data may be shared. For example, when a vehicle enters a child protection zone, it may be difficult for the vehicle to avoid a collision with a pedestrian who suddenly runs onto a road. Accordingly, in order to share accurate information on at least one object included in sensor data acquired from the vehicle and other vehicles, the vehicle and the other devices may generate sharing data including a subset of point data or a set of point data representing the at least one object and then share the generated sharing data. Also, in order to acquire information on an object not located in the field of view of at least one sensor placed in the vehicle or an object not included in sensor data acquired from the at least one sensor, the vehicle may receive sensor data from at least one infrastructure device located in a child protection zone after the vehicle enters the child protection zone. As another example, the event may include a sensor failure event. In detail, when at least one sensor included in an autonomous driving vehicle fails while the vehicle is traveling, the content of sharing data which is shared between the autonomous driving vehicle and other devices may vary before and after the sensor fails. 4.2.3. Selective Sharing Method (2) of Sensor Data According to Embodiment FIG.67is a diagram illustrating a method of requesting, by a server, data regarding an event or indicating that an event has occurred according to an embodiment. 
Referring toFIG.67, the server may recognize an event that has occurred in a first region at a first time (S5043). Details on how the server recognizes an event (the description given there for a traffic event applies equally to an event in general) have been described in Section 4.2.2.1, and thus will be omitted here. Also, the first time may refer to a representative time related to the occurrence of the event. For example, the first time may refer to a time at which the event actually occurs. However, the present invention is not limited thereto, and the first time may refer to a time at which the server recognizes the event. Also, the first region may refer to a representative region related to the occurrence of the event. For example, the first region may refer to a region including all objects related to the event. However, the present invention is not limited thereto, and when the event is a fender-bender, the first region may refer to a point where a minor collision between vehicles occurs or a predetermined region including the point where the minor collision occurs. Also, the server may transmit a first message for requesting sensor data related to the event to a first device (S5044). Also, the server may transmit a second message indicating that the event has occurred to a second device (S5045). Also, the server may receive sensor data related to the event from the first device (S5046). The operation of transmitting a message and receiving sharing data among the above-described operations included in the server operation method will be described below. 4.2.3.1. Message Transmission Range When an event is recognized, a server may request data from a first device located near the first region where the event has occurred. In this case, the server may request sensor data from the first device or may request various types of data other than the sensor data. FIG.68is a diagram showing a situation in which a server and a vehicle communicate with each other to share data according to an embodiment. Referring toFIG.68, a first vehicle136may be located in a first range from a first region where a traffic event6100has occurred and may acquire sensor data related to the traffic event6100through at least one sensor. Also, when the first vehicle136is located in a first range7100from a first region where the traffic event6100has occurred, a server400which has recognized the traffic event6100may transmit a first message requesting sensor data to the first vehicle136. In this case, the first range7100may correspond to a region included in the inside of a predetermined shape based on the first region. For example, the first range may be a region included in the inside of an irregular shape, a circle, a polygonal shape, or the like, but the present invention is not limited thereto. Also, the first range7100may be determined based on sensor data. In detail, when an object related to the traffic event is included in sensor data acquired by the first device located in the first range, the first range may be set such that the first device is located in the first range from the first region. Also, the first range7100may include a first sub-range and a second sub-range. FIG.69is a diagram illustrating a first sub-range included in a first range according to an embodiment. Referring toFIG.69, the first range7100may include the inside of a sphere with respect to the region where the traffic event6100has occurred. Also, a fourth vehicle139may be located in the first sub-range7110included in the first range7100.
In this case, the first sub-range7110may correspond to a region in which information related to the traffic event6100can be acquired in the first range7100. In detail, when the fourth vehicle139is located in the first sub-range7110, the fourth vehicle139may acquire data regarding the traffic event6100through at least one sensor. Also, the first sub-range7110may be determined based on sensor data. In detail, when an object related to the traffic event is included in sensor data acquired by the fourth vehicle139located in the first range7100, the first sub-range7110may be set such that the fourth vehicle139is located in the first sub-range7110from the first region. In this case, the sensor data acquired by the fourth vehicle139may include a subset of point data representing at least a portion of the object related to the traffic event6100. Also, the third vehicle138may be located in a second sub-range7120included in the first range7100. In this case, the second sub-range7120may correspond to a region in which information related to the traffic event6100cannot be acquired in the first range7100. In detail, when the third vehicle138is located in the second sub-range7120, the third vehicle138may not acquire data regarding the traffic event6100through at least one sensor. Also, the second sub-range7120may be determined based on sensor data. In detail, when an object related to the traffic event is not included in sensor data acquired by the third vehicle138located in the first range7100or when the sensor data and the object related to the traffic event have a low correlation, the second sub-range7120may be set such that the third vehicle138is located in the second sub-range from the first region. In this case, the sensor data acquired by the third vehicle138may not include a subset of point data representing at least a portion of the object related to the traffic event6100. Also, the server may notify the second device located near the region where the traffic event has occurred of the occurrence of the event. Referring toFIG.68again, the second vehicle137may be located in a second range7200from the first region where the traffic event6100has occurred. Also, when the second vehicle137is located in the second range7200, which represents a predetermined region outside the first range7100, from the first region where the traffic event6100has occurred, the server may transmit a second message indicating that the traffic event has occurred to the second vehicle137. In this case, the second range7200may correspond to a region included in the inside of a predetermined shape with respect to the first region. For example, the second range may be a region included in the inside of an irregular shape, a circle, a polygonal shape, or the like in the region outside the first range7100, but the present invention is not limited thereto. Referring toFIG.68again, a path of the second vehicle137may be related to the first region where the traffic event6100has occurred. In detail, when the path of the second vehicle137located in the second range7200from the first region is related to the first region related to the traffic event6100, the server400may transmit a second message indicating that the traffic event has occurred to the second vehicle137. Also, the second range7200may be determined based on the path of the second vehicle137.
In detail, when the path of the second vehicle137is related to the first region where the traffic event6100has occurred, the server400may determine the second range7200such that the second vehicle137is located in the second range7200. Also, the second range may include the first range. In this case, the server may transmit the first message and the second message to a vehicle located in the first range. 4.2.3.2. Reception of Sharing Data Also, the server may receive sensor data from the first device in response to the first message. In this case, the sensor data may include a set of point data, a subset of point data, property data of the subset of point data, etc., but the present invention is not limited thereto. FIG.70is a diagram illustrating data included in the sharing data transmitted by a first vehicle to a server according to an embodiment. Referring toFIG.70, the first vehicle136included in the first range may transmit sharing data3000to the server400in response to the first message. In this case, the content of the sharing data may include a first set of point data2106acquired at a first time point at which the traffic event6100occurs. Also, the content of the sharing data may include a plurality of sets of point data acquired for a first time period including a first time at which the traffic event occurs in order to share information regarding before and after the occurrence of the traffic event6100. In this case, the plurality of sets of point data may include the first set of point data2106. Also, details on when the sharing data is generated have been described in Section 4.2.2.3, and thus will be omitted here. Also, a server400which has received the sharing data may reconfigure the traffic event on the basis of a plurality of sets of point data included in the content of the sharing data. In detail, the server400may reconfigure the traffic event by listing, in chronological order, a plurality of sets of point data related to the traffic event acquired for the first time period. Also, the server400may reconfigure the traffic event by re-sampling a plurality of sets of point data related to the traffic event acquired for the first time period. The scheme of reconfiguring the traffic event is known to those skilled in the art, and thus a detailed description thereof will be omitted here. Also, the reconfigured traffic event may be transmitted to at least one vehicle and displayed to an occupant through an infotainment system of the at least one vehicle. However, the present invention is not limited thereto, and the reconfigured traffic event may be transmitted to an external institution. 4.2.3.3. Information Included in Message FIG.71is a diagram illustrating information included in a first message according to an embodiment. Referring toFIG.71, a first message1431received from a server through a message window1430included in at least one infotainment system of a vehicle may be displayed. Also, the first message1431may include time information related to the occurrence time of the event. In this case, the time information may include first information representing that the event has occurred at a first time. Also, a controller of the vehicle may recognize that the event has occurred at at least one of a time point at which the first message1431is received, a time point at which the time information is acquired, or a time point at which the first information is acquired. Also, the first message1431may include request information for data related to the event.
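By way of illustration only, the server-side dispatch described in Section 4.2.3.1, together with the message contents described in this section, may be sketched as follows. This is a minimal sketch and not the claimed implementation: the names (dispatch_messages, Vehicle), the use of plain Euclidean distances in a common map frame, the fixed range values, and the path-relevance flag are assumptions introduced only for this example; in the embodiments above, the first range and the second range may instead be determined from sensor data or from the path of each vehicle.

from dataclasses import dataclass
from math import hypot

@dataclass
class Vehicle:
    vehicle_id: str
    position: tuple               # (x, y) in a common map frame (assumed)
    path_related_to_event: bool   # whether the vehicle's path is related to the first region (assumed)

def dispatch_messages(vehicles, event_position, event_time,
                      first_range_m=100.0, second_range_m=300.0):
    """Return per-vehicle messages: a first message (data request) for vehicles
    in the first range, and a second message (event notification) for vehicles
    in the second range whose path is related to the first region."""
    messages = {}
    for v in vehicles:
        distance = hypot(v.position[0] - event_position[0],
                         v.position[1] - event_position[1])
        if distance <= first_range_m:
            # First message: request sensor data related to the event (S5044).
            messages[v.vehicle_id] = {"type": "first_message",
                                      "event_time": event_time,
                                      "request": "sensor_data_related_to_event"}
        elif distance <= second_range_m and v.path_related_to_event:
            # Second message: notify that the event has occurred (S5045).
            messages[v.vehicle_id] = {"type": "second_message",
                                      "event_time": event_time,
                                      "event_region": event_position}
    return messages

# Example: one vehicle inside the first range, one inside the second range.
vehicles = [Vehicle("first_vehicle", (30.0, 40.0), True),
            Vehicle("second_vehicle", (150.0, 200.0), True)]
print(dispatch_messages(vehicles, event_position=(0.0, 0.0), event_time="t1"))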
Also, the controller of the vehicle may receive an input from an occupant in the vehicle in response to the first message1431. In this case, the controller may receive an input for accepting the transmission of data related to the event from the occupant or may receive an input for rejecting the transmission of data related to the event. When the controller receives the input for accepting the transmission of the data related to the event, the controller may generate sharing data including at least one subset of point data representing at least a portion of an object related to the event and may transmit the sharing data to the server or the object related to the event. FIG.72is a diagram illustrating information included in a second message according to an embodiment. Referring toFIG.72, a second message1432received from a server through a message window1430included in at least one infotainment system of a vehicle may be displayed. Also, the second message1432may include position information related to the occurrence position of the event. In this case, the position information may include second information representing that the event has occurred in a first region. Also, a controller of the vehicle may recognize that the event has occurred at at least one of a time point at which the second message1432is received, a time point at which the position information is acquired, or a time point at which the second information is acquired. Also, the second message1432may include at least a portion of information included in the first message. For example, the second message1432may include time information representing that the event has occurred at a first time, but the present invention is not limited thereto. Also, in some embodiments, a server which has recognized the occurrence of an event may transmit a message requesting that data should be continuously shared between a device related to the event and a nearby device. For example, when a server recognizes that an environmental event, such as sudden rain, has occurred, the server may transmit a message requesting that data should be continuously shared between a plurality of vehicles in relation to the environmental event. Also, in some embodiments, the server may recognize that a sensor failure event has occurred in an autonomous vehicle where at least one sensor is placed. In this case, in order to prevent the risk of an accident of the autonomous vehicle that may occur due to a sensor failure, the server may transmit a message requesting that data should be shared with the autonomous vehicle to a vehicle located near the autonomous vehicle. 4.2.4. Selective Sharing Method (3) of Sensor Data According to Embodiment FIG.73is a diagram illustrating an example related to a selective sharing method for sensor data depending on the range. Referring toFIG.73, a second device and a third device may acquire a set of point data using at least one sensor (S5047). Also, the second device, which is located in a third range included in an available communication range from a region where the traffic event has occurred, may transmit first sharing data including a set of point data to a first device (S5048). In this case, the third range may refer to a range in which data related to the traffic event can be acquired. Also, the third range may be determined in the same manner as the above-described first range determination scheme included in Section 4.2.3.1.
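The device-side behavior of steps S5048 and S5049 can likewise be sketched, purely for illustration. The helper name build_sharing_data, the fixed third-range threshold, and the dictionary layout of the sharing data are assumptions made only for this sketch; as noted above, the third range may instead be determined in the same manner as the first range.

from math import hypot

def build_sharing_data(device_id, device_position, event_position,
                       set_of_point_data, third_range_m=80.0):
    """A device inside the third range shares its point data (S5048); a device
    outside the third range but still inside the available communication range
    shares only its position information (S5049)."""
    distance = hypot(device_position[0] - event_position[0],
                     device_position[1] - event_position[1])
    if distance <= third_range_m:
        # First sharing data: the acquired set of point data (the content may
        # also include subsets of point data or their property data).
        return {"device_id": device_id,
                "content": {"set_of_point_data": set_of_point_data}}
    # Second sharing data: position information only (e.g., GPS coordinates).
    return {"device_id": device_id, "content": {"position": device_position}}

# Example: a device near the event shares points; a distant device shares its position.
points = [(1.2, 0.4, 0.0), (1.3, 0.5, 0.1)]   # toy point data
print(build_sharing_data("second_device", (20.0, 10.0), (0.0, 0.0), points))
print(build_sharing_data("third_device", (500.0, 300.0), (0.0, 0.0), points))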
Also, the content of the first sharing data may include the set of point data, but the present invention is not limited thereto. The content of the first sharing data may include at least one of the set of point data, at least one subset of point data included in the set of point data, or property data of the at least one subset of point data, as well as information regarding the second device, but the present invention is not limited thereto. Also, a third device, which is located in an available communication range from the region where the traffic event has occurred and is located outside the third range, may transmit second sharing data including position information of the third device to the first device (S5049). In this case, the available communication range may refer to a predetermined region where it is possible to communicate with an object related to the traffic event to share data. For example, the available communication range may include a region where a vehicle related to the traffic event can communicate with other devices through a V2X system. Also, the content of the second sharing data may include position information of the third device, but the present invention is not limited thereto. The content of the second sharing data may include basic information regarding the third device. In this case, the position information of the third device may include GPS information of the third device. Also, the position information of the third device may include the position coordinates of the third device which are acquired from at least one sensor included in a fourth device located near the third device. 4.2.5. Selective Sharing Method (4) of Sensor Data According to Embodiment FIG.74is a diagram illustrating a selective data sharing method according to a blind spot during the driving of a vehicle in relation to a regulatory event according to an embodiment. Referring toFIG.74, a first device (e.g., a vehicle) may enter a specific regulation region such as a child protection zone (S5050). Here, the specific regulation region may refer to a region to which legal or customary regulations are applied to the first device compared to other regions. For example, the child protection zone may refer to a region where the driving speed of a vehicle is regulated to a predetermined speed or less and in which special attention is required for the safety of pedestrians including children in order to protect children from the vehicle. Thus, the first device may need to more accurately recognize the positions or movements of nearby pedestrians in the child protection zone than in other zones. Also, the first device may request a second device located in the child protection zone to determine whether a blind spot where an object cannot be recognized is in the field of view of a sensor of the first device. Also, the second device (e.g., an infrastructure device) located in the child protection zone may notify the first device that the first device has entered the child protection zone (S5051). In this case, the method of the second device notifying the first device that the first device has entered the child protection zone may include transmitting a notification message indicating that the above-described regulatory event has occurred, but the present invention is not limited thereto. However, the present invention is not limited thereto, and the first device may notify the second device that the first device has entered the child protection zone.
Also, when the first device enters the child protection zone, the second device may transmit first sharing data to the first device (S5052). In this case, the content of the first sharing data may include sensor data acquired from at least one sensor placed in the second device, data other than the sensor data, etc., but the present invention is not limited thereto. For example, the sensor data may include a set of point data, a subset of point data, property data of the subset of point data, etc., but the present invention is not limited thereto. Also, the second device may detect a blind spot related to the first device (S5053). In this case, the method of the second device detecting a blind spot related to the first device may include various methods. As an example, the first device may detect a blind spot related to the first device by itself and transmit blind spot-related information to the second device. As a specific example, when the first device is a vehicle, the vehicle may compare a high-definition map received from the outside to sensor data acquired from at least one sensor placed in the vehicle and may determine that a blind spot is present when an object that is not included in the sensor data is included in the high-definition map. In this case, the vehicle may transmit information related to the presence of a blind spot to the second device. However, the present invention is not limited thereto, and the first device may detect a blind spot on the basis of a ratio of ground-related data to non-ground data in sensor data acquired through at least one sensor placed in the first device. In detail, when the proportion of the non-ground data covered by an object included in the sensor data acquired by the first device is greater than or equal to a predetermined proportion, the first device may determine that a blind spot is present and may transmit information related to the presence of the blind spot to the second device. As another example, when the first device enters a specific regulation region such as a child protection zone, the second device may determine that a blind spot related to the first device is present regardless of whether the blind spot related to the first device is actually present. Specifically, since the risk of collision between the first device and a pedestrian is high in a specific regulation region such as a child protection zone, the second device may determine that a blind spot related to the first device is present when the first device enters the child protection zone. However, the present invention is not limited thereto, and the second device may determine the presence of a blind spot related to the first device on the basis of sensor data acquired through at least one sensor placed in the second device. In other words, the second device may determine the presence of the blind spot related to the first device on the basis of a positional relationship between a plurality of objects including the first device included in the sensor data acquired by the second device. In detail, when the second device determines that the first device cannot recognize a specific object included in the sensor data on the basis of position information of the first device, the second device may determine the presence of the blind spot related to the first device. Also, when the blind spot related to the first device is detected, the second device may transmit second sharing data to the first device. 
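For illustration only, the two blind-spot checks described above may be sketched as follows; the function names, the use of object identifiers as the comparison key, and the 0.7 threshold are assumptions introduced for this sketch and are not part of the embodiments.

def detect_blind_spot_from_map(hd_map_object_ids, detected_object_ids):
    """Blind spot if the high-definition map contains an object that the
    sensor data does not (the comparison key is assumed to be an object id)."""
    missing = set(hd_map_object_ids) - set(detected_object_ids)
    return len(missing) > 0, missing

def detect_blind_spot_from_ratio(num_non_ground_points, num_total_points,
                                 threshold=0.7):
    """Blind spot if the proportion of non-ground (occluding) data in the
    set of point data is at or above a predetermined proportion."""
    if num_total_points == 0:
        return False
    return (num_non_ground_points / num_total_points) >= threshold

# Example: the map knows about a pedestrian the vehicle's sensor cannot see.
present, missing = detect_blind_spot_from_map({"building_500", "pedestrian_800"},
                                              {"building_500"})
print(present, missing)                       # True {'pedestrian_800'}
print(detect_blind_spot_from_ratio(80, 100))  # True

When the blind spot is detected by a check of this kind, the second device may transmit the second sharing data to the first device as described above.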
In this case, the content of the second sharing data may include a subset of point data representing at least a portion of an object located in the blind spot of the first device, but the present invention is not limited thereto. The content of the second sharing data may include property data of the subset of point data. Also, the content of the second sharing data may include all data included in the sensor data acquired by the second device as well as the data regarding the object located in the blind spot of the first device. 5. Processing and Use of Sharing Data 5.1. Overview In a data sharing system according to an embodiment, a first device may transmit sharing data including sensor data to a second device. In this case, the first device and the second device may include a vehicle, a server, an infrastructure device, a mobile device, or the like, but the present invention is not limited thereto. In this case, the second device, which has received the sharing data, may process the sensor data included in the content of the sharing data, and the processed sensor data may be utilized to control the second device, an apparatus including the second device, or the like. For example, when the second device is a LiDAR device and an apparatus including the second device is a vehicle, a controller of the LiDAR device or a controller of the vehicle may process sensor data included in the content of the sharing data to control the LiDAR device or control the vehicle. In the following description, for convenience of description, an entity that implements the description in Section 5 is expressed as the controller of the vehicle, but the present invention is not limited thereto. It will be appreciated that the controller of the second device or the controller of the apparatus including the second device may also be an entity that implements the description in Section 5. Also, the content of the sharing data may include a set of point data, a subset of point data, property data, etc., but the present invention is not limited thereto. Also, the content of the sharing data may include data other than the sensor data. For example, the content of the sharing data may include traffic event-related information, position information of the first device, or a destination of the first device, etc., but the present invention is not limited thereto. Also, the controller of the second device may process sharing data differently as described in Section 3.4.1. according to the type of the content of the received sharing data. Also, in order to match the sensor data acquired from the first device included in the content of the sharing data to sensor data acquired from the second device, the controller of the second device may align a coordinate system using the scheme described in Section 3.4.2. Also, the second device may receive sharing data from the first device in order to acquire information regarding an object placed in a region where sensor data cannot be acquired (e.g., a blind spot) in the field of view of at least one sensor included in the second device. For example, when a running vehicle enters a child protection zone, the vehicle may receive, from an infrastructure device placed in the child protection zone, sharing data including sensor data acquired from a sensor of the infrastructure device in order to acquire information regarding an object not included in sensor data acquired from a sensor placed in the vehicle. 5.2. 
Various Embodiments of Processing and Using Sensor Data and Sharing Data 5.2.1. Case in which Set of Point Data is Included in Sharing Data In a data sharing system according to an embodiment, a first device may transmit sharing data including a set of point data acquired from a sensor to a second device. In this case, the second device may process the received set of point data in the same scheme as described in Section 3.4.1.1. For example, referring toFIGS.36to38again, an infrastructure device700may transmit sharing data3000including a first set of point data3100to a first vehicle122. For convenience of description, when the elements shown inFIGS.36to38correspond to the elements described in Section 5.2.1, the infrastructure device700described with reference toFIGS.36to38may correspond to the first device described in Section 5.2.1, and the first vehicle122described with reference toFIGS.36to38may correspond to the second device described in Section 5.2.1. In this case, the controller of the first vehicle122may acquire information regarding an object included in a plurality of sets of point data using a second set of point data2100acquired from a sensor placed in the first vehicle122and a first set of point data3100included in the sharing data acquired from the infrastructure device700. 5.2.2. Case in which Property Data is Included in Sharing Data. Referring toFIG.39again, the sharing data3000may include property data of a subset of point data representing at least a portion of an object. In this case, the property data may include center position information, size information, shape information, movement information, identification information, etc., but the present invention is not limited thereto. 5.2.2.1. Processing of Sharing Data and Aligning of Coordinate System According to Embodiment FIG.75is a flowchart illustrating a scheme of processing property data included in sharing data according to an embodiment. Referring toFIG.75, a controller of a vehicle may acquire a first set of point data through at least one sensor placed in the vehicle (S5055). Also, the controller of the vehicle may determine first property data of a first subset of point data included in the first set of point data (S5056). Also, the controller of the vehicle may generate first standard property data on the basis of the first property data (S5057). Also, a first device may acquire a second set of point data through at least one sensor placed in the first device (S5058). Also, a controller of the first device may determine second property data of a second subset of point data included in the second set of point data (S5059). Also, the controller of the first device may transmit sharing data including the second property data to the vehicle (S5060). Also, the controller of the vehicle may generate second standard property data using the second property data received from the first device (S5061). Also, the controller of the vehicle may control the vehicle on the basis of the first standard property data and the second standard property data (S5062). Hereinafter, the operations described with reference toFIG.75will be described in detail. 5.2.2.1.1. Acquisition of Set of Point Data and Property Data FIG.76is a diagram showing a situation in which a vehicle and an infrastructure device acquire sensor data to perform data sharing according to an embodiment. 
Referring toFIG.76, a vehicle140and an infrastructure device700may acquire, through at least one sensor, sensor data including information regarding at least one object placed in the field of view of the sensor. In detail, a controller of the vehicle140may acquire a first set of point data through at least one sensor placed in the vehicle and may determine first property data of a first subset of point data representing at least a portion of a building500included in the first set of point data (S5055, S5056). In this case, the first set of point data may not include information regarding a pedestrian800covered by the building500. Also, the first property data (see2208inFIG.77) may include center position information, size information, shape information, and the like of the first subset of point data, but the present invention is not limited thereto. Also, considering the location of the infrastructure device700, the infrastructure device700may measure the pedestrian800and the building500using at least one sensor. In this case, the infrastructure device700may acquire a second set of point data through at least one sensor placed in the infrastructure device700, and a controller of the infrastructure device700may determine second property data of a second subset of point data representing at least a portion of the pedestrian800included in the second set of point data. Also, the second set of point data may include a third subset of point data representing at least a portion of the building500. In this case, since the second subset of point data represents at least a portion of the pedestrian800not included in the first set of point data, the infrastructure device700may transmit the second subset of point data or the second property data of the second subset of point data to the vehicle140in order to prevent the risk of collision with the pedestrian800that may occur while the vehicle is traveling. 5.2.2.1.2. Generation of Standard Property Data A set of point data and property data included in sensor data acquired from at least one sensor may be shown in a coordinate system based on any origin. In this case, the origin may correspond to the position of the sensor that has acquired the set of point data and the property data. For example, the origin may correspond to the optical origin of a LiDAR device that has acquired the sensor data, but the present invention is not limited thereto. FIG.77is a diagram illustrating a method in which a controller of a vehicle shows first property data and first standard property data in a first local coordinate system and a global coordinate system, respectively, according to an embodiment. Referring toFIG.77, first property data2208may be shown in a first local coordinate system9100based on a first origin O1. However, the present invention is not limited thereto, and the first set of point data and the first subset of point data may also be shown in the first local coordinate system9100. As a specific example, when the first property data2208includes center position information of the first subset of point data, the center position coordinates of the first subset of point data included in the center position information may be shown in the first local coordinate system9100. In this case, the first origin O1may correspond to the position of the sensor that has acquired the first set of point data.
For example, when the vehicle140acquires the first set of point data through a LiDAR device, the first origin O1may correspond to the optical origin of the LiDAR device. Also, the first origin O1may correspond to the position of the vehicle140. For example, a controller of the vehicle140may set the first origin O1on the basis of GPS position information of the vehicle140. Also, the first origin O1may correspond to the position of the center of gravity of the vehicle140, the position of the center of gravity of the sensor, or the like, but the present invention is not limited thereto. Also, referring toFIG.77again, the controller of the vehicle140may generate first standard property data2501on the basis of the first property data2208(S5057). Here, the standard property data represents data for matching the positions of various pieces of property data to a single coordinate system, and the first standard property data2501generated based on the first property data2208and the second standard property data (see3502inFIG.78) generated based on second property data (see3202inFIG.78) may have the same origin. In an example ofFIG.77, the first standard property data2501may be shown in the global coordinate system9200based on a second origin O2. In detail, the controller of the vehicle140may generate the first standard property data2501by aligning the first property data2208shown in the first local coordinate system9100with the global coordinate system9200. In this case, the controller of the vehicle140may align the first local coordinate system9100with the global coordinate system9200on the basis of the scheme described in Section 3.4.2. However, the present invention is not limited thereto, and the controller of the vehicle140may set the first local coordinate system9100as a global coordinate system. In this case, the origin of the first local coordinate system9100may be the same as the origin of the global coordinate system. In other words, when the first local coordinate system9100is set as a global coordinate system, the position of the second origin O2may match the position of the first origin O1. More specifically, the controller of the vehicle140may set the first local coordinate system9100as a global coordinate system based on the first origin O1without changing the position of the origin of the first local coordinate system9100. As a specific example, when the first property data2208includes center position information of the first subset of point data, the controller of the vehicle140may show, in the global coordinate system9200, the center position coordinates of the first subset of point data included in the center position information. Also, the global coordinate system9200may include a predetermined origin. In this case, the predetermined origin may refer to the origin of the coordinate system based on GPS position information. Also, the second origin O2may correspond to the optical origin of a LiDAR device included in the vehicle140. Also, when the first local coordinate system9100is set as a global coordinate system, the position of the second origin O2may match the position of the first origin O1. FIG.78is a diagram illustrating a method in which a controller of a vehicle generates second standard property data on the basis of second property data shown in a second local coordinate system according to an embodiment.
Referring toFIG.78, a controller of the infrastructure device may show second property data3202and third property data3203in a second local coordinate system9300based on a third origin O3. Here, the second local coordinate system9300has a different origin from the first local coordinate system, and the second local coordinate system9300and the first local coordinate system9100may have the same coordinate system type (e.g., the second local coordinate system9300and the first local coordinate system9100are Cartesian coordinate systems) and may also have different coordinate system types (e.g., the second local coordinate system9300is a polar coordinate system, and the first local coordinate system9100is a Cartesian coordinate system). Also, the second property data3202may be determined based on a second subset of point data representing at least a portion of the pedestrian800ofFIG.76, and the third property data3203may be determined based on a third subset of point data representing at least a portion of the building500ofFIG.76. However, the present invention is not limited thereto, and the second set of point data, the second subset of point data, and the third subset of point data may be shown in the second local coordinate system9300. In this case, the third origin O3may correspond to the position of the sensor that has acquired the second set of point data. For example, when the infrastructure device700acquires the second set of point data through a LiDAR device, the third origin O3may correspond to the optical origin of the LiDAR device. Also, the third origin O3may correspond to the position of the infrastructure device700. For example, the controller of the infrastructure device700may set the third origin O3on the basis of GPS position information of the infrastructure device700. Also, the third origin O3may correspond to the position of the center of gravity of the infrastructure device700, the position of the center of gravity of the sensor, or the like, but the present invention is not limited thereto. Also, the controller of the infrastructure device700may transmit sharing data including the second property data3202to the vehicle140(S5060). In this case, the second property data3202may be determined based on a second subset of point data which is included in the second set of point data and which represents at least a portion of a pedestrian not included in the first set of point data. Also, the content of the sharing data may further include the third property data3203. In this case, the third property data3203may be determined based on a third subset of point data representing at least a portion of a building included in the first set of point data and the second set of point data. In some embodiments, it will be appreciated that the content of the sharing data may not include the third property data3203. However, the present invention is not limited thereto, and the content of the sharing data may further include basic information of the infrastructure device700or the like. Also, referring toFIG.78again, the controller of the vehicle140may generate second standard property data3502on the basis of the second property data3202included in the sharing data received from the infrastructure device700(S5061). In this case, the second standard property data3502may be shown in a global coordinate system9200based on the second origin O2.
In detail, the controller of the vehicle140may generate the second standard property data3502by aligning the second property data3202shown in the second local coordinate system9300with the global coordinate system9200in which the first standard property data2501is shown. In this case, the controller of the vehicle140may align the second local coordinate system9300with the global coordinate system9200on the basis of the scheme described in Section 3.4.2. For example, when the first local coordinate system9100is set as the global coordinate system9200, the controller of the vehicle may generate the second standard property data3502by aligning the received second property data3202with the first local coordinate system9100. Also, in the method of processing and using sharing data according to an embodiment, which is shown inFIG.76, the controller of the vehicle may determine whether an object represented by at least one piece of property data included in the content of the sharing data is the same as an object represented by a first set of point data. For example, an object represented by third property data3203included in the sharing data received from the infrastructure device700may be the same as the building500represented by the first property data. In this case, the controller of the vehicle140may generate third standard property data3503on the basis of the third property data3203. In this case, the third standard property data3503may be shown in the global coordinate system9200based on the second origin O2. In detail, the controller of the vehicle140may generate the third standard property data3503by aligning the third property data3203shown in the second local coordinate system9300with the global coordinate system9200in which the first standard property data2501is shown. In this case, the controller of the vehicle140may align the second local coordinate system9300with the global coordinate system9200on the basis of the scheme described in Section 3.4.2. Also, the controller of the vehicle140acquires the third property data3203or the third standard property data3503for the same building500, and thus it is possible to implement a higher temporal resolution for the building500. In detail, by acquiring the third property data3203or the third standard property data3503from the infrastructure device700, it is possible to reinforce information regarding the building500that cannot be acquired in a certain time interval according to the frame rate of the LiDAR device placed in the vehicle140. However, the present invention is not limited thereto, and the controller of the vehicle140may not receive the third property data3203from the infrastructure device700. In detail, since a first set of point data acquired by the vehicle140through a sensor includes a first subset of point data representing at least a portion of the building500, the controller of the vehicle140may not receive a third subset of point data representing the same object and the third property data3203determined based on the third subset of point data from the infrastructure device700. Also, the controller of the vehicle140may not store the third property data3203received from the infrastructure device700. Also, when the third property data3203is received from the infrastructure device700, the controller of the vehicle140may generate the third standard property data3503without generating the first standard property data2501. 
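For illustration only, the alignment of property data with the global coordinate system may be sketched as a simple two-dimensional rigid transform, assuming that the pose (origin position and heading) of each local coordinate system in the global coordinate system is known; the function name to_global and the numerical pose values below are assumptions, and the actual alignment scheme is the one described in Section 3.4.2.

from math import cos, sin, radians

def to_global(center_local, origin_global, yaw_deg):
    """Transform a center position expressed in a local coordinate system into
    the global coordinate system, given the local frame's origin and heading
    in the global frame (a simple 2D rigid transform, assumed for this sketch)."""
    c, s = cos(radians(yaw_deg)), sin(radians(yaw_deg))
    x, y = center_local
    return (origin_global[0] + c * x - s * y,
            origin_global[1] + s * x + c * y)

# First property data is already expressed relative to the vehicle (first origin O1),
# so the vehicle frame is used here as the global coordinate system (identity transform).
first_standard_center = to_global((12.0, 3.0), origin_global=(0.0, 0.0), yaw_deg=0.0)

# Second property data arrives in the infrastructure device's local coordinate system
# (third origin O3); assumed pose of O3 in the global frame: (25.0, -4.0), 90 degrees.
second_standard_center = to_global((5.0, 2.0), origin_global=(25.0, -4.0), yaw_deg=90.0)

print(first_standard_center)    # (12.0, 3.0)
print(second_standard_center)   # approximately (23.0, 1.0)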
Also, the controller of the vehicle140may determine whether a sensor placed in the vehicle140is abnormal on the basis of the first standard property data2501and the third standard property data3503. In detail, when the position information of the building500included in the third standard property data3503generated through the above-described coordinate system alignment method is different from the position information of the building500included in the first standard property data2501, the controller of the vehicle140may determine that the sensor placed in the vehicle140is unfastened. Also, when it is determined that the sensor is unfastened, the controller of the vehicle140may transmit a notification indicating that the sensor is unfastened to an occupant. In this case, the notification may be displayed to the occupant through an infotainment system. However, the present invention is not limited thereto, and the notification may be transmitted to the occupant through a scheme known to those skilled in the art, such as sound. Also, the sensor data processing method according to an embodiment is not limited to the operations shown inFIG.75, and the controller of the infrastructure device may generate second standard property data on the basis of second property data and may transmit sharing data including the second standard property data to the vehicle. However, the present invention is not limited thereto, and when the controller of the vehicle receives a high-definition map including the second property data, the controller of the vehicle may set the second local coordinate system as a global coordinate system. In this case, in order to match sensor data acquired from the sensor placed in the vehicle to the high-definition map, the controller of the vehicle may align a first local coordinate system in which the first property data is shown with the global coordinate system. 5.2.2.1.3. Vehicle Control Using Standard Property Data—Path Generation (Path Planning) A controller of a vehicle may control the vehicle on the basis of a plurality of pieces of standard property data. However, the present invention is not limited thereto, and the controller of the vehicle may control the vehicle on the basis of at least one of a set of point data, a subset of point data, and property data. For example, the controller of the vehicle may control the vehicle using sensor data or sharing data as described in Section 2.3. or Section 3.5. As a specific example, the controller of the vehicle may match the plurality of pieces of standard property data to a high-definition map, control the speed and direction of the vehicle, or control the path of the vehicle. In this case, the path of the vehicle may include a global path and a local path. Here, the global path may refer to a path to a destination of the vehicle which is generated based on GPS position information, but the present invention is not limited thereto. Also, the local path may refer to a path that is generated based on sensor data acquired from a sensor placed in the vehicle or sharing data, but the present invention is not limited thereto. As an example, one global path may correspond to a plurality of local paths and also may be generated by adding a plurality of local paths. However, the present invention is not limited thereto, and a global path and a local path may be formed independently. Also, the global path or the local path may include the direction of the vehicle, the speed of the vehicle, etc. 
In detail, the global path or the local path may include the position of the vehicle, a direction in which the vehicle is to travel, the traveling speed of the vehicle, etc., but the present invention is not limited thereto. FIG.79is a diagram illustrating a global path according to an embodiment. Referring toFIG.79, the controller of the vehicle may generate and show a global path8000in a high-definition map. In this case, the controller of the vehicle may control the vehicle to travel along the global path8000. In this case, the controller of the vehicle may generate a global path8000along which the vehicle is to travel on the basis of the location and destination of the vehicle before the vehicle starts to travel. Also, when an input for an origin and a destination of an occupant is received, the controller of the vehicle may generate a global path8000on the basis of GPS position information of the origin and the destination. Also, the controller of the vehicle may reflect traffic information between the position of the vehicle and the destination of the vehicle while generating the global path8000. As an example, the controller of the vehicle may set a path that allows the vehicle to travel from the position of the vehicle to the destination of the vehicle in the shortest time as the global path8000. As another example, the controller of the vehicle may set a path that allows the vehicle to travel from the current position of the vehicle to the destination in the shortest distance as the global path8000. Also, the global path8000may not include a detailed path in units of lanes. In detail, the global path8000may not include detailed paths that allow the controller of the vehicle to control the vehicle to change lanes. In some embodiments, it will be appreciated that the global path8000may include detailed paths in units of lanes. FIG.80is a diagram illustrating a local path and a modified path according to an embodiment. Referring toFIG.80, the controller of the vehicle may generate a local path8100along which the vehicle is to travel and then may display the local path8100in a high-definition map1420. More specifically, the controller of the vehicle may generate a local path8100related to at least a portion of the global path on the basis of sensor data for at least one object present in the field of view of at least one sensor placed in the vehicle traveling along the global path. However, the present invention is not limited thereto, and the controller of the vehicle may generate a local path8100on the basis of the sensor data and sharing data acquired from other devices. More specifically, the controller of the vehicle may generate a local path8100on the basis of sensor data for at least one object present in the field of view of a sensor placed in the vehicle traveling along the global path and sensor data acquired from a sensor placed in other devices. For example, when a vehicle located at a first point sets a second point as a destination, a controller of the vehicle may generate a global path that allows the vehicle to travel from the first point to the second point and may generate a local path8100on the basis of sensor data and sharing data which are acquired while the vehicle is traveling along the global path. Also, the local path8100may include a detailed path in units of lanes. In detail, the local path8100may include a detailed path that allows the controller of the vehicle to change lanes to travel on the next lane. 
Also, the local path8100may include an available movement region in a visible region of a driver. Also, the local path8100may include a region including at least one object present in the field of view of a sensor placed in the vehicle. Also, when the local path8100is generated based on sensor data acquired while the vehicle is traveling and sharing data received from other devices, the local path8100may include both of a region including at least one object in the field of view of the sensor placed in the vehicle and a region including at least one object out of the field of view of the sensor placed in the vehicle. Also, the local path8100may include a modified path8110. In detail, when the controller of the vehicle detects an obstacle threatening the vehicle on the global path or the local path of the vehicle, the controller of the vehicle may generate the modified path8110. In this case, the controller of the vehicle may set the modified path8110as a local path along which the vehicle is to travel. The modified path8110will be described in detail below (in Section 5.2.2.2.2). 5.2.2.2. Processing of Sharing Data and Generation of Path According to Embodiment According to an embodiment, a vehicle that has received sharing data may generate a path along which the vehicle is to travel on the basis of the sharing data and sensor data acquired from a sensor placed in the vehicle. FIG.81is a flowchart illustrating a method of generating or modifying, by a vehicle, a path on the basis of sharing data according to an embodiment. Referring toFIG.81, a controller of a vehicle may acquire a first set of point data through at least one sensor placed in the vehicle (S5063). Also, the controller of the vehicle may determine first property data on the basis of at least one subset of point data included in the first set of point data (S5064). Also, the controller of the vehicle may generate a local path along which the vehicle is to travel on the basis of at least a portion of the first set of point data, at least one subset of point data or the first property data (S5065). Also, a controller of a first device may acquire a second set of point data through at least one sensor placed in the first device (S5066). Also, the controller of the first device may determine second property data on the basis of the second subset of point data included in the second set of point data (S5067). Also, the controller of the first device may transmit sharing data including the second property data to the vehicle (S5068). Also, the vehicle may generate a modified path on the basis of the second property data and at least one of the first set of point data, the first property data, or the local path (S5069). Hereinafter, the operations of the method in which the vehicle generates or modifies the path on the basis of the sharing data according to an embodiment will be described in detail. 5.2.2.2.1. Generation and Sharing of Sensor Data and Sharing Data FIG.82is a diagram showing a situation in which a first vehicle travels along a path generated based on sensor data and sharing data according to an embodiment. Referring toFIG.82, a controller of a first vehicle141may acquire a first set of point data through a sensor placed in the first vehicle141, and a controller of an infrastructure device700may acquire a second set of point data through a sensor placed in the infrastructure device700(S5063, S5066). 
In this case, the first set of point data may include a first subset of point data representing at least a portion of a building500, and the controller of the first vehicle141may determine first property data on the basis of the first subset of point data (S5064). In this case, the first set of point data may not include information regarding a pedestrian800that is covered by the building500and thus placed out of the field of view of the sensor. Also, the first property data may include center position information, size information, movement information, shape information, and the like of the building500, but the present invention is not limited thereto. Also, the second set of point data may include a second subset of point data representing at least a portion of the pedestrian800, and the controller of the infrastructure device700may determine second property data on the basis of the second subset of point data (S5067). In this case, the second property data may include center position information, size information, movement information, shape information, and the like of the pedestrian800, but the present invention is not limited thereto. However, the present invention is not limited thereto, and the second set of point data may include a third subset of point data representing at least a portion of the building500, and the controller of the infrastructure device700may determine third property data on the basis of the third subset of point data. Also, the controller of the infrastructure device700may generate sharing data on the basis of the second property data and transmit the sharing data to the first vehicle141(S5068). In detail, the infrastructure device700may transmit, to the first vehicle141, second property data generated based on a second subset of point data representing the pedestrian800which is not included in the first set of point data. In this case, the second property data may include tracking information of the pedestrian800predicted according to the movement direction and movement speed of the pedestrian800. In this case, the controller of the first vehicle141may compute the probability of collision between the first vehicle141and the pedestrian800on the basis of the tracking information. Also, the content of the sharing data may include the third property data. In some embodiments, the content of the sharing data may not include the third property data. 5.2.2.2.2. Generation and Modification of Local Path The controller of the first vehicle141may generate a local path8100on the basis of the sensor data acquired through the sensor placed in the first vehicle141(S5065). In this case, the first vehicle141may generate the local path8100before receiving the sharing data from the infrastructure device700. In some embodiments, the first vehicle141may generate the local path8100after receiving the sharing data. As an example, the controller of the first vehicle141may generate the local path8100on the basis of the first property data. In detail, the controller of the first vehicle141may control the vehicle along a global path and may generate a local path8100on the basis of sensor data for an object present in the field of view of the sensor placed in the vehicle. As a specific example, the controller of the first vehicle141may generate a local path8100for preventing collision between the vehicle and the building500on the basis of the sensor data (e.g., a first subset of point data or first property data) for the building500.
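As a purely illustrative sketch of generating such a local path from property data (the waypoint spacing, the clearance value, and the fixed lateral offset below are assumptions, not the claimed method), waypoints of the planned path that would pass too close to the center position of an object can be shifted sideways:

def generate_local_path(waypoints, obstacle_center, obstacle_half_size,
                        clearance=1.5, lateral_offset=3.0):
    """Return a lane-level local path: waypoints that would pass within
    (half size + clearance) of the obstacle are shifted laterally."""
    safe = obstacle_half_size + clearance
    local_path = []
    for (x, y) in waypoints:
        if abs(x - obstacle_center[0]) <= safe and abs(y - obstacle_center[1]) <= safe:
            # Shift the waypoint sideways (e.g., toward the neighboring lane).
            local_path.append((x, y + lateral_offset))
        else:
            local_path.append((x, y))
    return local_path

# Example: a straight path past a building whose center position and size
# are taken from the first property data.
path = [(float(x), 0.0) for x in range(0, 50, 10)]
print(generate_local_path(path, obstacle_center=(20.0, 1.0), obstacle_half_size=2.0))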
It will be appreciated that in some embodiments, the controller of the first vehicle141may generate a local path on the basis of a first set of point data and a plurality of subsets of point data which are included in the sensor data. Also, the controller of the first vehicle141may generate a modified path8110on the basis of the sensor data and the sharing data (S5069). In detail, in order to avoid collision with an object not included in the sensor data, the controller of the first vehicle141may generate a modified path8110on the basis of sensor data acquired from the sensor placed in the first vehicle141and sharing data received from the infrastructure device700. In this case, the content of the sharing data may include information regarding the object not included in the first set of point data. For example, the content of the sharing data may include a second subset of point data representing at least a portion of the pedestrian800not included in the first set of point data or second property data of the second subset of point data, but the present invention is not limited thereto. Also, the controller of the first vehicle141may determine whether to generate the modified path8110on the basis of the second property data before generating the modified path8110. As an example, when the local path8100includes at least a portion of a predetermined region where the pedestrian800is located, the controller of the first vehicle141may generate the modified path8110. In other words, when the local path8100and the predetermined region where the pedestrian800is located partially overlap each other, the controller of the first vehicle141may generate the modified path8110. In this case, the predetermined region may be preset by the controller of the first vehicle141. However, the present invention is not limited thereto, and the predetermined region may be set based on the speed of the first vehicle141, the distance to the pedestrian, or the like. Also, the modified path8110may not overlap the predetermined region where the pedestrian800is located. It will be appreciated that in some embodiments, the modified path8110may partially overlap the predetermined region where the pedestrian800is located. As another example, the controller of the first vehicle141may compute the probability of collision between the first vehicle141and the pedestrian800on the basis of second property data including movement information of the pedestrian800not included in the first set of point data and may determine whether to generate the modified path8110according to the computed probability. More specifically, the controller of the first vehicle141may determine whether to modify the path of the vehicle on the basis of the probability of movement of the first vehicle141predicted based on the local path8100of the first vehicle and the probability of movement of the pedestrian800predicted based on the second property data. As a specific example, the controller of the first vehicle141may determine whether to generate the modified path8110on the basis of a collision probability map, which is generated based on the local path8100and the second property data and which is updated along with the movement of the first vehicle141and the pedestrian800. FIG.83is a diagram illustrating a method of generating a modified path on the basis of a collision probability map generated by a controller of a first vehicle according to an embodiment.
Referring toFIG.83, a controller of a vehicle may generate a collision probability map that represents the probability of movement of a pedestrian and the probability of movement of the vehicle traveling along a local path8100over time. In this case, when a region8200having a high probability of collision between the vehicle and the pedestrian is shown in the collision probability map while the vehicle is traveling along the local path8100, the controller of the vehicle may determine to generate the modified path8110so as to avoid collision and may generate the modified path8110. However, the present invention is not limited thereto, and the controller of the vehicle may determine whether to generate the modified path according to whether a blind spot is present in sensor data acquired from a sensor placed in the vehicle. In detail, when a blind spot is detected in the scheme described in Section 4.2.5, a controller of a vehicle traveling along a local path may generate a modified path to avoid possible dangers due to the presence of the blind spot. For example, when a blind spot is detected, the controller of the vehicle may generate a modified path to decelerate the vehicle or change lanes, but the present invention is not limited thereto. 5.2.2.2.3. Various Examples of Modified Path Also, a controller of a vehicle may generate an optimal modified path to avoid collision between the vehicle and other objects. FIG.84is a diagram illustrating various examples of a modified path according to an embodiment. Referring toFIG.84, a controller of a second vehicle142traveling along a global path and a local path may generate at least one modified path in order to avoid a pedestrian800on the basis of the movement speed, movement direction, position, and the like of the second vehicle142. For example, the at least one modified path may include a first modified path8111for stopping the second vehicle142and a second modified path8112for changing at least a portion of the local path, but the present invention is not limited thereto. In detail, the controller of the second vehicle142may receive information (e.g., property data) regarding the pedestrian800which is not included in sensor data acquired from a sensor placed in the second vehicle142and which is included in sharing data received from other devices. In this case, the controller of the second vehicle142may generate a first modified path8111for stopping the second vehicle142in order to prevent collision between the second vehicle142and the pedestrian800. In this case, the first modified path8111may be generated to stop the second vehicle142in a predetermined time or stop the second vehicle for a predetermined time. Also, the controller of the second vehicle142may generate a second modified path8112which allows the second vehicle142to avoid the pedestrian800by changing at least a portion of the local path of the second vehicle142so as to prevent collision between the second vehicle142and the pedestrian800. However, the present invention is not limited thereto, and the controller of the second vehicle142may generate a modified path by changing at least some of the position, speed, and direction of the second vehicle142which are included in the local path. When the controller of the vehicle generates a modified path including the position, speed, or direction of the vehicle, the controller of the vehicle may set the modified path as a local path and may control the vehicle on the basis of the local path. 
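The two decision criteria described above, namely checking whether the local path8100enters the predetermined region around the pedestrian800and inspecting a collision probability map such as the one ofFIG.83, may be sketched as follows. The list-of-waypoints representation, the speed-dependent margin, the grid resolution and the threshold are illustrative assumptions rather than the disclosed implementation, which would typically propagate full probability distributions.

    import math

    def path_enters_region(local_path, pedestrian_center, vehicle_speed_mps,
                           base_margin_m=1.5, speed_factor_s=0.8):
        # First criterion: the predetermined region is modeled as a circle whose
        # radius grows with vehicle speed (an assumption for this sketch).
        margin = base_margin_m + speed_factor_s * vehicle_speed_mps
        return any(math.hypot(x - pedestrian_center[0], y - pedestrian_center[1]) <= margin
                   for (x, y) in local_path)

    def build_collision_map(vehicle_positions, pedestrian_positions, cell_m=1.0):
        # Second criterion: count, per grid cell, how often the vehicle and the
        # pedestrian are predicted to occupy the same cell at the same time step.
        counts = {}
        for (vx, vy), (px, py) in zip(vehicle_positions, pedestrian_positions):
            v_cell = (int(vx // cell_m), int(vy // cell_m))
            p_cell = (int(px // cell_m), int(py // cell_m))
            if v_cell == p_cell:
                counts[v_cell] = counts.get(v_cell, 0) + 1
        return counts

    def needs_modified_path(collision_map, threshold=1):
        # A region8200-like condition: some cell is shared at least `threshold` times.
        return any(count >= threshold for count in collision_map.values())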
Also, the modified path may include a path obtained by modifying at least a portion of the global path. In detail, when information indicating that a specific event has occurred on the global path of the vehicle is received, the controller of the vehicle may generate a modified path reflecting the information indicating that the event has occurred and may set the modified path as a new global path. As an example, the controller of the vehicle may control the vehicle along a first global path which is generated based on the current position of the vehicle and the position of the destination of the vehicle. In this case, when the controller of the vehicle receives, from another device, sharing data including information related to a traffic event that has occurred at a specific time related to the first global path, the vehicle may generate a modified path such that the vehicle can avoid the region where the traffic event has occurred. In this case, the controller of the vehicle may set the modified path as a second global path and control the vehicle along the second global path. As described above, the modified path may refer to a path obtained by modifying at least a portion of the global path or the local path. However, the present invention is not limited thereto, and the modified path may refer to a path for suddenly stopping the vehicle. Also, the controller of the vehicle may set the modified path as a new global path for the vehicle or a new local path for the vehicle. 5.2.3. Case in which Information Related to Traffic Event is Included in Sharing Data The content of the sharing data according to an embodiment may include information related to a traffic event such as a traffic accident. In this case, the traffic event-related information may refer to information indicating that at least one object is associated with a traffic event. However, the present invention is not limited thereto, and the traffic event-related information may refer to a message that requests information regarding the traffic event or the like. In this case, a device which has received the traffic event-related information may display data (e.g., an event occurrence region) included in the traffic event-related information in a high-definition map. For example, in order to notify an occupant of information related to the traffic event, a controller of a vehicle that has received the traffic event-related information may display a region where the traffic event has occurred in a high-definition map. Also, the device which has received the traffic event-related information may change property data (e.g., class information) of objects related to the event using the traffic event-related information. As a specific example, a controller of a first vehicle may determine that class information of a second vehicle included in sensor data acquired through at least one sensor included in the first vehicle is “vehicle.” In this case, when the second vehicle is an object related to a traffic event, the controller of the first vehicle may receive information indicating that the second vehicle is related to the traffic event from the second vehicle. In this case, the controller of the first vehicle may change class information of the second vehicle to “accident,” “accident vehicle,” “accident site,” “accident point,” or the like, but the present invention is not limited thereto. Also, the controller of the first vehicle may control the first vehicle on the basis of the changed class information of the second vehicle.
For example, the controller of the first vehicle may generate a local path not including a region related to the second vehicle related to the traffic event, but the present invention is not limited thereto. 6. Various Applications Using Sensor Data and Sharing Data The method of selectively sharing and processing the sensor data and the sharing data according to the above embodiment may be used in various applications. As an example, the method of selectively sharing and processing the sensor data and the sharing data may be used for a black box (a dash cam). In this case, a vehicle including a black box using a LiDAR may store a set of point data acquired using the LiDAR in a memory of the black box or a memory included in the vehicle. However, as described above, in order to solve a storage capacity issue of a memory and a privacy invasion issue caused by intensity information of an object acquired from a LiDAR, the controller of the vehicle may selectively store the set of point data. For example, the controller of the vehicle may store a set of point data other than the intensity information of the object, but the present invention is not limited thereto. The controller of the vehicle may generate and store privacy protection data obtained by partially processing a subset of point data representing at least a portion of the object. Also, when a vehicle is related to a traffic event such as a traffic accident, the vehicle may receive sharing data including privacy protection data according to class information of the object related to the traffic event from a nearby device or may selectively receive only data related to a movable object as described above. In this case, the controller of the vehicle may reconfigure the traffic event on the basis of the sharing data. Also, as described above, a vehicle located near the region where the traffic event has occurred may receive a request for sensor data related to the traffic event from a server, and a controller of the vehicle may transmit sharing data related to the traffic event to the server in response to the request. In this case, the server may reconfigure the traffic event on the basis of the sharing data related to the traffic event. Also, as described above, a device which has received the sharing data related to the traffic event may match a plurality of pieces of data by aligning the coordinate systems of the sharing data and the sensor data with a single coordinate system. In this case, the device may reconfigure the traffic event by listing, in chronological order, sensor data and sharing data which are acquired for a predetermined time before and after the traffic event. As another example, as described above, the method of selectively sharing and processing the sensor data and the sharing data may be used to detect a blind spot which refers to a region where information cannot be acquired from a sensor placed in the vehicle. In detail, in order to acquire information regarding an object that is placed in the field of view of a sensor placed in a vehicle and that is covered by another object and thus is not included in sensor data, a controller of the vehicle may receive sharing data including the information regarding the object not included in the sensor data from other devices. In this case, the device which has transmitted the sharing data to the vehicle may selectively generate the content of the sharing data on the basis of class information of an object included in the sensor data.
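As a non-limiting sketch of the selective black-box storage described above, the record written to the memory might omit intensity information and, for classes in which personal information must be protected, keep only processed property data. The class labels, record layout and field names below are assumptions made for illustration only.

    PROTECTED_CLASSES = {"human", "license plate", "id"}  # assumed labels

    def to_black_box_record(subset_points, property_data):
        # Drop intensity values to save memory and avoid exposing surface detail.
        # For protected classes, store privacy protection data (processed property
        # data such as center, size and a skeleton/template shape) instead of points.
        if property_data.get("class") in PROTECTED_CLASSES:
            return {"property": property_data, "points": None}
        stripped = [(p["x"], p["y"], p["z"]) for p in subset_points]  # no intensity
        return {"property": property_data, "points": stripped}

Records stored in this form can later be listed in chronological order, together with received sharing data, when the traffic event is reconfigured as described above.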
Also, the device may selectively generate the content of the sharing data according to whether an event related to the vehicle has occurred. Also, a vehicle that has received the sharing data may match data on an object located in the blind spot and sensor data acquired by the sensor placed in the vehicle through coordinate system alignment. In this case, the controller of the vehicle may control the vehicle on the basis of the matched sensor data and sharing data. As still another example, the method of selectively sharing and processing the sensor data and the sharing data may be used to detect an available parking space of a vehicle as described above. As a specific example, when a vehicle enters a parking lot, the vehicle may receive information regarding the available parking space from an infrastructure device placed in the parking lot. In this case, the controller of the vehicle may autonomously park the vehicle in the available parking space using an autonomous parking system and a system for communication with the infrastructure device. Section 6 illustrates that the above descriptions in Sections 1 to 5 are applicable to some applications, and it will be appreciated that the descriptions in Sections 1 to 5 except for the description in Section 6 are also applicable to the applications. Also, it will be appreciated that the above descriptions in Sections 1 to 5 are applicable to applications (e.g., a traffic control system and any mode of transportation (drone, ship, train, etc.) other than vehicles) other than the applications described in Section 6. The method according to an embodiment may be implemented in the form of program instructions executable by a variety of computer means and may be recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like alone or in combination. The program instructions recorded on the medium may be designed and configured specifically for an embodiment or may be publicly known and usable by those who are skilled in the field of computer software. Examples of the computer-readable medium include a magnetic medium, such as a hard disk, a floppy disk, and a magnetic tape, an optical medium, such as a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), etc., a magneto-optical medium such as a floptical disk, and a hardware device specially configured to store and perform program instructions, for example, a read-only memory (ROM), a random access memory (RAM), a flash memory, etc. Examples of the computer instructions include not only machine language code generated by a compiler, but also high-level language code executable by a computer using an interpreter or the like. The hardware device may be configured to operate as one or more software modules in order to perform the operations of an embodiment, and vice versa. Although the present disclosure has been described with reference to specific embodiments and drawings, it will be appreciated that various modifications and changes can be made from the disclosure by those skilled in the art. For example, suitable results may be achieved if the described techniques are performed in a different order and/or if components in a described system, architecture, device, or circuit are combined in a different manner and/or replaced or supplemented by other components or their equivalents. Therefore, other implementations, embodiments, and equivalents are within the scope of the following claims. | 309,798 |
11858494 | DESCRIPTION OF EMBODIMENTS In order to make the purposes, the technical solutions and the advantages of the embodiments of the present disclosure more clearly, the following clearly and completely describes the technical solutions of the embodiments of the present disclosure with reference to the accompanying drawings of the embodiments of the present disclosure. Obviously, the described embodiments are simply part of embodiments of the present disclosure, rather than all of the embodiments. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without paying creative effort are within the protection scope of the present disclosure. The terms “first”, “second”, “third”, “fourth” and the like (if present) in the description, claims and the above drawings of the present disclosure are used to distinguish similar objects rather than to describe a specific sequence or order. It should be understood that the data used in this way may be interchanged in suitable situations, such that the embodiments of the present application described herein may be implemented in a sequence other than those illustrated or described herein. In addition, the terms “include” and “have” and any variations thereof are intended to cover a non-exclusive inclusion. For example, processes, methods, systems, products, or devices that include a series of steps or units are not necessarily limited to those steps or units clearly listed, but may include other steps or units that are not clearly listed or inherent to such processes, methods, products or devices. In the prior art, during the traveling of the unmanned vehicle, an existing automatic driving system is adopted to control the traveling of the unmanned vehicle, if there is an obstacle ahead at this time, the existing automatic driving system may not handle the situation, such that the unmanned vehicle continues traveling and collides with the obstacle. As a result, the safety of the unmanned vehicle during travelling is not high. In order to improve the safety of the unmanned vehicle during travelling, the embodiments of the present disclosure provide a method for processing vehicle driving mode switching, a vehicle and a server. A target switching reason is determined at first upon detecting that a driving mode of a vehicle is switched from unmanned driving to manned driving, and status information and/or traveling environment information of the vehicle corresponding to the target switching reason is acquired; then the status information and/or the traveling environment information, and the target switching reason are sent to a server such that the server analyzes the target switching reason, and the automatic driving system is improved continuously according to the analysis result. In this way, when encountering the target switching reason next time, the improved automatic driving system may avoid dangers caused by the target switching reason, thereby improving the safety of the unmanned vehicle during travelling. The technical solution of the present disclosure and how the technical solution of the present disclosure solves the above technical problem will be described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes will not be repeated in some embodiments. The embodiments of the present disclosure will be described below in conjunction with the accompanying drawings. 
The embodiments of the present disclosure provide a method for processing vehicle driving mode switching, a vehicle and a server, so as to improve the safety of the unmanned vehicle during travelling. FIG.1is a schematic flowchart of a method for processing vehicle driving mode switching provided by an embodiment of the present disclosure. As an example, please refer toFIG.1, a processing method for the driving mode switching of the vehicle may include: S101, the unmanned vehicle receives a triggering instruction. Where the triggering instruction is configured to indicate that a driving mode is switched from unmanned driving to manned driving. Illustratively, the triggering instruction may be sent by a safety supervisor on the unmanned vehicle, or certainly, it may also be sent by a user. Taking the safety supervisor as an example, when sending a triggering instruction, the safety supervisor may send the triggering instruction through a button on the unmanned vehicle, or send the triggering instruction through a virtual button on a screen of the unmanned vehicle, the triggering instruction may also be sent by a voice system of the unmanned vehicle of course, which may be specifically set based on actual needs. The embodiment of the present disclosure does not make further limitations herein on how to send the triggering instruction. For the safety supervisor on the unmanned vehicle, when a driving behavior of the unmanned vehicle is determined to be dangerous, the triggering instruction may be sent through the button on the unmanned vehicle, so that the unmanned vehicle switches the current driving mode of the unmanned vehicle from the unmanned driving to the manned driving after receiving the triggering instruction, that is, executes following S102. S102, the unmanned vehicle switches the driving mode of the vehicle from the unmanned driving to the manned driving according to the triggering instruction. After receiving the triggering instruction, the unmanned vehicle may switch the driving mode of the vehicle from the unmanned driving to the manned driving automatically. S103, the unmanned vehicle determines a target switching reason upon detecting that the driving mode of the vehicle is switched from the unmanned driving to the manned driving. Optionally, the switching reason includes at least one of the following reasons: being unable to avoid an obstacle, being about to crash, needing to slow down, needing to speed up, needing to stop, being about to violate a traffic rule or deviating from a traveling lane. Certainly, the switching reason may also include other dangerous reasons. The embodiment of the present disclosure simply takes the switching reason which may include at least one of the foregoing reasons as an example for illustration, but it does not mean that the embodiments of the present disclosure are limited thereto. As an example, the wording “at least one” may refer to one or more, and may be specifically set according to actual needs. Herein, the value of the “at least one” is not further limited in the embodiment of the present disclosure. It should be noted that, in an embodiment of the present disclosure, when the unmanned vehicle detects that the driving mode of the vehicle is switched from the unmanned driving to the manual driving, the target switching reason may be determined by at least two possible implementation methods as follows. 
In a possible implementation, the unmanned vehicle may display at least one switching reason corresponding to the driving mode upon detecting that the driving mode of the vehicle is switched from the unmanned driving to the manned driving. Correspondingly, the safety supervisor selects a switching reason among the at least one switching reason corresponding to the driving mode, and sends a selecting instruction to the unmanned vehicle, so that the unmanned vehicle may determine the switching reason selected by the safety supervisor as the target switching reason according to the selecting instruction, thereby determining the target switching reason. Optionally, in this possible implementation, when sending the selecting instruction, the safety supervisor may send the selecting instruction to the unmanned vehicle by voice, or send the selecting instruction to the unmanned vehicle by clicking the button on the screen. Certainly, the selecting instructions may also be sent to the unmanned vehicle through text. Herein, the embodiment of the present disclosure simply takes these three ways for sending the selecting instruction to the unmanned vehicle as an example for illustration, but it does not mean that the embodiments of the present disclosure are limited thereto. Illustratively, in this possible implementation, during the travelling of the unmanned vehicle, when discovering that there is an unavoidable obstacle ahead of the unmanned vehicle, the safety supervisor may send a triggering instruction through a button on the unmanned vehicle, such that when the unmanned vehicle receives the triggering instruction, the unmanned vehicle switches the current driving mode of the unmanned vehicle from the unmanned driving to the manned driving. At this time, after detecting that the driving mode of the unmanned vehicle is switched from the unmanned driving to the manned driving, the unmanned vehicle may display the following switching reasons to the safety supervisor on the screen of the vehicle: being unable to avoid an obstacle, being about to crash, needing to slow down, needing to speed up, needing to stop, being about to violate a traffic rule or deviating from a traveling lane. The safety supervisor may input a selecting instruction to the vehicle by clicking the virtual button corresponding to “being unable to avoid an obstacle”, so that the unmanned vehicle may determine the “being unable to avoid an obstacle” as the target switching reason according to the selecting instruction. In another possible implementation, upon detecting that the driving mode of the unmanned vehicle is switched from the unmanned driving to the manned driving, the unmanned vehicle may not need to display at least one switching reason corresponding to the driving mode, instead, the safety supervisor may send an instruction including the target switching reason directly, such that the unmanned vehicle receives the target switching reason directly from the safety supervisor, thereby determining the target switching reason. It should be noted that before the safety supervisor sends the instruction including the target switching reason, the target switching reason input by the safety supervisor should be normalized at first, so that the unmanned vehicle can recognize the target switching reason and further determine the target switching reason. 
Optionally, in this possible implementation, when sending the instruction including the target switching reason, the safety supervisor may send the instruction including the target switching reason to the unmanned vehicle by voice, or send the instruction including the target switching reason to the unmanned vehicle by clicking a preset reason switching button on the screen. Certainly, the safety supervisor may also send the instruction including the target switching reason to the unmanned vehicle through text. Herein, the embodiments of the present disclosure simply take these three ways for sending the instruction including the target switching reason to the unmanned vehicle as an example for illustration, but it does not mean that the embodiments of the present disclosure are limited thereto. For example, in this possible implementation, during the travelling of the unmanned vehicle, when discovering that there is an unavoidable obstacle ahead of the unmanned vehicle, the safety supervisor may send a triggering instruction through the button on the unmanned vehicle, such that when the unmanned vehicle receives the triggering instruction, the unmanned vehicle switches the current driving mode of the unmanned vehicle from the unmanned driving to the manned driving. At this time, after detecting that the driving mode of the unmanned vehicle is switched from the unmanned driving to the manned driving, the unmanned vehicle may not need to display the following switching reasons on the screen of the vehicle to the safety supervisor: being unable to avoid an obstacle, being about to crash, needing to slow down, needing to speed up, needing to stop, being about to violate a traffic rule or deviate from a traveling lane, instead, the safety supervisor may send the instruction including the “being unable to avoid an obstacle” directly, such that the unmanned vehicle receives the “being unable to avoid an obstacle” directly from the safety supervisor, thereby determining the “being unable to avoid an obstacle” as the target switching reason. It should be understood that, in the embodiments of the present disclosure, the above two possible implementations are simply used as examples for illustrating the determination of the target switching reason by the unmanned vehicle, but it does not mean that the embodiments of the present disclosure are limited thereto. S104, the unmanned vehicle acquires status information and/or traveling environment information of the vehicle corresponding to the target switching reason. While acquiring information, only the status information of the vehicle corresponding to the target switching reason may be acquired, or only the traveling environment information corresponding to the target switching reason may be acquired. Certainly, the status information and traveling environment information of the vehicle corresponding to the target switching reason may also be acquired simultaneously. It should be noted that, in the embodiments of the present disclosure, the more types of information acquired are in a positive correlation with the higher accuracy of an analysis result obtained by analyzing the target switching reason according to the information by the server. That is, the more types of information are acquired, the more accurate the analysis result obtained by analyzing the target switching reason according to the information by the server would be. 
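As a non-limiting sketch, the two implementations for determining the target switching reason, that is, selection from the displayed list or a free-form instruction that is normalized first, might look as follows in Python. The reason strings mirror the list above; the index-based selecting instruction and the simple substring normalization are assumptions for illustration only.

    SWITCHING_REASONS = [
        "being unable to avoid an obstacle", "being about to crash",
        "needing to slow down", "needing to speed up", "needing to stop",
        "being about to violate a traffic rule", "deviating from a traveling lane",
    ]

    def determine_target_reason(selected_index=None, free_text=None):
        # Implementation 1: the vehicle displays SWITCHING_REASONS and receives a
        # selecting instruction (modeled here as an index into the list).
        if selected_index is not None:
            return SWITCHING_REASONS[selected_index]
        # Implementation 2: the supervisor sends the reason directly; it is
        # normalized and matched against the known reasons.
        if free_text is not None:
            normalized = free_text.strip().lower()
            for reason in SWITCHING_REASONS:
                if normalized in reason or reason in normalized:
                    return reason
        return None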
Optionally, the status information of the vehicle includes at least one of the following information: status information of a brake pedal, status information of a gas pedal, status information of a steering wheel, speed information, traveling position information or traveling direction information. Certainly, other status information of the vehicle may also be included. Herein, the embodiments of the present disclosure only take the status information of the vehicle including at least one of these types as an example, but it does not mean that the embodiments of the present disclosure are limited thereto. Optionally, the traveling environment information of the vehicle includes at least one of the following information: road information of a traveling road surface, information of an obstacle, position information of the vehicle, brightness of a traveling section, visibility information or information of a traffic signal on the travelling section. Certainly, other traveling environment information may also be included. Herein, the embodiments of the present disclosure only take the traveling environment information including at least one of these types as an example, but it does not mean that the embodiments of the present disclosure are limited thereto. Optionally, in an embodiment of the present disclosure, before acquiring the status information and/or the traveling environment information of the vehicle corresponding to the target switching reason, a correspondence between a switching reason and, status information and/or traveling environment information of the vehicle may be established and stored in advance. In this way, after determining the target switching reason, according to the pre-stored correspondence between the switching reason and, the status information and/or the traveling environment information of the vehicle, the status information and/or the traveling environment information of the vehicle corresponding to the target switching reason may be determined from the status information and/or the traveling environment information of the vehicle, so as to acquire the status information and/or the traveling environment information of the vehicle corresponding to the target switching reason. It should be noted that, in the embodiments of the present disclosure, there is no need to establish a correspondence between the switching reason and, the status information and/or the traveling environment information of the vehicle each time before acquiring status information and/or traveling environment information of the vehicle corresponding to a target switching reason, instead, the correspondence between the switching reason and, the status information and/or the traveling environment information of the vehicle needs to be established and stored before acquiring status information and/or traveling environment information of the vehicle corresponding to a target switching reason for the first time. In case of a new switching reason, and its corresponding status information and/or the traveling environment information of the vehicle, the pre-established correspondence may be updated, and the updated correspondence between the switching reason and, the status information and/or the traveling environment information may be stored. For example, when the target switching reason is “being unable to avoid an obstacle”, the status information and/or the traveling environment information of the vehicle corresponding to the “being unable to avoid an obstacle” may be acquired. 
At this time, the status information of the vehicle may include a traveling speed A of the vehicle, traveling position information is: a distance to the obstacle is B meters, traveling direction information is: a direction toward the obstacle, etc.; the traveling environment information may include road information of a traveling road surface or information of the obstacle: a big rock, position information of the vehicle: a distance to the obstacle is B meters, etc. The status information and the traveling environment information of the vehicle corresponding to the “being unable to avoid an obstacle” listed herein are only examples, which should not be construed as limitations on the embodiments of the present disclosure. After acquiring the status information and/or the traveling environment information of the vehicle corresponding to the target switching reason through S104, the following S105may be executed. S105, the unmanned vehicle sends the status information and/or the traveling environment information, and the target switching reason to a server. Illustratively, the unmanned vehicle may send the status information and/or the traveling environment information, and the target switching reason to the server through a wireless network. Certainly, the status information and/or the traveling environment information and the target switching reason may be sent to the server in other ways. Herein, the embodiments of the present disclosure only take it as an example where the unmanned vehicle sends the status information and/or the traveling environment information and the target switching reason to the server through the wireless network for illustration, but it does not mean that the embodiments of the present disclosure are limited thereto. In an embodiment of the present disclosure, the unmanned vehicle sends the status information and/or the traveling environment information, and the target switching reason to the server, such that the server receives the status information and/or the traveling environment information and the target switching reason, and executes the following S106. S106, the server analyzes the target switching reason according to the status information and/or the traveling environment information and the target switching reason. After receiving the status information and/or the traveling environment information and the target switching reason, the server may analyze the target switching reason according to the status information and/or the traveling environment information and the target switching reason, and improve the automatic driving system continuously according to the analysis result. In this way, when encountering the target switching reason next time, the improved automatic driving system may avoid the danger caused by the target switching reason, thereby improving the safety of the unmanned vehicle during travelling. 
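The acquisition step S104 (via the pre-stored correspondence) and the report sent to the server in S105 can be sketched together as follows. The table contents, signal names, the read_signal accessor, the JSON encoding and the field names are all placeholders invented for this illustration; the embodiments do not prescribe a particular correspondence, transport or message format.

    import json

    # Assumed, partial correspondence between switching reasons and the items of
    # status / traveling environment information to collect for each of them.
    REASON_TO_INFO = {
        "being unable to avoid an obstacle": {
            "status": ["speed", "traveling_position", "traveling_direction"],
            "environment": ["road_surface", "obstacle", "vehicle_position"],
        },
        "being about to violate a traffic rule": {
            "status": ["speed", "traveling_direction"],
            "environment": ["traffic_signal", "visibility"],
        },
    }

    def acquire_information(target_reason, read_signal):
        # S104: look up which items correspond to the target switching reason and
        # read them through a vehicle-specific accessor (read_signal is assumed).
        wanted = REASON_TO_INFO.get(target_reason, {"status": [], "environment": []})
        return {
            "status": {name: read_signal(name) for name in wanted["status"]},
            "environment": {name: read_signal(name) for name in wanted["environment"]},
        }

    def build_switch_report(target_reason, acquired):
        # S105: message the unmanned vehicle sends to the server, e.g. over a
        # wireless network; the field names are assumptions for this sketch.
        return json.dumps({
            "target_switching_reason": target_reason,
            "status_information": acquired["status"],
            "traveling_environment_information": acquired["environment"],
        })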
Illustratively, after acquiring the status information and/or the traveling environment information of the vehicle corresponding to a target switching reason “being unable to avoid an obstacle”, the unmanned vehicle may send the status information and/or the traveling environment information of the vehicle corresponding to the “being unable to avoid an obstacle” to the server, such that after receiving the status information and/or the traveling environment information of the vehicle corresponding to the “being unable to avoid an obstacle”, the server may analyze the target switching reason “being unable to avoid an obstacle”, and improve the automatic driving system continuously according to the analysis result. In this way, when encountering the target switching reason next time, the improved automatic driving system may avoid the danger caused by the target switching reason, thereby improving the safety of the unmanned vehicle during travelling. The embodiments of the present disclosure provide a method for processing vehicle driving mode switching. A target switching reason is determined at first upon detecting that a driving mode of a vehicle is switched from unmanned driving to manned driving, and status information and/or traveling environment information of the vehicle corresponding to the target switching reason is acquired; then the status information and/or the traveling environment information, and the target switching reason are sent to a server such that the server analyzes the target switching reason, and the automatic driving system is improved continuously according to the analysis result. In this way, when encountering the target switching reason next time, the improved automatic driving system may avoid dangers caused by the target switching reason, thereby improving the safety of the unmanned vehicle during travelling. Based on the above embodiments shown inFIG.1, in order to describe in the embodiments of the present disclosure how the server performs S106to analyze the target switching reason according to the status information and/or the traveling environment information and the target switching reason more clearly, illustratively, please refer toFIG.2;FIG.2is a schematic flowchart of another method for processing vehicle driving mode switching provided by an embodiment of the present disclosure. The method for processing driving mode switching of a vehicle may also include: S201, the server determines target history status information and/or target history traveling environment information corresponding to the target switching reason from history status information and/or history traveling environment information according to the target switching reason. After receiving the target switching reason, the server searches, according to the target switching reason, the target history status information and/or the target history traveling environment information corresponding to the target switching reason in the history status information and/or history traveling environment information received previously. And after acquiring the target history status information and/or the target history traveling environment information, the server trains the status information and/or the traveling environment information, and the target history status information and/or the target history traveling environment information to obtain a solving strategy corresponding to the target switching reason. 
Illustratively, when analyzing the target switching reason “being unable to avoid an obstacle” according to the target switching reason “being unable to avoid an obstacle” and the corresponding status information and/or the traveling environment information of the vehicle, a server may search for the target history status information and/or the target history traveling environment information corresponding to the target switching reason “being unable to avoid an obstacle” in the previously received history status information and/or history traveling environment information; and then execute the following S202: S202, the server trains the status information and/or the traveling environment information, and the target history status information and/or the target history traveling environment information to obtain a solving strategy corresponding to the target switching reason. Optionally, the solving strategy may be stopping the vehicle immediately, or may be bypassing the obstacle. Other strategies may also be used, which can be set based on actual needs. Herein, the embodiments of the present disclosure do not specifically limit what the solving strategy may include. Illustratively, the solving strategy corresponding to the target switching reason “being unable to avoid an obstacle” may be stopping the vehicle immediately or bypassing the obstacle. After acquiring the target history status information and/or the target history traveling environment information corresponding to the target switching reason, the server may train the status information and/or the traveling environment information, and the target history status information and/or the target history traveling environment information to obtain the solving strategy corresponding to the target switching reason. In this way, the server may improve the automatic driving system continuously according to the solving strategy. In this way, when encountering the target switching reason next time, the improved automatic driving system may avoid the danger caused by the target switching reason, thereby improving the safety of the unmanned vehicle during travelling. Optionally, as for the server, after obtaining the solving strategy corresponding to the target switching reason, the server may further execute the following S203: S203, the server sends to the vehicle the solving strategy corresponding to the target switching reason, such that the vehicle receives the solving strategy corresponding to the target switching reason sent by the server. In this way, when encountering the target switching reason next time, the improved automatic driving system may avoid the danger caused by the target switching reason, thereby improving the safety of the unmanned vehicle during travelling.
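A greatly simplified, non-authoritative sketch of S201 to S203 follows. The embodiments train on the history information; in this sketch the training step is reduced to a frequency vote over previously recorded solving strategies so that the control flow stays visible, and the record fields are assumptions for illustration.

    def find_matching_history(history, target_reason):
        # S201: select previously received records whose switching reason matches.
        return [entry for entry in history if entry.get("reason") == target_reason]

    def derive_solving_strategy(current_info, matching_history):
        # S202 (simplified): a real system would train on current_info together
        # with the matching history; here we only vote over recorded strategies.
        votes = {}
        for entry in matching_history:
            strategy = entry.get("strategy", "stopping the vehicle immediately")
            votes[strategy] = votes.get(strategy, 0) + 1
        if not votes:
            return "stopping the vehicle immediately"
        return max(votes, key=votes.get)

    def respond_to_vehicle(send, strategy):
        # S203: send the solving strategy back to the vehicle (send is assumed).
        send({"solving_strategy": strategy})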
In a possible scenario, when a safety supervisor finds there is danger for a driving behavior of the unmanned vehicle, the safety supervisor may send a triggering instruction to the unmanned vehicle, such that after the unmanned vehicle receives the triggering instruction, the unmanned vehicle switches the automatic driving mode of the vehicle to manned driving and displays at least one switching reason corresponding to the driving mode to the safety supervisor; correspondingly, the safety supervisor selects a switching reason among the at least one switching reason, and sends a selecting instruction configured to indicate the switching reason, such that the unmanned vehicle determines the switching reason selected by the safety supervisor as the target switching reason according to the selecting instruction; and further acquires status information and/or traveling environment information of the vehicle corresponding to the target switching reason, and sends the status information and/or the traveling environment information to the server, such that after receiving the history status information and/or the history traveling environment information and the target switching reason, the server searches target history status information and/or target history traveling environment information corresponding to the target switching reason in the previously received history status information and/or history traveling environment information, and trains the status information and/or the traveling environment information, and the target history status information and/or the target history traveling environment information to obtain a solving strategy corresponding to the target switching reason; further, after obtaining the solving strategy corresponding to the target switching reason, the server sends to the vehicle the solving strategy corresponding to the target switching reason, such that the vehicle receives the solving strategy corresponding to the target switching reason sent by the server. In this way, when encountering the target switching reason next time, the improved automatic driving system may avoid the danger caused by the target switching reason, thereby improving the safety of the unmanned vehicle during travelling. FIG.3is a structural diagram of a vehicle30provided by an embodiment of the present disclosure. Illustratively, please refer toFIG.3, the vehicle30may include:a processing unit301, configured to determine a target switching reason upon detecting that a driving mode of a vehicle is switched from unmanned driving to manned driving;an acquiring unit302, configured to acquire status information and/or traveling environment information of the vehicle corresponding to the target switching reason; andan analyzing unit303, configured to send the status information and/or the traveling environment information, and the target switching reason to a server, to enable the server to analyze the target switching reason. Optionally, the vehicle30may further include a receiving unit304. For example, please refer toFIG.4,FIG.4is a structural diagram of another vehicle30provided by an embodiment of the present disclosure. The receiving unit304, configured to receive a solving strategy sent by the server corresponding to the target switching reason. 
Optionally, the processing unit301is specifically configured to: display at least one switching reason corresponding to the driving mode upon detecting that the driving mode of the vehicle is switched from the unmanned driving to the manned driving; and receive a selecting instruction, where the selecting instruction is configured to indicate a switching reason selected by a user; and determine the switching reason selected by the user as the target switching reason. Optionally, the receiving unit304is further configured to receive a triggering instruction, where the triggering instruction is configured to indicate that the driving mode is switched from the unmanned driving to the manned driving; andthe processing unit301is further configured to switch the driving mode of the vehicle to the manned driving according to the triggering instruction. Optionally, the switching reason includes at least one of the following reasons: being unable to avoid an obstacle, being about to crash, needing to slow down, needing to speed up, needing to stop, being about to violate a traffic rule or deviating from a traveling lane. Optionally, the status information of the vehicle30includes at least one of the following information: status information of a brake pedal, status information of a gas pedal, status information of a steering wheel, speed information, traveling position information or traveling direction information. Optionally, the traveling environment information of the vehicle includes at least one of the following information: road information of a traveling road surface, information of an obstacle, position information of the vehicle, brightness of a traveling section, visibility information or information of a traffic signal on the travelling section. Optionally, the acquiring unit302is specifically configured to acquire the status information and/or the traveling environment information of the vehicle corresponding to the target switching reason according to a pre-stored correspondence between a switching reason and, status information and/or traveling environment information of a vehicle. The vehicle30shown in the embodiments of the present disclosure may execute the technical solution of the method for processing driving mode switching on the unmanned vehicle side shown in any of the above embodiments, and its implementation principle and beneficial effects are similar to the implementation principle and beneficial effects of the method for processing driving mode switching, which will not be repeated herein again. FIG.5is a structural diagram of a server50provided by an embodiment of the present disclosure. For example, please refer toFIG.5, the server50may include:a receiving unit501, configured to receive status information and/or traveling environment information of a vehicle, and a target switching reason sent by the vehicle, where the target switching reason is determined upon detecting that a driving mode of the vehicle is switched from unmanned driving to manned driving; andan analyzing unit502, configured to analyze the target switching reason according to the status information and/or the traveling environment information, and the target switching reason. 
Optionally, the analyzing unit502is specifically configured to determine, according to the target switching reason, target history status information and/or target history traveling environment information corresponding to the target switching reason from history status information and/or history traveling environment information; and train the status information and/or the traveling environment information, and the target history status information and/or the target history traveling environment information to obtain a solving strategy corresponding to the target switching reason. Optionally, the server50may further include a sending unit503. Illustratively, please refer toFIG.6, which is a structural diagram of another server50provided by an embodiment of the present disclosure. The sending unit503is configured to send to the vehicle the solving strategy corresponding to the target switching reason. The server50shown in the embodiments of the present disclosure may execute the technical solution of the method for processing driving mode switching on the unmanned vehicle side shown in any one of the above embodiments, and its implementation principle and beneficial effects are similar to the implementation principle and beneficial effects of the method for processing driving mode switching, which will not be repeated herein again. FIG.7is a structural diagram of yet another vehicle70provided by an embodiment of the present disclosure. Illustratively, please refer toFIG.7, the vehicle70may include a processor701and a memory702, wherethe memory702is configured to store program instructions;the processor701is configured to read the program instructions in the memory702, and execute, according to the program instructions in the memory702, the method for processing vehicle driving mode switching on the unmanned vehicle side. The vehicle70shown in the embodiments of the present disclosure may execute the technical solution of the method for processing driving mode switching on the unmanned vehicle side shown in any one of the above embodiments, and its implementation principle and beneficial effects are similar to the implementation principle and beneficial effects of the method for processing driving mode switching, which will not be repeated herein again. FIG.8is a structural diagram of yet another server80provided by an embodiment of the present disclosure. For example, please refer toFIG.8, the server80may include a processor801and a memory802, wherethe memory802is configured to store program instructions;the processor801is configured to read program instructions in the memory802, and execute, according to the program instructions in the memory802, the method for processing vehicle driving mode switching on the unmanned vehicle side. The server80shown in the embodiments of the present disclosure may execute the technical solution of the method for processing driving mode switching on the server side shown in any one of the above embodiments, and its implementation principle and beneficial effects are similar to the implementation principle and beneficial effects of the method for processing vehicle driving mode switching on the server side, which will not be repeated herein again. An embodiment of the present disclosure also provides a computer readable storage medium. 
A computer program is stored on the computer-readable storage medium, when the computer program is executed by a processor, the technical solution of the method for processing driving mode switching shown in any one of the above embodiments on the unmanned vehicle side is executed, and its implementation principle and beneficial effects are similar to the implementation principle and beneficial effects of method for processing vehicle driving mode switching on the unmanned vehicle side, which will not be repeated herein again; or, when the computer program is executed by the processor, the method for processing vehicle driving mode switching on the server side is executed, and its implementation principle and beneficial effects are similar to the implementation principle and beneficial effects of method for processing vehicle driving mode switching on the server side, which will not be repeated herein again. The processor in the above embodiments may be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or a transistor logic device or discrete hardware components that may be configured to realize or execute any of the methods, steps and logic diagrams disclosed by the embodiments of the present disclosure. The general-purpose processor may be a micro-processor, or the processor may be any other regular processors, etc. The steps of the method disclosed in the embodiments of the present disclosure may be directly embodied as being executed and completed by a hardware decoding processor, or executed and realized by a combination of hardware and software modules in the decoding processor. The software modules may be located in a mature storage medium in the art, such as a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory, an electrically erasable programmable read-only memory, a register and the like. The storage medium is located in a memory, and a processor reads instructions in the memory and completes the steps of the above method in combination with its hardware. In several embodiments provided by the present disclosure, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other division methods in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or a communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. 
In addition, the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above-mentioned integrated unit may be realized in the form of hardware or hardware plus software functional units. After considering the specification and practicing the present disclosure disclosed herein, those of ordinary skill in the art will easily think of other embodiments of the present disclosure. The present disclosure aims to cover any variations, applications, or adaptive changes of the present disclosure. These variations, applications, or adaptive changes follow the general principles of the present disclosure and include common sense or conventional technical means in the technical field not disclosed in the present disclosure. The description and the embodiments are only considered as exemplary, and the true scope and spirit of the present disclosure are indicated by the following claims. It should be understood that the present disclosure is not limited to the precise structure that has been described above and shown in the drawings, and various modifications and changes may be made without departing from its scope. The scope of the present disclosure is simply limited by the attached claims. | 41,108 |
11858495 | Corresponding reference characters indicate corresponding parts throughout the several views. The exemplification set out herein illustrates an exemplary embodiment of the disclosure and such exemplification is not to be construed as limiting the scope of the disclosure in any manner. DETAILED DESCRIPTION OF THE DRAWINGS For the purposes of promoting an understanding of the principles of the present disclosure, reference is now made to the embodiments illustrated in the drawings, which are described below. The embodiments disclosed below are not intended to be exhaustive or limit the present disclosure to the precise form disclosed in the following detailed description. Rather, the embodiments are chosen and described so that others skilled in the art may utilize their teachings. Therefore, no limitation of the scope of the present disclosure is thereby intended. Corresponding reference characters indicate corresponding parts throughout the several views. Referring toFIG.1, a schematic view of a vehicle10of the present disclosure is shown. Vehicle10generally includes an engine12having a crankshaft14, an engine control module (ECM)16operatively coupled to engine12and configured to control engine12, a transmission18operatively coupled to engine12through crankshaft14, a transmission control module (TCM)22operatively coupled to transmission18and ECM16, a final drive/differential assembly24operatively coupled to transmission18through a prop shaft20coupled to a transmission output (not shown), and wheels26, where wheels26are operatively coupled to final drive24through axle shafts28. While a rear-wheel drive powertrain has been illustrated, it should be appreciated that vehicle10may have a front-wheel drive powertrain without departing from the scope of the present disclosure. In various embodiments, transmission18further includes a transmission input shaft (not shown) coupled to crankshaft14of engine12, a main pump30, an auxiliary pump32, a gear and selective coupler arrangement34, and a hydraulic system36. Gear and selective coupler arrangement34includes at least one selective coupler, such as clutches and/or brakes, and at least one gear, and illustratively includes a clutch35. As is known, exemplary gear and selective coupler arrangements generally include a plurality of selective couplers and gears which may be configured to provide a plurality of different speed ratios of the transmission output shaft to the transmission input shaft. Additional details regarding an exemplary transmission are provided in U.S. Pat. No. 9,651,144, assigned to the present assignee, the entire disclosure of which is expressly incorporated by reference herein. A selective coupler is a device which may be actuated to fixedly couple two or more components together. A selective coupler fixedly couples two or more components to rotate together as a unit when the selective coupler is in an engaged configuration. Further, the two or more components may be rotatable relative to each other when the selective coupler is in a disengaged configuration. The terms “couples”, “coupled”, “coupler” and variations thereof are used to include both arrangements wherein the two or more components are in direct physical contact and arrangements wherein the two or more components are not in direct contact with each other (e.g., the components are “coupled” via at least a third component), but yet still cooperate or interact with each other. A first exemplary selective coupler is a clutch. 
A clutch couples two or more rotating components to one another so that the two or more rotating components rotate together as a unit in an engaged configuration and permits relative rotation between the two or more rotating components in the disengaged position. Exemplary clutches may be shiftable friction locked multi-disk clutches, shiftable form-locking claw or conical clutches, wet clutches, or any other known form of a clutch. A second exemplary selective coupler is a brake. A brake couples one or more rotatable components to a stationary component to hold the one or more rotatable components stationary relative to the stationary component in the engaged configuration and permits rotation of the one or more components relative to the stationary component in the disengaged configuration. Exemplary brakes may be configured as shiftable-friction-locked disk brakes, shiftable friction-locked band brakes, shiftable form-locking claw or conical brakes, or any other known form of a brake. Selective couplers may be actively controlled devices or passive devices. Exemplary actively controlled devices include hydraulically actuated clutch or brake elements and electrically actuated clutch or brake elements. Additional details regarding systems and methods for controlling selective couplers are disclosed in US Published Patent Application No. 2016/0047440, the entire disclosure of which is expressly incorporated by reference herein. Exemplary gear and selective coupler arrangements34are provided in exemplary multi-speed automatic transmissions, such as automatic transmissions and automated manual transmissions. Exemplary gear and selective coupler arrangements34are disclosed in U.S. Pat. No. 10,808,807, the entire disclosure of which is expressly incorporated by reference herein. Main pump30is operatively coupled to crankshaft14via a gear set or other coupler arrangement (not shown) such that main pump30is rotated by crankshaft14, and gear and selective coupler arrangement34is operatively coupled between the transmission input shaft and the transmission output shaft. Hydraulic system36includes main pump30, auxiliary pump32, and various hydraulic circuits and valves. Hydraulic system36is operatively coupled to TCM22and gear and selective coupler arrangement34to actuate various clutches and/or brakes of gear and selective coupler arrangement34, such as clutch35. TCM22is a transmission control circuit. Exemplary transmission control circuits may be microprocessor-based and include a non-transitory computer readable medium202which includes processing instructions stored therein that are executable by the microprocessor to control operation of the main pump30, the auxiliary pump32, and the gear and selective coupler arrangement34. A non-transitory computer-readable medium, or memory, may include random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (e.g., EPROM, EEPROM, or Flash memory), or any other tangible medium capable of storing information. Exemplary logic flows are disclosed herein which correspond to processing sequences executed by TCM22. In operation, main pump30is controlled by TCM22to control the configuration of gear and selective coupler arrangement34while vehicle10is moving from a first location to a second location. Auxiliary pump32is controlled by TCM22to control the configuration of gear and selective coupler arrangement34while vehicle10is stationary at the first location. 
Main pump30is operatively powered by its coupling to crankshaft14while auxiliary pump32is powered by a battery38of vehicle10. With reference toFIG.2, and with continued reference toFIG.1, a processing sequence100of TCM22for controlling transmission18for engine stop-start events will now be described. Processing sequence100controls transmission18for engine stop-start events based on a grade of vehicle10where auxiliary pump32is run at a calculated pump speed if sufficient pressure can be provided to gear and selective coupler arrangement34from auxiliary pump32for providing hill hold for vehicle10while stopped on the grade. Processing sequence100of TCM22determines a grade of vehicle10, as represented by block102. Additional details regarding the measurement of road grade are provided in US Published Patent Application No. 2014/0336890, filed Jun. 18, 2013, titled SYSTEM AND METHOD FOR OPTIMIZING DOWNSHIFTING OF A TRANSMISSION DURING VEHICLE DECELERATION, the entire disclosure of which is expressly incorporated by reference herein. In other embodiments, TCM22receives an indication of vehicle grade based on a GPS location of vehicle10. In other embodiments, TCM22receives an indication of vehicle grade from another system of vehicle10. Processing sequence100determines a required pressure needed for auxiliary pump32to allow gear and selective coupler arrangement34to provide hill hold for vehicle10while stopped on the determined vehicle grade, as represented by block104. Using the determined required pressure, processing sequence100continues to determine a pump flow at the required pressure, as represented by block106. A pump speed is determined from the pump flow, as represented by block108. The determined pump speed of auxiliary pump32is compared to a threshold allowable speed of auxiliary pump32, as represented by block110. In embodiments, the threshold allowable speed is the maximum allowable speed of the auxiliary pump. In embodiments, the threshold allowable speed is less than the maximum allowable speed of the auxiliary pump. If the determined pump speed is greater than the threshold allowable speed of auxiliary pump32, then the engine stop-start event is not attainable and TCM22communicates with an engine stop-start (ES-S) arbitration controller, illustratively ECM16, to not allow the engine stop-start event to occur or to abort the engine stop-start event, as represented by block114. In embodiments, the ES-S arbitration controller is not ECM16, but rather a separate device which communicates with both ECM16and TCM22. If the determined pump speed is less than or equal to the threshold allowable speed of auxiliary pump32, then the engine stop-start event is attainable and processing sequence100continues with TCM22communicating with the ES-S arbitration controller, illustratively ECM16, to allow the engine stop-start event to occur, as represented by block112. Processing sequence100returns to block102and again determines the grade of vehicle10and subsequently determining a new pump speed based on the recalculated grade through steps104-110. The reason for the redetermination of the vehicle road grade is that the road grade determined by TCM22improves over time. Thus, the second and subsequent determinations of the vehicle road grade may result in a road grade value that requires a lower pump speed and thereby reduces the energy consumed by auxiliary pump32. After vehicle10comes to a stop, the determined vehicle grade may take some time to settle. 
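The decision flow of blocks102-114can be summarized in a short sketch. The Python below is a non-authoritative rendering of that loop; the helper callables (grade determination, pressure, flow, and speed maps) and the example numbers are assumptions standing in for the vehicle-specific calibrations that the disclosure leaves unspecified.

```python
def stop_start_attainable(determine_grade, pressure_for_hill_hold,
                          flow_at_pressure, speed_for_flow,
                          threshold_speed):
    """Sketch of processing sequence 100 (blocks 102-114).

    All callables are placeholders for vehicle-specific calibration maps.
    Returns True if the engine stop-start event may be allowed.
    """
    grade = determine_grade()                 # block 102
    pressure = pressure_for_hill_hold(grade)  # block 104
    flow = flow_at_pressure(pressure)         # block 106
    pump_speed = speed_for_flow(flow)         # block 108
    return pump_speed <= threshold_speed      # block 110

# Example use with made-up linear maps (illustrative only):
allowed = stop_start_attainable(
    determine_grade=lambda: 4.0,                    # percent grade
    pressure_for_hill_hold=lambda g: 50 + 12 * g,   # psi
    flow_at_pressure=lambda p: 0.02 * p,            # gpm
    speed_for_flow=lambda q: 900 * q,               # rpm
    threshold_speed=2500.0,
)
# If not allowed, the TCM would tell the ES-S arbitration controller to abort
# the stop-start event (block 114); otherwise it allows the event (block 112)
# and re-runs the loop as the determined grade improves over time.
```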
To avoid engine shutdown delay and prevent change-of-mind restarts, processing sequence100may further include block103where a grade settling error may be added to the determined vehicle grade prior to determining the required pump speed. The magnitude of the grade settling error, and subsequently the resulting determined pump speed, may be reduced over time during a given stop event as vehicle grade confidence increases. The increased vehicle grade confidence and settled determined grade, and subsequently the improved determined pump speed, are incorporated into processing sequence100by the repeating of blocks102-110discussed above. With reference toFIGS.3and4, and with continued reference toFIG.1, another processing sequence200for controlling transmission18for engine stop-start events will now be described. Processing sequence200is configured to control transmission18for engine stop-start events with auxiliary pump32being operated at a determined pump speed based at least on a required pressure and system leakage if the engine stop-start event is determined to be attainable. Processing sequence200determines a required pressure for a specific engine stop-start event, which may include adjustments to incorporate possible measurement variations and/or errors, as represented by block220. The required pressure may depend on gross vehicle weight (GVM), vehicle grade, a ratio of engine rotational speed to vehicle speed (N/V), clutch coefficients, clutch return spring characteristics, logic valve spring pressure, and/or other clutch specifications. In various embodiments, the required pressure may be the pressure needed for gear and selective coupler arrangement34to provide vehicle hill hold while vehicle10is on a grade or the pressure needed for gear and selective coupler arrangement34to maintain logic valve/clutch return spring states. Processing sequence200continues by determining an overall system, transmission18, leakage at the required pressure using known hydraulic leakage parameters, as represented by block222. These known parameters may include temperature, clutch bleeds, pump leakages, controls leakages, and/or other known parameters. In various embodiments, the overall system leakage may be determined by a look-up table, while in other various embodiments, the overall system leakage may be determined via an algorithm. An exemplary curve223which may be used to generate a series of values for a look-up table is shown inFIG.4. Curve223provides a pump flow corrected for leakage. The flow is in gallons per minute (gpm) at various pressures provided in pounds per square inch (psi). Curve223is determined by subtracting, from a no leakage curve225, the values of curves227,229, and231. Curve227represents a constant leakage design value of gear and selective coupler arrangement34that increases linearly with pressure. Curve229represents the leakage due to the valve bodies of gear and selective coupler arrangement34. Curve231represents additional leakage contributions of gear and selective coupler arrangement34such as pump leakages, controls leakage, clutch bleeds, and temperature effects. The overall system leakage may also or alternatively include adjustments to incorporate possible measurement variations and/or errors, which may also be known as design margins. Processing sequence200uses the overall system leakage to determine a pump flow at the required pressure, as represented by block224. A pump speed is determined based on the determined pump flow, as represented by block226.
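The leakage-corrected flow of curve223described above is the no-leakage curve225minus the contributions of curves227,229, and231. A hedged sketch of that bookkeeping follows; the coefficients and functional forms are invented for illustration, since in practice the overall system leakage would come from a calibrated look-up table or algorithm keyed to temperature, clutch bleeds, and pump and controls leakage.

```python
def corrected_pump_flow(pressure_psi,
                        no_leakage_flow,
                        design_leakage_per_psi=0.001,   # curve 227 (assumed slope)
                        valve_body_leakage=None,        # curve 229
                        other_leakage=None):            # curve 231
    """Sketch of curve 223: pump flow corrected for leakage, in gpm.

    All coefficient values are assumptions, not calibration data from the
    disclosure; the structure simply mirrors the subtraction it describes.
    """
    if valve_body_leakage is None:
        valve_body_leakage = lambda p: 0.0008 * p
    if other_leakage is None:
        other_leakage = lambda p: 0.0005 * p
    leakage = (design_leakage_per_psi * pressure_psi
               + valve_body_leakage(pressure_psi)
               + other_leakage(pressure_psi))
    return no_leakage_flow(pressure_psi) - leakage

# Corrected flow at 120 psi for an assumed flat 0.5 gpm no-leakage curve:
flow_gpm = corrected_pump_flow(120.0, no_leakage_flow=lambda p: 0.5)
```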
In embodiments, the pump speed is determined by using a lookup table of pump speed for various pump flows. The determined pump speed for auxiliary pump32is compared to a threshold allowable speed of auxiliary pump32, as represented by block228. If the determined pump speed is less than or equal to the threshold allowable speed of auxiliary pump32, then TCM22communicates with the ES-S arbitration controller, illustratively ECM16, to allow the engine stop-start event to occur, as represented by block230. If the determined pump speed is more than the threshold allowable speed of auxiliary pump32, TCM22communicates with the ES-S arbitration controller, illustratively ECM16, to not allow the engine stop-start event to occur, as represented by block232. Referring now toFIG.5, and with continued reference toFIG.1, yet another processing sequence300for controlling transmission18for engine stop-start events will now be described. Processing sequence300is an expansion on processing sequences100and200and is configured to control transmission18for engine stop-start events with auxiliary pump32being operated at a pump speed determined based on pressure and system leakage or both pressure and system leakage and the grade of vehicle10if the engine stop-start event is determined to be attainable. In processing sequence300, TCM22determines if operation of auxiliary pump32has been requested from the ES-S arbitration controller, illustratively ECM16, as represented by block340. A request to operate the auxiliary pump may be a request for an engine stop-start event. TCM22then determines if operation, an actuation, of a selective coupler, such as clutch35, within gear and selective coupler arrangement34has been requested, as represented by block344. A request to actuate clutch35may be a request for holding vehicle10stationary on a grade. If operation of clutch35is not requested, processing sequence300continues at block353and uses the logic valve pressure of gear and selective coupler arrangement34as described in further detail below. The logic valve pressure is the pressure needed to maintain the current state of the gear and selective coupler arrangement34. If operation of clutch35is requested, processing sequence300continues with block346and determines various input variables which may include adjustments to incorporate possible measurement variations and/or errors, or design margins. Exemplary input variables may include gross vehicle weight (GVM), grade, a ratio of engine rotational speed to vehicle speed (N/V), clutch coefficients, clutch return springs, logic valve spring pressure, and/or other clutch specifications. Processing sequence300further includes block348in which a grade of vehicle10is determined by TCM22. Blocks346and348may occur simultaneously, or one before or after the other. With input variables and grade of vehicle10determined, processing sequence300continues at step350determining selective coupler pressures, such as clutch pressures, needed to hold the determined vehicle grade. The determined clutch pressure is compared to the logic valve pressure of gear and selective coupler arrangement34, as represented by block352. If the determined clutch pressure is less than or equal to the logic valve pressure, then the logic valve pressure is used, as represented by block353. If the determined clutch pressure is greater than the logic valve pressure, then the determined clutch pressure is used, as represented by block355. Block353or block355represents the clutch pressure to be further evaluated.
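Before continuing to blocks354-364, the selection just described amounts to carrying forward the larger of the hill-hold clutch pressure and the logic valve pressure. A minimal sketch under that reading follows; the function names and example numbers are placeholders, not values from the disclosure.

```python
def pressure_to_evaluate(clutch_requested, logic_valve_pressure,
                         clutch_pressure_for_grade=None, grade=None):
    """Sketch of blocks 344-355 of processing sequence 300.

    If no clutch actuation is requested, the logic valve pressure (the
    pressure needed to maintain the current state of the gear and selective
    coupler arrangement) is carried forward.  Otherwise the clutch pressure
    needed to hold the determined grade is compared against the logic valve
    pressure and the larger of the two is carried forward.
    """
    if not clutch_requested:
        return logic_valve_pressure                     # block 353
    clutch_pressure = clutch_pressure_for_grade(grade)  # blocks 346-350
    if clutch_pressure <= logic_valve_pressure:
        return logic_valve_pressure                     # block 353
    return clutch_pressure                              # block 355

p = pressure_to_evaluate(True, 60.0,
                         clutch_pressure_for_grade=lambda g: 55 + 10 * g,
                         grade=3.0)  # -> 85.0, carried forward to blocks 354-364
```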
The temperature and leakage values are determined, for the clutch pressure to be further evaluated, as represented by block354. The temperature and leakage values determined at step354may include adjustments to incorporate possible measurement variations and/or errors, or design margins. The determined temperature and leakage values are then used to determine an overall system leakage at the clutch pressure to be further evaluated, as represented by block356. A pump flow is determined based on the overall system leakage, as represented by block358. Further, a pump speed for auxiliary pump32is determined based on the determined pump flow. The determined pump speed is then compared to a threshold allowable speed of auxiliary pump32, as represented by block360. If the determined pump speed is greater than the maximum allowable speed of auxiliary pump32, TCM22communicates with the ES-S arbitration controller, illustratively ECM16, to not allow or abort the engine stop-start event, as represented by block362. If the determined pump speed is less than or equal to the threshold allowable speed of auxiliary pump32, TCM22communicates with the ES-S arbitration controller, illustratively ECM16, to allow or continue the engine stop-start event with auxiliary pump32operating at the determined pump speed, as represented by block364. If the engine stop-start event is allowed or continued, then processing sequence300continues by redetermining the grade of vehicle10at block348and running through the remainder of processing sequence300again to determine if the engine stop-start event should continue or be aborted based on a redetermined pump speed, where auxiliary pump32would operate at the redetermined pump speed if it is determined that the engine stop-start event should continue. An advantage, among others, of determining a speed for auxiliary pump32for each specific engine stop-start event is that it allows for more efficient operation of vehicle10by supplying an appropriate amount of flow and avoiding excess pump speed. This in turn reduces battery draw, pump stress, and manufacturing costs due to the elimination of a regulator valve and reduced filter area requirement, which results in increased utilization and a longer life for auxiliary pump32. While this invention has been described as having exemplary designs, the present invention can be further modified within the spirit and scope of this disclosure. This application is therefore intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains and which fall within the limits of the appended claims. | 20,418
11858496 | DETAILED DESCRIPTION Reference now will be made in detail to embodiments of the invention, one or more examples of which are illustrated in the drawings. Each example is provided by way of explanation of the invention, not limitation of the invention. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present invention covers such modifications and variations as come within the scope of the appended claims and their equivalents. As used herein, the terms “includes” and “including” are intended to be inclusive in a manner similar to the term “comprising.” Similarly, the term “or” is generally intended to be inclusive (i.e., “A or B” is intended to mean “A or B or both”). Approximating language, as used herein throughout the specification and claims, is applied to modify any quantitative representation that could permissibly vary without resulting in a change in the basic function to which it is related. Accordingly, a value modified by a term or terms, such as “about,” “approximately,” and “substantially,” is not to be limited to the precise value specified. In at least some instances, the approximating language may correspond to the precision of an instrument for measuring the value. For example, the approximating language may refer to being within a ten percent (10%) margin. FIG.1is a side, elevation view of a passenger vehicle100according to an example embodiment.FIG.2is a schematic view of a drivetrain system120of passenger vehicle100. As shown inFIG.1, passenger vehicle100is illustrated as a sedan. However, passenger vehicle100inFIG.1is provided as an example only. For instance, passenger vehicle100may be a coupe, a convertible, a truck, a van, a sports utility vehicle, etc. in alternative example embodiments. In addition, while described below in the context of passenger vehicle100, it will be understood that the present subject matter may be used in or with any other suitable vehicles, including commercial vehicles, such as tractor-trailers, busses, box trucks, farm vehicles, construction vehicles, etc., in other example embodiments. Passenger vehicle100may include a body110that rolls on wheels116during driving of passenger vehicle100. Body110defines an interior cabin112, and a driver and passengers may access interior cabin112via doors114and sit within interior cabin112on seats (not shown). Within body110, passenger vehicle100may also include various systems, including a motor system122, a transmission system124, an electrical accumulator/storage system126, etc., for operating passenger vehicle100. In general, motor system122, transmission system124, and electrical accumulator system126may be configured in any conventional manner. For example, motor system122may include prime movers, such as an electric machine system140and an internal combustion engine system142(FIG.2), that are operable to propel passenger vehicle100. Thus, passenger vehicle100may be referred to as a hybrid vehicle. Motor system122may be disposed within body110and may be coupled to transmission system124. Transmission system124is disposed within power flow between motor system122and wheels116of passenger vehicle100.
In certain example embodiments, a torque converter128may be disposed in the power flow between internal combustion engine system142and transmission system124within drivetrain system120. Transmission system124is operative to provide various speed and torque ratios between an input and output of the transmission system124. Thus, e.g., transmission system124may provide a mechanical advantage to assist propulsion of passenger vehicle100by motor system122. A differential129may be provided between transmission system124and wheels116to couple transmission system124and wheels116while also allowing relative rotation between wheels116on opposite sides of body110. Electric machine system140may be selectively operable as either a motor to propel passenger vehicle100or as a generator to provide electrical power, e.g., to electrical accumulator system126and other electrical consumers of passenger vehicle100. Thus, e.g., electric machine system140may operate as a motor in certain operating modes of passenger vehicle100, and electric machine system140may operate as a generator in other operating modes of passenger vehicle100. Electric machine system140may be disposed within drivetrain system120in various arrangements. For instance, electric machine system140may be provided as a module in the power flow path between internal combustion engine system142and transmission system124. As another example, electric machine system140may be integrated within transmission system124. Electrical accumulator system126may include one or more batteries, capacitors, etc. for storing electrical energy. Electric machine system140is coupled to electrical accumulator system126and may be selectively operable to charge electrical accumulator system126when operating as a generator and to draw electrical power from electrical accumulator system126to propel passenger vehicle100when operating as a motor. A braking system (not shown) is operable to decelerate passenger vehicle100. For instance, the braking system may include friction brakes configured to selectively reduce the rotational velocity of wheels116. The braking system may also be configured as a regenerative braking system that converts kinetic energy of wheels116into electric current. Operation of motor system122, transmission system124, electrical accumulator system126, and the braking system is well known to those skilled in the art and not described in extensive detail herein for the sake of brevity. FIG.3is a schematic view of certain components of a control system130suitable for use with passenger vehicle100. In general, control system130is configured to control operation of passenger vehicle100and components therein. Control system130may facilitate operation of passenger vehicle100in various operating modes. For instance, control system130may be configured to operate passenger vehicle100in any one of a conventional mode, an electric mode, a hybrid mode, and a regeneration mode. In the conventional mode, passenger vehicle100is propelled only by internal combustion engine system142. Conversely, passenger vehicle100is propelled only by electrical machine system140in the electric mode. The conventional mode may provide passenger vehicle100with an extended operating range relative to the electric mode, and passenger vehicle100may be quickly refilled at a fueling station to allow continued operation of passenger vehicle100in the conventional mode.
Conversely, the emissions of passenger vehicle100may be significantly reduced in the electric mode relative to the conventional mode, and a fuel efficiency of passenger vehicle100may increase significantly in the electric mode as compared to the conventional mode. In the hybrid mode, passenger vehicle100may be propelled by both electrical machine system140and internal combustion engine system142. In the regeneration mode, electrical machine system140may charge electrical accumulator system126, e.g., and internal combustion engine system142may propel passenger vehicle100. The various operating modes of passenger vehicle100are well known to those skilled in the art and not described in extensive detail herein for the sake of brevity. As shown inFIG.3, control system130includes one or more computing devices132with one or more processors134and one or more memory devices136(hereinafter referred to as “memories136”). In certain example embodiments, control system130may correspond to an electronic control unit (ECU) of passenger vehicle100. The one or more memories136stores information accessible by the one or more processors134, including instructions138that may be executed and data139usable by the one or more processors134. The one or more memories136may be of any type capable of storing information accessible by the one or more processors134, including a computing device-readable medium. The memory is a non-transitory medium, such as a hard-drive, memory card, optical disk, solid-state, tape memory, or the like. The one or more memories136may include different combinations of the foregoing, whereby different portions of the instructions and data are stored on different types of media. The one or more processor134may be any conventional processors, such as commercially available CPUs. Alternatively, the one or more processors134may be a dedicated device, such as an ASIC or other hardware-based processor. Instructions138may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the one or more processors134. For example, the instructions138may be stored as computing device code on the computing device-readable medium of the one or more memories136. In that regard, the terms “instructions” and “programs” may be used interchangeably herein. Instructions138may be stored in object code format for direct processing by the processor or in any other computing device language, including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Data139may be retrieved, stored, or modified by the one or more processors134in accordance with the instructions138. For instance, data139of the one or more memories136may store information from sensors of various systems of passenger vehicle100, including motor system122(e.g., electrical machine system140and internal combustion engine system142), transmission system124, electrical accumulator system126, etc. InFIG.3, the processor(s)134, memory(ies)136, and other elements of computing device(s)132are shown within the same block. However, computing device(s)132may actually include multiple processors, computing devices, and/or memories that may or may not be stored within a common physical housing. Similarly, the one or more memories136may be a hard drive or other storage media located in a housing different from that of the processor(s)134. 
Accordingly, computing device(s)132will be understood to include a collection of processor(s) and one or more memories that may or may not operate in parallel. Computing device(s)132may be configured for communicating with various components of passenger vehicle100. For example, computing device(s)132may be in operative communication with various systems of passenger vehicle100, including motor system122(e.g., electrical machine system140and internal combustion engine system142), transmission system124, electrical accumulator system126, etc. For instance, computing device(s)132may particularly be in operative communication with an engine control unit (ECU) (not shown) of motor system122and a transmission control unit (TCU) (not shown) of transmission system124. Computing device(s)132may also be in operative communication with other systems of passenger vehicle100, including a passenger/driver information system150, e.g., that includes one or more display(s), speaker(s), gauge(s), etc. within interior cabin112for providing information regarding operation of passenger vehicle100to a passenger/driver, a cabin environment system152for modifying the temperature of interior cabin112, e.g., via air conditioning, heating, etc., a navigation system154for navigating passenger vehicle100to a destination, and/or a positioning system156for determining a current location (e.g., GPS coordinates) of passenger vehicle100. Computing device(s)132may be configured to control system(s)122,124,126based at least in part on inputs received from an operator via a user interface (not shown), which may include one or more of a steering wheel, a gas pedal, a clutch pedal, a brake pedal, turn signal lever, hazard light switch, and/or the like. Control system130may also include a wireless communication system160that assists with wireless communication with other systems. For instance, wireless communication system160may wirelessly connect control system130with one or more other vehicles, buildings, etc. directly or via a communication network. Wireless communication system160may include an antenna and a chipset configured to communicate according to one or more wireless communication protocols, such as Bluetooth, communication protocols described in IEEE 802.11, GSM, CDMA, UMTS, EV-DO, WiMAX, LTE, Zigbee, dedicated short range communications (DSRC), radio frequency identification (RFID) communications, etc. It should be appreciated that the internal communication between the computing device(s)132and the system(s)122,124,126,140,142within passenger vehicle100may be wired and/or wireless. As a particular example, systems within passenger vehicle100may be connected and communicate via a CAN bus. As a hybrid vehicle, passenger vehicle100can operate with lower emissions than a conventional vehicle driven solely by an internal combustion engine. Passenger vehicle100can be propelled by both internal combustion engine system142and electric machine system140using electrical accumulator system126as an electrical power source. Power flow within passenger vehicle100may be selectively switchable between internal combustion engine system142and electric machine system140. For example, the driver of passenger vehicle100may choose between internal combustion engine system142and electric machine system140as the prime mover for passenger vehicle100and switch between the two power sources on demand.
Increased usage of electric machine system140and decreased usage of internal combustion engine system142can advantageously reduce carbon dioxide emissions in passenger vehicle100. However, drivers frequently miss opportunities to switch from internal combustion engine system142to electric machine system140and thereby contribute to more environmentally friendly operation of passenger vehicle100. Certain aspects of the present subject matter encourage driving behavior that reduces operation of internal combustion engine system142and increases operation of electric machine system140in order to contribute to more environmentally friendly operation of passenger vehicle100. Referring now toFIG.4, a flow diagram of a method300for operating a hybrid vehicle is illustrated. Method300will generally be described with reference to passenger vehicle100described with reference toFIGS.1and2, and control system130described with reference toFIG.3. For instance, method300may be at least partially executed by computing device(s)132of control system130. However, method300may be suitable for use with any other suitable type of vehicle, control system configuration, and/or vehicle system. In addition, althoughFIG.4depicts steps performed in a particular order for purposes of illustration and discussion, the methods and algorithms discussed herein are not limited to any particular order or arrangement. One skilled in the art, using the disclosures provided herein, will appreciate that various steps of the methods and algorithms disclosed herein can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure. At310, method300includes obtaining propulsion switching data. The propulsion switching data may be descriptive of a state of one or more vehicle systems for switching power flow of internal combustion engine system142and electric machine system140. As an example, at310, control system130may receive propulsion switching data from various systems of passenger vehicle100, including internal combustion engine system142, electric machine system140, transmission system124, electrical accumulator system126, cabin environment system152, navigation system154, positioning system156, etc. via a CAN bus of passenger vehicle100. Thus, propulsion switching data may be obtained from any system of passenger vehicle100that contributes to switching propulsion of passenger vehicle100between internal combustion engine system142and electric machine system140. As a particular example, speed and/or temperature sensors within internal combustion engine system142, electric machine system140, and/or transmission system124may transmit propulsion switching data to control system130at310. As another example, temperature readings and/or charge state signals from electrical accumulator system126may be transmitted as propulsion switching data to control system130at310. As more examples, navigation system154may transmit a destination for passenger vehicle100to control system130as propulsion switching data at310, and positioning system156may transmit a current location of passenger vehicle100to control system130as propulsion switching data at310. Further, cabin environment system152may also transmit a current air conditioning and/or heater operating state to control system130as propulsion switching data at310.
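One way to picture the propulsion switching data gathered at310is as a snapshot of the relevant system states read off the CAN bus. The container below is only illustrative; the field names, types, and units are assumptions rather than a structure defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PropulsionSwitchingData:
    """Illustrative snapshot gathered at block 310 (field names assumed)."""
    engine_speed_rpm: float
    motor_temperature_c: float
    battery_charge_pct: float
    battery_temperature_c: float
    hvac_power_kw: float                                     # cabin environment system load
    destination: Optional[Tuple[float, float]] = None        # from the navigation system
    current_location: Optional[Tuple[float, float]] = None   # from the positioning system
```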
The various sensor data and/or operating states of the systems of passenger vehicle100may impact the selection of the power source of passenger vehicle100between internal combustion engine system142and electric machine system140. Thus, obtaining the propulsion switching data at310may assist with switching from internal combustion engine system142to electric machine system140, e.g., in order to reduce emissions and/or efficiently operate passenger vehicle100, as described in greater detail below. At320, method300includes comparing the propulsion switching data to model propulsion switching data. The model propulsion switching data may be descriptive of a model state of the one or more vehicle systems when switching power flow from internal combustion engine system142to electric machine system140is desirable. For instance, the model propulsion switching data may correspond to conditions when switching power flow from internal combustion engine system142to electric machine system140reduces emissions and/or more efficiently operates passenger vehicle100. The model propulsion switching data may be calculated, gathered, or otherwise provided by a manufacturer of passenger vehicle100and/or drivetrain system120. At320, control system130may compare the propulsion switching data to the model propulsion switching data. For example, the model propulsion switching data may be saved within the one or more memories136of control system130, and control system130may retrieve the model propulsion switching data from the one or more memories136at320. As another example, the model propulsion switching data may be saved within a remote server, e.g., of the manufacturer of passenger vehicle100and/or drivetrain system120, and control system130may retrieve the model propulsion switching data from the remote server via wireless communication system160. In certain example embodiments, by comparing the propulsion switching data to the model propulsion switching data at320, control system130may establish whether the actual operating state of passenger vehicle100is optimal with respect to emissions and/or efficiency. Thus, e.g., control system130may determine that passenger vehicle100may be operated more efficiently by switching from internal combustion engine system142to electric machine system140for propulsion of passenger vehicle100based upon the difference between the propulsion switching data from310and the model propulsion switching data. In particular, when the propulsion switching data from310is different than the model propulsion switching data by less than a threshold, control system130may determine that passenger vehicle100is operating efficiently with internal combustion engine system142for propulsion of passenger vehicle100. Conversely, control system130may determine that passenger vehicle100may operate more efficiently by switching from internal combustion engine system142to electric machine system140for propulsion of passenger vehicle100when the propulsion switching data from310is different than the model propulsion switching data by more than the threshold. At330, method300includes determining a driver behavior recommendation to initiate a switch in the power flow from internal combustion engine system142to electric machine system140. The driver of passenger vehicle100may take various actions to assist with switching from internal combustion engine system142to electric machine system140for propulsion of passenger vehicle100.
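Before detailing the possible recommendations, the comparison at320can be sketched as measuring how far the actual state is from the model state and testing against a threshold. The per-field absolute-difference metric below is an assumption for illustration; the disclosure does not specify a particular distance measure.

```python
def should_recommend_ev_switch(actual: dict, model: dict, threshold: float) -> bool:
    """Sketch of block 320: compare propulsion switching data to model data.

    Returns True when the accumulated deviation exceeds the threshold, i.e.
    the vehicle could likely run more efficiently on the electric machine
    system than on the internal combustion engine system.  The aggregate
    absolute-difference metric is an assumption, not the patented method.
    """
    deviation = sum(abs(actual[key] - model[key]) for key in model)
    return deviation > threshold

recommend = should_recommend_ev_switch(
    {"speed_kph": 52.0, "battery_charge_pct": 80.0},
    {"speed_kph": 45.0, "battery_charge_pct": 60.0},
    threshold=20.0,
)
```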
For instance, the driver may decrease a speed of passenger vehicle100, may adjust cabin environment system152to decrease energy consumption of cabin environment system152, may plug passenger vehicle100into a charging station upon arrival at a destination to charge electrical accumulator system126, may schedule servicing of electrical accumulator system126, may command switching from internal combustion engine system142to electric machine system140, etc. Control system130may determine the driver behavior recommendation in order to increase the usage of electric machine system140and decrease the usage of internal combustion engine system142when the driver of passenger vehicle100implements the driver behavior recommendation. At340, method300includes presenting the driver behavior recommendation from330on a driver interface. For example, control system130may present the driver behavior recommendation on information system150. In particular, the driver behavior recommendation may be presented visually on a display of information system150, audibly on a speaker of information system150, and/or in any other suitable manner to inform the driver of passenger vehicle100of the driver behavior recommendation via information system150. As another example, control system130may transmit the driver behavior recommendation to a computing device, such as a smartphone or tablet, via wireless communication system160. For instance, a software application on the computing device of the driver may visually present the driver behavior recommendation on a display of the computing device, audibly present the driver behavior recommendation on a speaker of the computing device, etc. As noted above, the driver behavior recommendation can encourage the driver of passenger vehicle100to take actions which encourage switching from internal combustion engine system142to electric machine system140for propulsion of passenger vehicle100. Thus, based at least in part on the driver behavior recommendation from340, the driver of passenger vehicle100may adjust operation of passenger vehicle100to increase the usage of electric machine system140and decrease the usage of internal combustion engine system142. For instance, the driver may decrease the speed of passenger vehicle100, may adjust cabin environment system152to decrease energy consumption of cabin environment system152, may plug passenger vehicle100into a charging station, may schedule servicing of electrical accumulator system126, may command switching from internal combustion engine system142to electric machine system140, etc. in response to receipt of the driver behavior recommendation. In certain example embodiments, the driver behavior recommendation may be automatically implemented by control system130unless the driver opts out of the driver behavior recommendation. It will be understood that while described above in the context of a hybrid vehicle, certain aspects of the present subject matter may be used with conventional internal combustion powered vehicles to reduce emissions and save fuel. For instance, emission data from vehicle systems may be collected via a CAN bus and compared to model data. A driver behavior recommendation for reducing fuel consumption may be developed based upon the difference between the collected data and the model data, and the driver behavior recommendation may be presented to the driver to encourage environmentally friendly driving. Referring now toFIG.5, a flow diagram of a method400for operating a hybrid vehicle is illustrated.
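Before turning to method400, the recommendation step at blocks330-340described above can be pictured as a small rule table keyed to whichever condition is driving the deviation. The thresholds and message wording below are invented for illustration and are not taken from the disclosure.

```python
def driver_behavior_recommendation(actual: dict, model: dict) -> str:
    """Sketch of blocks 330-340: pick a recommendation and return the text
    that would be shown on the information system or a paired smartphone.
    Rules, thresholds, and wording are assumptions, not the patented logic.
    """
    if actual.get("speed_kph", 0.0) > model.get("speed_kph", 0.0) + 10.0:
        return "Reduce speed to allow a switch to electric drive."
    if actual.get("hvac_power_kw", 0.0) > model.get("hvac_power_kw", 0.0):
        return "Lower climate-control load to extend electric operation."
    if actual.get("battery_charge_pct", 100.0) < model.get("battery_charge_pct", 0.0):
        return "Plug in at your destination to recharge the battery."
    return "Switch to electric drive now."
```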
Method400will generally be described with reference to passenger vehicle100described with reference toFIGS.1and2, and control system130described with reference toFIG.3. For instance, method400may be at least partially executed by computing device(s)132of control system130. However, method400may be suitable for use with any other suitable type of vehicle, control system configuration, and/or vehicle system. In addition, althoughFIG.5depicts steps performed in a particular order for purposes of illustration and discussion, the methods and algorithms discussed herein are not limited to any particular order or arrangement. One skilled in the art, using the disclosures provided herein, will appreciate that various steps of the methods and algorithms disclosed herein can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure. At410, method400includes operating passenger vehicle100on a roadway. Thus, e.g., a driver may operate passenger vehicle100such that internal combustion engine system142and/or electric machine system140propel passenger vehicle100along the roadway. Accordingly, passenger vehicle100may be operating to convey the driver, one or more passengers, and/or cargo to a destination at410. Passenger vehicle100may not be undergoing regulatory emissions testing within a controlled setting at410but rather may be operated in a normal, day-to-day manner. At420, method400includes obtaining propulsion switching data while passenger vehicle100is travelling on the roadway. The propulsion switching data is descriptive of a state of one or more vehicle systems for switching power flow of internal combustion engine system142and electric machine system140. As an example, at420, control system130may receive propulsion switching data from various systems of passenger vehicle100, including internal combustion engine system142, electric machine system140, transmission system124, electrical accumulator system126, navigation system154, positioning system156, etc. via a CAN bus of passenger vehicle100. Thus, propulsion switching data may be obtained from any system of passenger vehicle100that contributes to switching propulsion of passenger vehicle100between internal combustion engine system142and electric machine system140. At420, control system130may obtain the propulsion switching data over a regulatory operating interval, such as a predetermined distance. Thus, e.g., the period or interval over which propulsion switching data is obtained while passenger vehicle100is travelling on the roadway may correspond to a distance or time period defined by regulatory testing requirements, e.g., despite not operating passenger vehicle100under testing conditions but rather in a normal, day-to-day manner. As a particular example, the propulsion switching data may include a start time of passenger vehicle100operating on the roadway at410, an end time of passenger vehicle100operating on the roadway at410, a speed of passenger vehicle100while operating on the roadway at410, an average speed of passenger vehicle100while operating on the roadway at410, a total operating time of passenger vehicle100while operating on the roadway at410, an interval of uninterrupted operation (e.g., of internal combustion engine system142and/or electric machine system140) while operating on the roadway at410, and a switch time between internal combustion engine system142and the electric machine system140while operating on the roadway at410.
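Together with the additional quantities listed next, the data logged at420suggests a per-trip record accumulated over the regulatory operating interval. The container below is illustrative only; its field names and the derived electric-share calculation are assumptions, not structures defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TripRecord:
    """Illustrative log of propulsion switching data for blocks 420-430."""
    start_time: float                 # epoch seconds
    end_time: float
    distance_km: float
    average_speed_kph: float
    average_fuel_l_per_100km: float
    engine_to_ev_switch_times: List[float] = field(default_factory=list)
    uninterrupted_ev_intervals_s: List[float] = field(default_factory=list)

    def ev_share(self) -> float:
        """Fraction of the trip spent in uninterrupted electric operation."""
        total = max(self.end_time - self.start_time, 1e-9)
        return sum(self.uninterrupted_ev_intervals_s) / total
```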
The propulsion switching data may also include an average fuel consumption rate while operating on the roadway at410, a distance travelled by passenger vehicle100while operating on the roadway at410, the operating status of cabin environment system152(such as a current air conditioning and/or heater operating state), an exterior temperature about passenger vehicle100, a charge status of electrical accumulator system126, etc. Thus, obtaining the propulsion switching data at420may assist with obtaining actual emissions and/or efficiency data for passenger vehicle100. At430, method400includes storing the propulsion switching data. For example, the propulsion switching data may be saved within the one or more memories136of control system130, and control system130may store the propulsion switching data within the one or more memories136at430. As another example, the propulsion switching data may be saved within a remote server, e.g., of a manufacturer of passenger vehicle100and/or drivetrain system120, and control system130may transmit the propulsion switching data to the remote server via wireless communication system160. At440, method400includes processing the propulsion switching data in order to determine an actual environmental impact of passenger vehicle100while passenger vehicle100operated on the roadway at410. The actual environmental impact of passenger vehicle100from440may be used to assist manufacturer compliance with emission regulations, such as annual fleet average fuel economy and emission regulations. When the actual emission and efficiency performance of passenger vehicle100exceeds the tested performance of passenger vehicle100, the actual environmental impact of passenger vehicle100from440may be used to evidence regulation compliance and/or for emissions credits. Moreover, a decrease in carbon dioxide emissions may be shown with the actual environmental impact of a fleet of passenger vehicles100. For example, the manufacturer of passenger vehicle100may appeal the tested emissions with actual emissions testing conducted via method400. Thus, method400may be implemented across a fleet of vehicles to gather data for such a fleet. Accordingly, the actual environmental impact for a plurality of vehicles may be accumulated using method400for each vehicle of the plurality of vehicles. This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they include structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
LIST OF REFERENCE CHARACTERS
100 Passenger vehicle
110 Body
112 Interior cabin
114 Doors
116 Wheels
120 Drivetrain system
122 Motor system
124 Transmission system
126 Electrical accumulator/storage system
128 Torque converter
129 Differential
130 Control system
132 Computing devices
134 Processors
136 Memories
138 Instructions
139 Data
140 Electric machine system
142 Internal combustion engine system
150 Information system
152 Cabin environment system
154 Navigation system
156 Positioning system
160 Wireless communications system
300 Method
400 Method | 31,211
11858497 | The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed. DETAILED DESCRIPTION Drifting/sliding/slipping occurs when the tires of a vehicle lose traction with the road surface. As alluded to above, there is a limit to the amount of lateral frictional force a vehicle's tires can exert on a road surface. Past this peak force limit, the tires will saturate (i.e. lose traction with the road), and begin to drift/slip. Drifting has a reputation for being dangerous because both human drivers and autonomous control technologies have trouble managing the dual objectives of path tracking (i.e. controlling where the vehicle is going), and restoring stability (i.e. restoring traction between the tires and the road, and/or stopping the vehicle from spinning out of control) while drifting. For this reason, most automated/assisted driving technologies look primarily at the grip driving range and try to limit the vehicle's performance so that the tires will always be below their peak force capability. By limiting the vehicle's performance in this way, the vehicle will stay within the grip driving range and autonomous control algorithms that work in that range can be applied. However, in certain situations, a driver may want to drift. For example, many race car drivers intentionally cause a vehicle to drift in order to navigate sharp turns at peak efficiency. While drifting, the vehicle will often point in a different direction than it is moving. Put another way, the racer operates the vehicle at a high sideslip angle (i.e. the angle between the direction the vehicle is pointing, and the vehicle's linear velocity vector). In this way, the racer is able to navigate the turn faster than would be possible in the grip driving range. Racers are able to control/manipulate the drift condition of the vehicle in a number of ways. For example, racers may use a counter-steering technique (i.e. rotating the steering wheel counter to the desired direction of a turn, for example, steering left to turn right), in order to increase the aggressiveness of a drift. Racers may also perform clutch kicks (i.e. a depression and sudden release of the clutch pedal which jolts the driveline of the vehicle) in order to increase the aggressiveness of mild drifts, and/or initiate drifts. More generally, racers are able to manipulate the throttle, steering, brakes, and clutch of a vehicle in order to control it during a drift. Outside the professional racing context, drift driving/learning to drift drive can be a fun activity which enhances the driver experience. However, as alluded to above, controlling a drifting vehicle can be difficult for inexperienced drivers. Initiating a drift can be even more difficult (and dangerous). For this reason, an automated/assisted driving system which facilitates an interactive drift driving experience for the driver, while maintaining a safe/stable drift, is highly desirable. Accordingly, embodiments of the technology disclosed herein are directed towards systems and methods of using an autonomous/assisted driving system to maintain a stable drift (i.e. the range of operating conditions where a vehicle is both drifting, and controllable) while providing an interactive drift driving experience for the driver. In a first set of embodiments, a driver is allowed to take manual control of the vehicle once a stable drift has been initiated. The driver is able to use steering, throttle, clutch, and brakes in order to control the vehicle in the drift.
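As an aside, because the sideslip angle is defined above as the angle between the direction the vehicle points and its velocity vector, it can be computed directly from body-frame velocity components. The sketch below assumes the usual longitudinal/lateral convention and sign choice, which the disclosure does not specify.

```python
import math

def sideslip_angle_deg(v_longitudinal: float, v_lateral: float) -> float:
    """Angle between the vehicle's heading and its velocity vector, in degrees.

    v_longitudinal: speed along the direction the vehicle points (m/s)
    v_lateral:      speed perpendicular to that direction (m/s)
    The sign convention (positive for leftward slip) is an assumption.
    """
    return math.degrees(math.atan2(v_lateral, v_longitudinal))

# A vehicle travelling 20 m/s forward while sliding 8 m/s sideways is
# drifting at roughly a 22 degree sideslip angle.
print(round(sideslip_angle_deg(20.0, 8.0), 1))  # 21.8
```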
In these embodiments, the autonomous/assisted driving system merely provides corrective assistance. In some embodiments, the autonomous/assisted driving system may provide corrective assistance when the driver attempts to enter an unstable drift condition. Put another way, in response to manual control operations performed by the driver, the automated/assisted driving system will provide corrective assistance in order to prevent the vehicle from entering an unstable drift. In a second set of embodiments, an autonomous driving system maintains control of the vehicle throughout the drift. However, a driver of the vehicle is provided with multiple vehicle interfaces on which to perform “simulated” drift maneuvers, which may communicate the driver's desire to (1) enter a more aggressive drift; (2) enter a less aggressive drift; or (3) exit the drift. As will be described in greater detail below, the autonomous driving system interprets these simulated drift maneuvers as a request to enter a desired drift condition. The autonomous driving system then controls the vehicle in order to achieve the desired drift condition (or a stable approximation of the desired drift condition). Importantly, in all of these embodiments, the autonomous/assisted driving system utilizes the full actuation capability of the vehicle (i.e. throttle, steering, brakes, and clutch) in order to control the vehicle in a drift. As will be discussed in greater detail below, by utilizing the full actuation capability of the car, these embodiments are able to achieve a broader range of drifting/driving effects than would be possible using a smaller number of actuators (e.g. throttle and steering). The systems and methods disclosed herein may be implemented with any of a number of different vehicles and vehicle types. For example, the systems and methods disclosed herein may be used with automobiles, trucks, motorcycles, recreational vehicles and other like on- or off-road vehicles. In addition, the principles disclosed herein may also extend to other vehicle types as well. An example hybrid electric vehicle (HEV) in which embodiments of the disclosed technology may be implemented is illustrated inFIG.1A. Although the example described with reference toFIG.1Ais a hybrid type of vehicle, the systems and methods for facilitating an interactive drift driving experience for the driver while maintaining a safe/stable drift, can be implemented in other types of vehicle including gasoline- or diesel-powered vehicles, fuel-cell vehicles, electric vehicles, or other vehicles. FIG.1Aillustrates a drive system of a vehicle10that may include an internal combustion engine14and one or more electric motors22(which may also serve as generators) as sources of motive power. Driving force generated by the internal combustion engine14and motors22can be transmitted to one or more wheels34via a torque converter16and/or clutch15, a transmission18, a differential gear device28, and a pair of axles30. As an HEV, vehicle10may be driven/powered with either or both of internal combustion engine14and the motor(s)22as the drive source for travel. For example, a first travel mode may be an engine-only travel mode that only uses internal combustion engine14as the source of motive power. A second travel mode may be an EV travel mode that only uses the motor(s)22as the source of motive power. A third travel mode may be an HEV travel mode that uses internal combustion engine14and the motor(s)22as the sources of motive power.
In the engine-only and HEV travel modes, vehicle10relies on the motive force generated at least by internal combustion engine14, and a clutch15may be included to engage internal combustion engine14. In the EV travel mode, vehicle10is powered by the motive force generated by motor22while internal combustion engine14may be stopped and clutch15disengaged. Internal combustion engine14can be an internal combustion engine such as a gasoline, diesel or similarly powered engine in which fuel is injected into and combusted in a combustion chamber. A cooling system12can be provided to cool the internal combustion engine14such as, for example, by removing excess heat from internal combustion engine14. For example, cooling system12can be implemented to include a radiator, a water pump and a series of cooling channels. In operation, the water pump circulates coolant through the internal combustion engine14to absorb excess heat from the engine. The heated coolant is circulated through the radiator to remove heat from the coolant, and the cold coolant can then be recirculated through the engine. A fan may also be included to increase the cooling capacity of the radiator. The water pump, and in some instances the fan, may operate via a direct or indirect coupling to the driveshaft of internal combustion engine14. In other applications, either or both the water pump and the fan may be operated by electric current such as from battery44. An output control circuit14A may be provided to control drive (output torque) of internal combustion engine14. Output control circuit14A may include a throttle actuator to control an electronic throttle valve that controls fuel injection, an ignition device that controls ignition timing, and the like. Output control circuit14A may execute output control of internal combustion engine14according to command control signal(s) supplied from an electronic control unit50, described below. Such output control can include, for example, throttle control, fuel injection control, and ignition timing control. Throttle commands from a driver of vehicle10may be communicated by wire to electronic control unit50via an accelerator pedal position sensor attached to the accelerator pedal (not pictured). The accelerator pedal position sensor may be one of sensors52, described below. Motor22can also be used to provide motive power in vehicle10and is powered electrically via a battery44. Battery44may be implemented as one or more batteries or other power storage devices including, for example, lead-acid batteries, lithium ion batteries, capacitive storage devices, and so on. Battery44may be charged by a battery charger45that receives energy from internal combustion engine14. For example, an alternator or generator may be coupled directly or indirectly to a drive shaft of internal combustion engine14to generate an electrical current as a result of the operation of internal combustion engine14. A clutch can be included to engage/disengage the battery charger45. Battery44may also be charged by motor22such as, for example, by regenerative braking or by coasting during which time motor22operates as a generator. Motor22can be powered by battery44to generate a motive force to move the vehicle and adjust vehicle speed. Motor22can also function as a generator to generate electrical power such as, for example, when coasting or braking. Battery44may also be used to power other electrical or electronic systems in the vehicle. Motor22may be connected to battery44via an inverter42.
Battery44can include, for example, one or more batteries, capacitive storage units, or other storage reservoirs suitable for storing electrical energy that can be used to power motor22. When battery44is implemented using one or more batteries, the batteries can include, for example, nickel metal hydride batteries, lithium ion batteries, lead acid batteries, nickel cadmium batteries, lithium ion polymer batteries, and other types of batteries. An electronic control unit50(described below) may be included and may control the electric drive components of the vehicle as well as other vehicle components. For example, electronic control unit50may control inverter42, adjust driving current supplied to motor22, and adjust the current received from motor22during regenerative coasting and braking. As a more particular example, output torque of the motor22can be increased or decreased by electronic control unit50through the inverter42. A torque converter16can be included to control the application of power from internal combustion engine14and motor22to transmission18. Torque converter16can include a viscous fluid coupling that transfers rotational power from the motive power source to the driveshaft via the transmission. Torque converter16can include a conventional torque converter or a lockup torque converter. In other embodiments, a mechanical clutch can be used in place of torque converter16. Clutch15can be included to engage and disengage internal combustion engine14from the drivetrain of the vehicle. In the illustrated example, a crankshaft32, which is an output member of internal combustion engine14, may be selectively coupled to the motor22and torque converter16via clutch15. Clutch15can be implemented as, for example, a multiple disc type hydraulic frictional engagement device whose engagement is controlled by an actuator such as a hydraulic actuator. Clutch15may be controlled using a clutch-by-wire system. In this system, the engagement of the clutch may be controlled by a clutch actuator (not pictured). Electronic control unit50may control the clutch actuator. Clutch commands may be communicated from the driver of vehicle10to electronic control unit50via a clutch pedal position sensor positioned on the clutch pedal of vehicle10. In some embodiments, this sensor may be one of sensors52. Clutch15may be controlled such that its engagement state is complete engagement, slip engagement, or complete disengagement, depending on the pressure applied to the clutch. For example, a torque capacity of clutch15may be controlled according to the hydraulic pressure supplied from a hydraulic control circuit (not illustrated). When clutch15is engaged, power transmission is provided in the power transmission path between the crankshaft32and torque converter16. On the other hand, when clutch15is disengaged, motive power from internal combustion engine14is not delivered to the torque converter16. In a slip engagement state, clutch15is engaged, and motive power is provided to torque converter16according to a torque capacity (transmission torque) of the clutch15. Vehicle10may further include a brake-by-wire system (not pictured). In this system, a brake actuator may control the application of brakes to wheels34. Electronic control unit50may control the brake actuator. Braking commands may be communicated from the driver of vehicle10to electronic control unit50via a brake pedal position sensor positioned on the brake pedal of vehicle10.
In some embodiments, vehicle10may also include a hand brake/parking brake which is connected by wire to electronic control unit50in a similar fashion. Vehicle10may also include a steering-by-wire system (not pictured). In this system, a steering actuator may control the direction of wheels34. Electronic control unit50may control the steering actuator. Steering commands may be communicated from the driver of vehicle10to electronic control unit50via a steering angle sensor positioned on the steering wheel of vehicle10. As alluded to above, vehicle10may include an electronic control unit50. Electronic control unit50may include circuitry to control various aspects of the vehicle operation. Electronic control unit50may include, for example, a microcomputer that includes one or more processing units (e.g., microprocessors), memory storage (e.g., RAM, ROM, etc.), and I/O devices. The processing units of electronic control unit50execute instructions stored in memory to control one or more electrical systems or subsystems in the vehicle. As will be discussed in greater detail below, electronic control unit50may be used to control vehicle10in order to maintain a stable drift, and to effectuate a desired drift condition communicated by a driver of vehicle10. Electronic control unit50can include a plurality of electronic control units such as, for example, an electronic engine control module, a powertrain control module, a transmission control module, a steering control module, a clutch control module, a suspension control module, a body control module, and so on. As a further example, electronic control units can be included to control systems and functions such as doors and door locking, lighting, human-machine interfaces, cruise control, telematics, braking systems (e.g., ABS or ESC), battery management systems, and so on. These various control units can be implemented using two or more separate electronic control units, or using a single electronic control unit. In the example illustrated inFIG.1A, electronic control unit50receives information from a plurality of sensors included in vehicle10. For example, electronic control unit50may receive signals that indicate vehicle operating conditions or characteristics, or signals that can be used to derive vehicle operating conditions or characteristics. These may include, but are not limited to, throttle/accelerator operation amount, ACC, a steering angle, SA, of the steering wheel, yaw rate of the vehicle, Y (e.g. the angular velocity of the vehicle around its yaw axis), sideslip angle of the vehicle, SSA(e.g. the angle between the direction the vehicle is pointing and the vehicle's linear velocity vector), and vehicle speed, NV. These may also include torque converter16output, NT(e.g., output amps indicative of motor output), brake operation amount/pressure, B, and clutch operation amount/pressure, C. Accordingly, vehicle10can include a plurality of sensors52that can be used to detect various conditions internal or external to the vehicle and provide sensed conditions to electronic control unit50(which, again, may be implemented as one or a plurality of individual control circuits). In some embodiments, one or more of the sensors52may include their own processing capability to compute the results for additional information that can be provided to electronic control unit50. In other embodiments, one or more sensors may be data-gathering-only sensors that provide only raw data to electronic control unit50.
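The operating signals enumerated above can be viewed as a single snapshot that the control unit samples on each control cycle. The following Python sketch is illustrative only; the field names, the sensors mapping, and its read() accessors are assumptions introduced here and are not part of the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class VehicleStateSnapshot:
    """One sampling of the operating signals described above."""
    acc: float        # throttle/accelerator operation amount, ACC
    sa: float         # steering angle, SA
    yaw_rate: float   # yaw rate, Y (rad/s)
    sideslip: float   # sideslip angle, SSA (rad)
    speed: float      # vehicle speed, NV
    nt: float         # torque converter output, NT
    brake: float      # brake operation amount/pressure, B
    clutch: float     # clutch operation amount/pressure, C

def read_snapshot(sensors) -> VehicleStateSnapshot:
    """Poll each sensor once; `sensors` is assumed to map names to objects with read()."""
    return VehicleStateSnapshot(
        acc=sensors["accelerator_pedal"].read(),
        sa=sensors["steering_angle"].read(),
        yaw_rate=sensors["yaw_rate"].read(),
        sideslip=sensors["sideslip"].read(),
        speed=sensors["vehicle_speed"].read(),
        nt=sensors["torque_converter"].read(),
        brake=sensors["brake_pedal"].read(),
        clutch=sensors["clutch_pedal"].read(),
    )
```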
In further embodiments, hybrid sensors may be included that provide a combination of raw data and processed data to electronic control unit50. For example, while vehicle10is drifting, a clutch pedal position sensor may be able to interpret a quick depression and release of the clutch pedal (i.e. an attempt at a “clutch kick”) as an indication that the driver of vehicle10wants to increase the aggressiveness of the drift. Sensors52may provide an analog output or a digital output. Sensors52may be included to detect not only vehicle conditions but also to detect external conditions as well. Sensors that might be used to detect external conditions can include, for example, sonar, radar, lidar or other vehicle proximity sensors, and cameras or other image sensors. Image sensors can be used to detect, for example, traffic signs indicating a current speed limit, road curvature, obstacles, and so on. Still other sensors may include those that can detect road grade. While some sensors can be used to actively detect passive environmental objects, other sensors can be included and used to detect active objects such as those objects used to implement smart roadways that may actively transmit and/or receive data or other information. FIG.1Billustrates an example architecture for a system that can facilitate an interactive drift driving experience for a driver of a vehicle, while maintaining a safe/stable drift, in accordance with one embodiment of the systems and methods described herein. Referring now toFIG.1B, in this example, drift control system200includes drift control circuit210, a plurality of sensors152, and a plurality of vehicle systems158. Sensors152and vehicle systems158can communicate with drift control circuit210via a wired or wireless communication interface. Although sensors152and vehicle systems158are depicted as communicating with drift control circuit210, they can also communicate with each other as well as with other vehicle systems. Drift control circuit210can be implemented as an ECU or as part of an ECU such as, for example, electronic control unit50. In other embodiments, drift control circuit210can be implemented independently of the ECU. Drift control circuit210in this example includes a communication circuit201, a decision circuit203(including a processor206and memory208in this example) and a power supply212. Components of drift control circuit210are illustrated as communicating with each other via a data bus, although other communication interfaces can be included. Drift control circuit210in this example also includes a manual assist switch205that can be operated by the user to manually select the assist mode. Processor206can include a GPU, CPU, microprocessor, or any other suitable processing system. The memory208may include one or more various forms of memory or data storage (e.g., flash, RAM, etc.) that may be used to store the calibration parameters, images (analysis or historic), point parameters, instructions and variables for processor206as well as any other suitable information. As will be described in greater detail below, memory208may also store regions of a vehicle state space (i.e. a grip driving region, a stable drift region, and an unstable drift region). Memory208can be made up of one or more modules of one or more different types of memory, and may be configured to store data and other information as well as operational instructions that may be used by the processor206to operate drift control circuit210.
Although the example ofFIG.1Bis illustrated using processor and memory circuitry, as described below with reference to circuits disclosed herein, decision circuit203can be implemented utilizing any form of circuitry including, for example, hardware, software, or a combination thereof. By way of further example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a drift control circuit210. Communication circuit201may include either or both a wireless transceiver circuit202with an associated antenna214and a wired I/O interface204with an associated hardwired data port (not illustrated). As this example illustrates, communications with drift control circuit210can include either or both wired and wireless communications circuits201. Wireless transceiver circuit202can include a transmitter and a receiver (not shown) to allow wireless communications via any of a number of communication protocols such as, for example, WiFi, Bluetooth, near field communications (NFC), Zigbee, and any of a number of other wireless communication protocols whether standardized, proprietary, open, point-to-point, networked or otherwise. Antenna214is coupled to wireless transceiver circuit202and is used by wireless transceiver circuit202to transmit radio signals wirelessly to wireless equipment with which it is connected and to receive radio signals as well. These RF signals can include information of almost any sort that is sent or received by drift control circuit210to/from other entities such as sensors152and vehicle systems158. Wired I/O interface204can include a transmitter and a receiver (not shown) for hardwired communications with other devices. For example, wired I/O interface204can provide a hardwired interface to other components, including sensors152and vehicle systems158. Wired I/O interface204can communicate with other devices using Ethernet or any of a number of other wired communication protocols whether standardized, proprietary, open, point-to-point, networked or otherwise. Power supply212can include one or more of a battery or batteries (such as, e.g., Li-ion, Li-Polymer, NiMH, NiCd, NiZn, and NiH2, to name a few, whether rechargeable or primary batteries), a power connector (e.g., to connect to vehicle supplied power, etc.), an energy harvester (e.g., solar cells, piezoelectric system, etc.), or it can include any other suitable power supply. Sensors152can include, for example, sensors52such as those described above with reference to the example ofFIG.1A. Sensors152can include additional sensors that may or may not otherwise be included on a standard vehicle10with which drift control system200is implemented. In the illustrated example, sensors152include vehicle acceleration sensors212, vehicle speed sensors214, wheelspin sensors216(e.g., one for each wheel), a tire pressure monitoring system (TPMS)220, accelerometers such as a 3-axis accelerometer222to detect roll, pitch and yaw of the vehicle, vehicle clearance sensors224, left-right and front-rear slip ratio sensors226, and environmental sensors228(e.g., to detect salinity or other environmental conditions). Additional sensors232can also be included as may be appropriate for a given implementation of drift control system200. For example, additional sensors232may include sensors for throttle engagement (i.e. accelerator pedal position), brake engagement, clutch engagement, and steering wheel position.
There may also be additional sensors for detecting and/or computing sideslip velocities, sideslip angles, percent sideslip, frictional forces, degree of steer, heading, trajectory, front slip angle corresponding to full tire saturation, rear slip angle corresponding to full tire saturation, maximum stable steering angle given speed/friction, gravitational constant, coefficient of friction between vehicle10tires and roadway, distance from center of gravity of vehicle10to front axle, distance from center of gravity of vehicle10to rear axle, total mass of vehicle10, total longitudinal force, rear longitudinal force, front longitudinal force, total lateral force, rear lateral force, front lateral force, longitudinal speed, lateral speed, longitudinal acceleration, time derivatives of steering wheel position, time derivatives of throttle, gear, exhaust, revolutions per minutes, mileage, emissions, and/or other operational parameters of vehicle10. Vehicle systems158can include any of a number of different vehicle components or subsystems used to control or monitor various aspects of the vehicle and its performance. In this example, the vehicle systems158include a GPS or other vehicle positioning system272; torque splitters274that can control distribution of power among the vehicle wheels such as, for example, by controlling front/rear and left/right torque split; engine control circuits276to control the operation of engine (e.g. internal combustion engine14); steering systems278to turn the wheels of vehicle10; clutch system280; and other vehicle systems282, such as, for example, an adjustable-height air suspension system. During operation, drift control circuit210can receive information from various vehicle sensors which may be used to interpret a desired drift condition communicated by a driver of vehicle10. Communication circuit201can be used to transmit and receive information between drift control circuit210and sensors152, and drift control circuit210and vehicle systems158. Also, sensors152may communicate with vehicle systems158directly or indirectly (e.g., via communication circuit201or otherwise). In various embodiments, communication circuit201can be configured to receive data and other information from sensors152used to interpret a desired drift condition communicated by a driver of vehicle10. Additionally, communication circuit201can be used to send an activation signal or other activation information to various vehicle systems158as part of effectuating the desired drift condition. For example, as described in more detail below, communication circuit201can be used to send signals to, for example, one or more of: torque splitters274to control front/rear torque split and left/right torque split; ICE control circuit276to, for example, control motor torque, motor speed of the various motors in the system; steering system278to, for example, increase the slip angle of the tires; and clutch system280to, for example, approximate a “clutch kick” in order to increase the aggressiveness of a drift. The decision regarding what action to take via these various vehicle systems158can be made based on the information detected by sensors152. Examples of this are described in more detail below. FIG.2is a graph illustrating an example relationship between the lateral force (or cornering force) experienced by a vehicle's rear tires during driving, and the slip angle of the tires. 
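One simple numerical stand-in for the relationship FIG.2 illustrates, which the following passage discusses in detail, is a piecewise-linear curve that grows with the slip angle of the tire and then saturates. This Python sketch is illustrative only; the disclosure does not prescribe a particular tire model, and the stiffness and saturation values passed in would be placeholders.

```python
import math

def lateral_tire_force(slip_angle: float, cornering_stiffness: float,
                       alpha_slide: float) -> float:
    """Cornering (lateral) force versus tire slip angle: linear growth up to the
    saturation point alpha_slide, then held at the peak value. This is a
    simplification of the measured curve, not the disclosed model."""
    peak = cornering_stiffness * alpha_slide
    if abs(slip_angle) < alpha_slide:
        return cornering_stiffness * slip_angle
    return math.copysign(peak, slip_angle)
```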
Similar to the sideslip angle of a vehicle, the slip angle of a tire is the angle between the tire's linear velocity vector and the direction in which the tire is pointing. As the graph inFIG.2illustrates, within a certain range of slip angles (i.e. between −αslideand αslide), as the magnitude of slip angle increases, the magnitude of lateral frictional force between the tires and the road increases as well. For example, as a vehicle negotiates a tight turn, the slip angle of its tires will increase, as will the lateral frictional force between its tires and the road surface. Of note, asFIG.2illustrates, there is a peak lateral force that a vehicle's tires can achieve before they begin to “slip.” This is known as the tire's saturation point. Put another way, at α=|αslide|, the cornering/lateral force saturates, and the tires lose traction with the road surface. For this reason, drift driving is often associated with high tire slip angles (as well as vehicle high sideslip angles), and cornering forces which exceed a vehicle's tire's peak lateral force limit. FIG.3is a diagram illustrating sideslip angle and yaw rate for an example vehicle body. As discussed earlier, the sideslip angle of a vehicle, β, is the angle between the direction the vehicle is pointing, and the vehicle's linear velocity vector. The yaw rate of a vehicle, r, is the angular velocity of the vehicle about its yaw axis. FIG.4is a graph illustrating a vehicle state space and an example vehicle state. Vehicle state space400may be a graph or matrix which represents operation states of the vehicle. In the illustrated example, vehicle state space400is two-dimensional: the y-axis plots yaw rate (rad/s), and the x-axis plots sideslip angle (rad). However, in other embodiments the vehicle state space may be n-dimensional, where each dimension represents a different operational parameter for the vehicle (e.g. vehicle speed, vehicle acceleration, yaw rate, sideslip velocities, sideslip angles, percent sideslip, frictional forces, degree of steer, heading, trajectory, front slip angle corresponding to full tire saturation, rear slip angle corresponding to full tire saturation, maximum stable steering angle given speed/friction, gravitational constant, coefficient of friction between tires and roadway, distance from center of gravity of the vehicle to its front axle, distance from center of gravity of the vehicle to its rear axle, total mass of the vehicle, total longitudinal force, rear longitudinal force, front longitudinal force, total lateral force, rear lateral force, front lateral force, longitudinal speed, lateral speed, longitudinal acceleration, steering angle, throttle engagement, brake engagement, clutch engagement, mileage, emissions, and/or other operational parameters the vehicle). Grip driving region410represents the region in vehicle state space400where a vehicle operates in the grip driving range. As alluded to earlier, the grip driving range is the set of operating conditions where the vehicle's tires maintain traction with the road's surface. In the illustrated example, grip driving region410is a two-dimensional area, with sideslip angle and yaw rate as parameters. However, like vehicle state space400, grip driving region410may be n dimensional. In some embodiments, grip driving region410may be derived using expert driving data. In other embodiments, grip driving region410may be derived from learned information. In some embodiments, grip driving region410may be stored in memory208. 
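Grip driving region410, together with the stable and unstable drift regions described below, can each be stored as a bounded area over (sideslip angle, yaw rate), in which case correlating a vehicle state to the state space reduces to a point-in-region test. The Python sketch below is illustrative only; region shapes would in practice be derived from expert driving data or learned information, and the regions may have more than two dimensions.

```python
def point_in_polygon(point, polygon):
    """Ray-casting membership test; polygon is a list of (sideslip, yaw_rate) vertices."""
    x, y = point
    inside = False
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def classify_vehicle_state(sideslip, yaw_rate, regions):
    """`regions` maps a label such as 'grip', 'stable_drift' or 'unstable_drift'
    to its polygon; returns the label of the first region containing the state."""
    for label, polygon in regions.items():
        if point_in_polygon((sideslip, yaw_rate), polygon):
            return label
    return "unknown"
```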
Stable drift region420represents the region in vehicle state space400where a vehicle is in a controllable drift (e.g. controllable by the automatic braking, vehicle stability control, and traction control systems of a vehicle). In the illustrated example, stable drift region420is a two-dimensional area, with parameters for sideslip angle and yaw rate. However, like vehicle state space400and grip driving region410, stable drift region420may be n dimensional. In some embodiments, stable drift region420may be derived using expert driving data. In other embodiments, stable drift region420may be derived from learned information. In some embodiments, stable drift region420may be stored in memory208. Unstable drift region430represents the region in vehicle state space400where a vehicle is in an uncontrollable drift. In the illustrated example, unstable drift region430is a two-dimensional area, with parameters for sideslip angle and yaw rate. However, unstable drift region430may be n dimensional. In some embodiments, unstable drift region430may be derived using expert driving data. In other embodiments, unstable drift region430may be derived from learned information. In some embodiments, unstable drift region430may be stored in memory208. Vehicle state440is a data set associated with the contemporaneous operation of a vehicle. In the illustrated example, vehicle state440is comprised of two parameters (i.e. sideslip angle and yaw rate). However, in other embodiments vehicle state440may be comprised of n parameters, including all of the parameters described above for vehicle state space400. In some embodiments, drift control circuit210may obtain vehicle state440(i.e. vehicle state data) from sensors152and GPS/VEH Position System272. In the illustrated example, vehicle state440lies within stable drift region420. Put another way, the vehicle associated with vehicle state440is operating in a stable (i.e. controllable) drift. However, in other examples vehicle state440may lie within grip driving region410, or unstable drift region430. Vehicle state440may also lie at the boundary between regions. For example, if vehicle state440were to lie at the boundary of grip driving region410, and stable drift driving region420, it would mean that the vehicle is at the transition between grip driving, and drift driving. FIG.5is a flowchart illustrating example operations that can be performed to provide corrective assistance to a driver who is manually controlling a vehicle during a drift. At operation500, a stable drift is initiated. In some embodiments the stable drift may be initiated by a driver who is manually controlling the vehicle. In other embodiments, the stable drift may be initiated by an autonomous/assisted driving system (e.g. a driver may press a “drift initiation button” which instructs an autonomous/assisted driving system to initiate a stable drift). In these embodiments, the autonomous/assisted driving system may utilize a closed-loop control system in order to initiate the drift. A closed-loop control system (a.k.a. a feedback control system) is a type of control system in which the controlling action depends, in part, on the generated output of the system. More specifically, in a closed-loop control system, part of the generated output, (i.e. the feedback signal; which may be the output signal itself, or some function of the output), is returned to the reference input via a feedback loop. In this way, the generated output of the system is compared to the desired output/reference input. 
For example, a closed-loop system may generate an error signal, which is the difference between the reference input signal and the feedback signal. This error signal is fed to the system controller, which converts it into a control signal designed to reduce the error, thus driving the generated output of the system towards the desired output. In some embodiments, a closed-loop, autonomous/assisted driving system may use control laws involving sideslip, wheel speed, yaw rate, and other vehicle operation states in order to keep the vehicle controllable while initiating the drift. In certain embodiments, the system may use non-linear control theory. In other embodiments, the system may use Model Predictive Control (MPC). In some embodiments, the system may be implemented using a closed-loop controller, such as drift control circuit210. In some embodiments, drift control circuit210may send control signals to two or more actuators of vehicle10in order to initiate the stable drift. For example, drift control circuit210may send control signals to the throttle, steering, brake and clutch actuators of vehicle10in order to initiate the stable drift. After a stable drift has been initiated, at operation502, a driver of the vehicle may take manual control of the vehicle. In some embodiments where the stable drift has been initiated by an autonomous/assisted driving system, the autonomous/assisted driving system may maintain the stable drift for a short period of time before the driver is allowed to take full manual control. Specifically, in these embodiments, the driver may only take manual control of the vehicle if the driver's control inputs are within a certain error tolerance of the autonomous/assisted controls used to maintain the vehicle in a stable drift. For example, one or more interfaces of the vehicle may provide the driver with audiovisual or haptic feedback instructing the driver to increase/decrease throttle, or increase/decrease counter-steering, etc., in order to better match the autonomous/assisted controls used to maintain the vehicle in a stable drift. Accordingly, once the driver's control inputs match the automated/assisted controls within a certain error tolerance, the driver may be allowed to take full manual control of the vehicle. In some embodiments, this may be accomplished by pressing a button (e.g. a “manual drift control button”). Once the driver has taken manual control of the vehicle, the driver may use various vehicle interfaces to perform manual control operations. Vehicle interfaces may include the steering wheel, the accelerator pedal, the brake pedal, a hand brake, a clutch pedal, the gear shift lever of a manual transmission vehicle, or any other interface which allows a driver to interact with the motive systems of the vehicle. A manual control operation is a manipulation of a vehicle interface by a driver, while the driver has manual control of the vehicle, which controls a motive system of the vehicle. Examples of manual control operations include a driver pressing on the accelerator pedal to control throttle, and a driver rotating the steering wheel to control steering. While the driver has manual control of the vehicle during the drift, at operation504, a determination to provide corrective assistance is made. In some embodiments, an autonomous/assisted driving system (as described above) may make this determination. 
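Returning briefly to the closed-loop control used to initiate and hold the drift, the feedback loop described above can be written very compactly. The sketch below uses a proportional-integral law for a single controlled quantity; this is only one of many possible control laws (the disclosure also contemplates non-linear control and Model Predictive Control, neither of which is shown), and the gains are placeholders.

```python
class FeedbackChannel:
    """Drives one measured quantity (for example, yaw rate) toward a reference value."""

    def __init__(self, kp: float, ki: float):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, reference: float, feedback: float, dt: float) -> float:
        error = reference - feedback          # the error signal described above
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral   # control signal

# Example: nudging yaw rate toward a target value while holding a drift.
yaw_channel = FeedbackChannel(kp=0.8, ki=0.1)
correction = yaw_channel.update(reference=0.9, feedback=0.7, dt=0.02)
```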
In certain embodiments, the autonomous/assisted driving system may determine that corrective assistance is required when the driver attempts to operate the vehicle into an unstable drift. Put another way, the autonomous/assisted driving system may determine to provide corrective assistance in response to manual control operations which would bring the vehicle into an unstable drift. This determination may be made by correlating the vehicle state (e.g. vehicle state440) to a vehicle state space (e.g. vehicle state space400). For example, the autonomous/assisted driving system may correlate the vehicle state to the vehicle state space and learn that the vehicle is operating at or near the boundary between the stable drift region (e.g. stable drift region420), and the unstable drift region (e.g. unstable drift region430). If the automated/assisted driving system then detects increased pressure on the accelerator pedal, it may determine that the driver is attempting to operate the vehicle into an unstable drift. In some embodiments, the autonomous/assisted driving system may determine to provide corrective assistance based on factors other than drift stability. For example, the autonomous/assisted driving system may determine corrective assistance is required to meet certain safety objectives, such as collision avoidance and staying on the road, regardless of drift stability. In some embodiments, safety objectives may be determined by inputting vehicle state data and environmental situation data into pre-selected safety protocols (these protocols may be set by the driver, the autonomous/assisted driving system, or the vehicle manufacturer). Similar to vehicle state data, environmental situation data is data associated with the contextual environment in which the vehicle operates. Environmental situation data may include information related to road features (e.g. road path, coefficient of friction between the road and tires, bank angle, number of lanes, etc.), moving and/or stationary objects in proximity to the vehicle's predicted trajectory (e.g. other vehicles, fallen trees, deer, etc.), and ambient weather conditions. In some embodiments, drift control circuit210may obtain environmental situation data from sensors152and GPS/VEH Position System272. As described above, the autonomous/assisted driving system may determine that corrective assistance is required in order to meet certain safety objectives, such as collision avoidance and staying on the road. Further, these safety objectives may be determined by inputting vehicle state data and environmental situation data into pre-selected safety protocols. As an example, the autonomous driving system may determine that corrective assistance is required to meet an objective of staying on the road, even when the driver is controlling the vehicle in a stable drift. In this example, the autonomous system will use vehicle state data (i.e. vehicle speed, vehicle location, etc.) and environmental situation data (i.e. road path, coefficient of friction between the road and tires, etc.) to make this determination. At operation506, corrective assistance is provided based on the determination in the previous operation. As described above, an autonomous/assisted driving system may provide corrective assistance by sending control signals to multiple actuators of the vehicle. For example, the autonomous/assisted driving system may send control signals to the throttle actuator (i.e. reduce throttle), or the brake actuator (i.e. 
increase braking), in order to keep the vehicle in a stable drift. More generally, the autonomous/assisted driving system may send control signals to any combination of the throttle, steering, brake, and clutch actuators of a vehicle (e.g. vehicle10), in order to keep the vehicle within a stable drift. FIG.6is a flowchart illustrating example operations that can be performed to autonomously control a vehicle throughout a drift, while also responding to simulated drift maneuvers performed by a driver of the vehicle. In the embodiments illustrated byFIG.6, it should be understood that performance of a simulated drift maneuver is decoupled from direct manual control of the vehicle (corrected or otherwise). As will be described in greater detail below, a simulated drift maneuver may communicate the driver's desire to enter a particular drift condition to the autonomous driving system, however it is the autonomous driving system, not the driver, which controls the vehicle during the drift. Thus, the embodiments illustrated byFIG.6are different than those illustrated inFIG.5, where a driver has direct manual control of the vehicle during the drift, and an autonomous/assisted driving system merely provides corrective assistance. At operation600, a stable drift is initiated. The stable drift may be initiated in the same/similar way as described in conjunction with operation500ofFIG.5. Once a stable drift has been initiated, at operation602, an autonomous driving system takes control of the vehicle during the drift. In some embodiments, this may involve a driver selecting an automated drift driving mode (e.g. by pushing an automated drift driving button). In other embodiments, the autonomous driving system may have default control of the vehicle after the stable drift has been initiated (i.e. no driver intervention is required to cede control to the autonomous driving system). As alluded to above, the autonomous driving system may use a closed-loop system to control the vehicle during the drift. The closed-loop system may use control laws involving sideslip, wheel speed, yaw rate, and other vehicle operation states in order to keep the vehicle controllable during the drift. In certain embodiments, the closed-loop system may use non-linear control theory. In other embodiments, the closed-loop controller may use Model Predictive Control (MPC). In some embodiments, the closed-loop system may use a closed-loop controller, e.g. drift control circuit210, to send control signals to two or more actuators of the vehicle in order to control the vehicle during the drift. For example, drift control circuit210may send control signals to the throttle, steering, brake and clutch actuators of vehicle10in order to control the vehicle during the drift. As alluded to above, the autonomous driving system will control the vehicle in a manner that prevents the vehicle from entering an unstable drift. In some embodiments, the autonomous driving system may utilize an understanding of a vehicle state space in order to accomplish this. For example, the autonomous driving system may correlate vehicle states to the vehicle state space to ensure that the vehicle avoids the unstable drift region. While the autonomous driving system is controlling the vehicle in a stable drift, at operation604, signals associated with a driver of the vehicle performing simulated drift maneuvers on one or more interfaces of the vehicle, are obtained. 
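One of the signals obtained at operation 604 might be a clutch-kick attempt. Purely as an illustration, a short window of clutch pedal samples could be screened as follows; the thresholds, the sample format, and the function name are assumptions introduced here and are not part of the disclosure.

```python
def looks_like_clutch_kick(clutch_positions, dt, max_duration=0.5, depth=0.8):
    """clutch_positions: recent samples in [0, 1], where 1 is fully depressed.
    A 'kick' is a quick press past `depth` followed by a release, all within
    roughly `max_duration` seconds. All thresholds here are illustrative."""
    window = max(1, int(max_duration / dt))
    recent = list(clutch_positions)[-window:]
    pressed = [i for i, p in enumerate(recent) if p >= depth]
    if not pressed:
        return False
    released_now = recent[-1] <= 0.1
    return released_now and pressed[-1] < len(recent) - 1
```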
As described above, vehicle interfaces may include the steering wheel, the accelerator pedal, the brake pedal, a hand brake, a clutch pedal, the gear shift lever of a manual transmission vehicle, or any other interface which allows a driver to interact with the motive systems of a vehicle. In some embodiments, drift control circuit210may obtain the signals associated with a driver performing simulated drift maneuvers from sensors152of vehicle10. A simulated drift maneuver is a manipulation of a vehicle interface by a driver which may communicate the driver's desire to (1) drift more aggressively (e.g. higher sideslip angle, higher yaw rate, higher linear velocity, etc.); (2) drift less aggressively (e.g. lower sideslip angle, lower yaw rate, lower linear velocity, etc.); or (3) exit the drift. As alluded to above, performance of a simulated drift maneuver is decoupled from direct manual control of the vehicle (corrected or otherwise). Put another way, while an autonomous control system maintains control of the vehicle throughout the drift, a driver may perform simulated drift maneuvers to communicate their desire to enter a particular drift condition. As an example, an inexperienced driver may attempt a clutch kick which would be unsuccessful if he/she had direct manual control of the vehicle (e.g. the driver released the clutch pedal too slowly, or the driver failed to raise the engine speed in combination with the clutch kick). However, the imperfect clutch kick may still communicate the driver's desire to increase the aggressiveness of the drift to the autonomous driving system. Put another way, the autonomous driving system may understand what the driver intended to do—perform a clutch kick in order to increase the aggressiveness of the drift—and the autonomous system may adjust the drift condition of the vehicle accordingly. Examples of simulated drift maneuvers may include counter-steering, increasing pressure on the accelerator pedal (i.e. increasing throttle), engaging the hand brake (e.g. in an attempt at a handbrake turn), and clutch kicking. As will be described in more detail below, the aforementioned examples may communicate a driver's desire to enter a more aggressive drift. However, simulated drift maneuvers may also communicate a driver's desire to enter a less aggressive drift, or exit a drift. For example, applying gradual pressure on the brake pedal, or releasing pressure on the accelerator pedal are simulated drift maneuvers that may communicate a driver's desire to enter a less aggressive drift, or exit a drift. Finally, it should be understood that a driver may perform multiple simulated drift maneuvers at the same time, or in quick succession. For example, a driver may begin to counter-steer while applying steady pressure on the accelerator pedal, and then attempt a clutch kick while pulsing the accelerator pedal, all while continuing to counter-steer. As alluded to above, sensors152of vehicle10may detect these simulated drift maneuvers and communicate them to drift control circuit210. At operation606, one or more simulated drift maneuvers are interpreted as a request for a desired drift condition. In some embodiments, drift control circuit210may make this interpretation. Similar to a vehicle state, a drift condition is a set of operational conditions associated with a vehicle while drifting. For example, while drifting at the apex of a first curve, a vehicle may have a sideslip angle of 1.4 rad, a yaw rate of −1.2 rad/s, and a linear velocity of 15 m/s. 
While drifting at the apex of a second curve, the vehicle may have sideslip angle of 1.9 rad, a yaw rate of 1.7 rad/s, and a linear velocity of 11 m/s. In these examples, the vehicle's drift condition is represented by three parameters: sideslip angle, yaw rate, and linear velocity. However, in other embodiments a vehicle's drift condition may be represented by n parameters. During operation606, different simulated drift maneuvers may be interpreted as requests for different desired drift conditions. For example, increased counter-steering, increased throttle, and the performance of a clutch kick may be interpreted as requests for a more aggressive drift condition (i.e. higher sideslip angle, higher yaw rate, higher linear velocity etc.). By contrast, decreased counter-steering, decreased throttle, and gradual braking may be interpreted as requests for a less aggressive drift condition (i.e. lower sideslip angle, lower yaw rate, lower linear velocity, etc.). In addition, it should be understood that (1) simulated drift maneuvers can be nuanced, and may be interpreted as such; and (2) multiple simulated drift maneuvers may be interpreted at once. For example, a relatively gradual increase in counter-steering may be interpreted as a request for aggressive drift condition 1, while a more dramatic increase in counter-steering combined with an increase in throttle may be interpreted as a request for aggressive drift condition 2, while an increase in counter-steering combined with a clutch kick may be interpreted as a request for aggressive drift condition 3. When effectuated, each of the aforementioned aggressive drift conditions may provide a different experience for the driver. For example, aggressive drift condition 2 may have a relatively higher linear velocity than aggressive drift condition 3, but a lower yaw rate. Accordingly, by providing a driver with multiple vehicle interfaces on which to perform simulated drift maneuvers (e.g. steering wheel, accelerator pedal, clutch pedal, brake pedal etc.), embodiments of the disclosed technology allow the driver to communicate a broader and more nuanced range of desired drift conditions than would a system which only provides one or two vehicle interfaces to a driver on which to perform simulated drift maneuvers. In some embodiments, simulated drift maneuvers may be interpreted using learned models built from data generated by expert human drivers performing various drift maneuvers. More specifically, learned models may be built which correlate sensor data generated by expert drivers (i.e. steering angle, time derivatives of steering angle, accelerator/clutch pedal position, time derivatives of accelerator/clutch pedal position, etc.) with the drift condition of a vehicle (i.e. sideslip angle, yaw rate, linear velocity, etc.). These models may then be used to interpret simulated drift maneuvers performed by another driver. In some embodiments, vehicle state data and environmental situation data may be considered when interpreting simulated drift maneuvers. For example, knowledge of the sideslip angle of the vehicle may be required to determine whether a driver is counter-steering, or steering into a turn. Moreover, depending on the aggressiveness of the current drift (e.g. sideslip angle, yaw rate, linear velocity, etc.), a driver's counter-steering at a particular rate may communicate a desire to increase, or decrease the aggressiveness of the drift. 
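As the passage above and the passage that follows both note, context matters in this interpretation, and the disclosure contemplates learned models for it. Purely for illustration, the basic mapping from several simultaneous simulated maneuvers to one desired drift condition can be sketched as a hand-written rule; every coefficient below is a placeholder.

```python
def interpret_maneuvers(current, counter_steer_rate, throttle_delta,
                        clutch_kick, braking):
    """current = (sideslip, yaw_rate, speed). Positive counter-steer rate, added
    throttle and a clutch kick push toward a more aggressive drift; braking
    pushes toward a less aggressive one. All coefficients are illustrative."""
    sideslip, yaw_rate, speed = current
    aggressiveness = (0.5 * counter_steer_rate
                      + 0.8 * throttle_delta
                      + (1.0 if clutch_kick else 0.0)
                      - 1.2 * braking)
    return (sideslip + 0.10 * aggressiveness,
            yaw_rate + 0.15 * aggressiveness,
            speed + 1.0 * aggressiveness)
```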
Likewise, road features, such as road path, may be used to interpret a simulated drift maneuver. For example, a particular simulated drift maneuver may be interpreted differently when the vehicle is approaching a curve in the road, than it would while at the apex of the curve. At operation608, the desired drift condition is limited to a stable drift condition which approximates the desired drift condition, i.e. the “stabilized desired drift condition.” Put another way the stabilized desired drift condition is a stable drift condition which approximates the desired drift condition. In some embodiments, the stabilized desired drift condition may be the closest possible stable drift condition. In other embodiments, the stabilized desired drift condition need not be the closest possible stable drift condition. As will be discussed in greater detail below, the desired drift condition may be limited across various parameters. For example, the desired drift condition may be limited to sideslip angles of less than or equal to 1.2 rad, yaw rates of less than or equal to 0.9 rad/s, and linear velocities of less than or equal to 13.5 m/s. In some embodiments, the desired drift condition may be limited by correlating it to a vehicle state space, e.g. vehicle state space400. For example, if the desired drift condition lies within, or at the boundary of a stable drift region (e.g. stable drift region420), the desired drift condition may not be limited. If however, the desired drift condition lies in an unstable drift region (e.g. unstable drift region430), the desired drift condition may be limited to the closest possible drift condition in the stable drift region of a vehicle state space. In some embodiments, vehicle state data and environmental situation data may be considered when limiting the desired drift condition. For example, factors such as vehicle speed, vehicle trajectory, road path, and obstacles in the vehicle's proximity may be considered when limiting the desired drift condition. Put another way, the desired drift condition may not only be limited to a stable drift condition, but a stable drift condition which meets a set of safety objectives (e.g. collision avoidance, staying on the road, etc.). Accordingly, the stabilized desired drift condition would not only be a drift condition which is controllable, but also one which meets a set of safety objectives. As described above, safety objectives may be determined by inputting vehicle state data and environmental situation data into pre-selected protocols. These protocols may be selected by different entities, such as the driver, an autonomous driving system, or the vehicle manufacturer. At operation610, control of the vehicle is optimized in order to achieve the stabilized desired drift condition. Control may be optimized across two or more actuators of the vehicle, such as the throttle, steering, clutch, and brake actuators of vehicle10. As alluded to above, the autonomous driving system may use a closed-loop controller, e.g. drift control circuit210, to optimize control of the vehicle in order to achieve the stabilized desired drift condition. In certain embodiments, the closed-loop controller may use a non-linear model predictive control (NMPC) framework in order to optimize control of the vehicle. Put another way, based on vehicle state and environmental situation, the system may use the NMPC framework to determine which set of vehicle actuators to use in order to achieve the stabilized desired drift condition. 
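In its simplest form, the limiting performed at operation 608 is a per-parameter clamp to the example bounds quoted above; a fuller implementation would instead project the desired drift condition onto the stable drift region of the vehicle state space and also honor the safety objectives just described. The clamped result is what the optimization at operation 610 then works to achieve. The sketch below is illustrative only.

```python
import math

def stabilize(desired, limits=(1.2, 0.9, 13.5)):
    """Limit the magnitude of each parameter of the desired drift condition
    (sideslip rad, yaw rate rad/s, linear velocity m/s) to the example bounds."""
    return tuple(math.copysign(min(abs(value), limit), value)
                 for value, limit in zip(desired, limits))

# A desired condition of (1.5 rad, 1.4 rad/s, 12 m/s) is limited to (1.2, 0.9, 12.0).
stabilized = stabilize((1.5, 1.4, 12.0))
```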
In some embodiments, learned models built from data generated by expert human drivers may be used to perform this optimization. Specifically, these models may learn how expert human drivers engage different vehicle inputs/actuators in order to achieve different drift conditions. In some embodiments, these learned models may be used in combination with an NMPC framework in order to optimize control of the vehicle. FIGS.7A and7Billustrate an example drift controller architecture. In the illustrated example, drift controller700receives inputs from steering angle sensor710, accelerator pedal position sensor712, brake pedal position sensor714, and clutch pedal position sensor716. As described above, steering angle sensor710may be coupled to the steering wheel of vehicle10, and may be connected to drift controller700by wire. Likewise, accelerator pedal position sensor712, brake pedal position sensor714, and clutch pedal position sensor716may be coupled to the accelerator, brake, and clutch pedals of vehicle10respectively, and may be connected to drift controller700by wire. Signals from the aforementioned sensors (i.e. signals associated with simulated drift maneuvers performed by a driver of the vehicle) are input into interpretation function720. Interpretation function720may be a linear or non-linear function. In some embodiments, interpretation function720may be derived using data generated by expert human drivers performing various drift maneuvers. For example, learned models may be built which correlate sensor inputs generated by these expert drivers, with changes in the drift condition of a vehicle. These models may then be used to interpret sensor inputs generated by other drivers. Based on the aforementioned inputs, interpretation function720outputs a desired drift condition. In this example, the desired drift condition has three parameters, sideslip angle (rad), yaw rate (rad/s), and linear velocity (m/s). In other embodiments, the desired drift condition may have additional parameters, such as slip angle of the rear tires, linear acceleration, etc. Although not pictured, as alluded to above, interpretation function720may also receive vehicle state data and environmental situation data as inputs. The desired drift condition is then input into stabilizing function730. Stabilizing function730limits the desired drift condition to a stable drift condition, i.e. the “stabilized desired drift condition.” Put another way the stabilized desired drift condition is a stable drift condition which approximates the desired drift condition. In some embodiments, the stabilized desired drift condition may be the closest possible stable drift condition. In other embodiments, the stabilized desired drift condition need not be the closest possible stable drift condition. Stabilizing function730limits the desired drift condition by limiting various operation state parameters. In the illustrated example, stabilizing function730may limit any combination of, sideslip angle, yaw rate, and linear velocity such that the resulting output is a stable drift condition. For example, stabilizing function730may limit the desired drift condition to sideslip angles of less than or equal to 1.2 rad, yaw rates of less than or equal to 0.9 rad/s, and linear velocities of less than or equal to 13.5 m/s. 
Thus, if interpretation function720outputs a desired drift condition with a sideslip angle of 1.5 rad, yaw rate of 1.4 rad/s, and linear velocity of 12 m/s, stabilizing function730will output a stabilized desired drift condition with a sideslip angle of 1.2 rad, a yaw rate of 0.9 rad/s, and a linear velocity of 12 m/s. In some embodiments, stabilizing function730may limit the desired drift condition by correlating it to a vehicle state space. For example, if the desired drift condition lies within, or at the boundary of the stable drift region (e.g. stable drift region420), stabilizing function730may not limit the desired drift condition. If however, the desired drift condition lies inside the unstable drift region (e.g. unstable drift region430), stabilizing function730may limit the desired drift condition to the condition on the stable drift region which is closest to the desired drift condition. Put another way, the output of stabilizing function730would be the drift condition in the stable drift region which is closest to the desired drift condition in a vehicle state space. Although not pictured, as alluded to above, stabilizing function730may also receive vehicle state data and environmental situation data as inputs. Accordingly, stabilizing function730may not just limit the desired drift condition to a stable drift condition, but a stable drift condition which meets a set of safety objectives, such as collision avoidance and staying on the road. The stabilized desired drift condition is input into optimization function740. In the illustrated example, optimization function740optimizes control of vehicle10across four actuators (steering actuator750, throttle actuator752, clutch actuator754, brake actuator756) in order to achieve the stabilized desired drift condition. However, in other embodiments, optimization function740may optimize control of vehicle10across n actuators. The output of optimization function740is a set of drift control signals sent to the actuators of the vehicle. For example, drift control signals sent to steering actuator750may include instructions to increase counter-steering in order to increase the aggressiveness of a drift, or decrease counter-steering in order to decrease the aggressiveness of a drift, or exit a drift. Likewise, drift control signals sent to throttle actuator752may include instructions to increase throttle in order to increase the aggressiveness of a drift, or decrease throttle in order to decrease the aggressiveness of a drift, or exit a drift. Drift control signals sent to clutch actuator754may include instructions to disengage and then rapidly reengage the clutch in order to use engine momentum to increase the aggressiveness of a drift. Further, drift control signals sent to brake actuator756may include instructions to increase braking in order to decrease the aggressiveness of a drift, or exit a drift. Finally, optimization function740may send multiple drift control signals simultaneously, or in quick succession to any combination of steering actuator750, throttle actuator752, clutch actuator754, and brake actuator756. 
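The set of drift control signals that optimization function740 emits can be pictured as a small command bundle fanned out to the four actuators, as in the Python sketch below; the class, the actuators mapping, and its command() method are assumptions introduced here for illustration only. The clutch-kick scenario described next would populate such a bundle with increased counter-steering, increased throttle, and a disengage/reengage clutch command.

```python
from dataclasses import dataclass

@dataclass
class DriftControlSignals:
    steering: float   # positive values = more counter-steering
    throttle: float   # 0..1
    clutch: float     # 0 = fully engaged, 1 = fully disengaged
    brake: float      # 0..1

def send_drift_signals(signals: DriftControlSignals, actuators) -> None:
    """Fan the optimizer's output out to the steering, throttle, clutch and brake
    actuators; `actuators` is assumed to map names to objects with command()."""
    actuators["steering"].command(signals.steering)
    actuators["throttle"].command(signals.throttle)
    actuators["clutch"].command(signals.clutch)
    actuators["brake"].command(signals.brake)
```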
For example, if optimization function740determines that the stabilized desired drift condition is best actualized by a clutch kick, optimization function740may output control signals to steering actuator750(increase counter-steering), throttle actuator752(increase throttle), and clutch actuator754(disengage and rapidly reengage the clutch in connection with increasing the throttle) in order to actualize the stabilized desired drift condition. As alluded to above, optimization function740may utilize a non-linear model predictive control (NMPC) framework in order to optimize control of the vehicle. Specifically, the NMPC framework may determine which set of actuators to use in order to actualize the stabilized desired drift condition, based in part on vehicle state and environmental situation data. In some embodiments, optimization function740may utilize learned models built from data generated by expert human drivers (as described above). In some embodiments, optimization function740may utilize these learned models in combination with an NMPC framework. Referring now toFIG.8, computing system800may represent, for example, computing or processing capabilities found within desktop, laptop and notebook computers; hand-held computing devices (PDA's, smart phones, cell phones, palmtops, etc.); mainframes, supercomputers, workstations or servers; or any other type of special-purpose or general-purpose computing devices as may be desirable or appropriate for a given application or environment, such as for example, one or more of the elements or circuits illustrated inFIGS.1A and1Band described herein. Computing system800might also represent computing capabilities embedded within or otherwise available to a given device. For example, a computing system might be found in other electronic devices such as, for example, digital cameras, navigation systems, cellular telephones, portable computing devices, modems, routers, WAPs, terminals and other electronic devices that might include some form of processing capability. Computing system800might include, for example, one or more processors, controllers, control modules, or other processing devices, such as a processor804. Processor804might be implemented using a general-purpose or special-purpose processing engine such as, for example, a microprocessor (whether single-, dual- or multi-core processor), signal processor, graphics processor (e.g., GPU) controller, or other control logic. In the illustrated example, processor804is connected to a bus802, although any communication medium can be used to facilitate interaction with other components of computing system800or to communicate externally. Computing system800might also include one or more memory modules, simply referred to herein as main memory808. For example, in some embodiments random access memory (RAM) or other dynamic memory might be used for storing information and instructions to be executed by processor804. Main memory808might also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor804. Computing system800might likewise include a read only memory (“ROM”) or other static storage device coupled to bus802for storing static information and instructions for processor804. The computing system800might also include one or more various forms of information storage mechanism810, which might include, for example, a media drive812and a storage unit interface820.
The media drive812might include a drive or other mechanism to support fixed or removable storage media814. For example, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), a flash drive, or other removable or fixed media drive might be provided. Accordingly, storage media814might include, for example, a hard disk, a floppy disk, magnetic tape, cartridge, optical disk, a CD or DVD, or other fixed or removable medium that is read by, written to or accessed by media drive812. As these examples illustrate, the storage media814can include a computer usable storage medium having stored therein computer software or data. In alternative embodiments, information storage mechanism810might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into computing system800. Such instrumentalities might include, for example, a fixed or removable storage unit822and an interface820. Examples of such storage units822and interfaces820can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, a flash drive and associated slot (for example, a USB drive), a PCMCIA slot and card, and other fixed or removable storage units822and interfaces820that allow software and data to be transferred from the storage unit822to computing system800. Computing system800might also include a communications interface824. Communications interface824might be used to allow software and data to be transferred between computing system800and external devices. Examples of communications interface824might include a modem or softmodem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX, Bluetooth® or other interface), a communications port (such as for example, a USB port, IR port, RS232 port, or other port), or other communications interface. Software and data transferred via communications interface824might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface824. These signals might be provided to communications interface824via a channel828. This channel828might carry signals and might be implemented using a wired or wireless communication medium. Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels. In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to media such as, for example, memory808, storage unit820, media814, and channel828. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. Such instructions embodied on the medium, are generally referred to as “computer program code” or a “computer program product” (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the computing system800to perform features or functions of the disclosed technology as discussed herein. 
It should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described. Instead, they can be applied, alone or in various combinations, to one or more other embodiments, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the present application should not be limited by any of the above-described exemplary embodiments. Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing, the term “including” should be read as meaning “including, without limitation” or the like. The term “example” is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof. The terms “a” or “an” should be read as meaning “at least one,” “one or more” or the like; and adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time. Instead, they should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term “component” does not imply that the aspects or functionality described or claimed as part of the component are all configured in a common package. Indeed, any or all of the various aspects of a component, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations. Additionally, the various embodiments set forth herein are described in terms of exemplary block diagrams, flow charts and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration. | 72,321 |
11858498 | DETAILED DESCRIPTION Overview The present disclosure pertains to autonomous vehicle loading with smart transportation platforms, as well as methods of use. Disclosed is the use of autonomous vehicle capabilities (either semi-autonomous or fully autonomous) to improve vehicle loading accuracy and throughput and to mitigate damage to the vehicle in which items are being loaded. In one example scenario, a process to load a vehicle autonomously with a driver inside may involve the driver selecting the vehicle that needs to be loaded and shipped. The driver enters the vehicle and drives it to a transportation platform (could be a trailer, train, ship, or the like). The driver can actuate a self-loading system of the vehicle. For example, the driver can select a button provided through an infotainment system of the vehicle to initiate a self-load/park procedure for the vehicle. In response, an autonomous vehicle controller can assume command and cause the vehicle to load/park using the sensors that are installed in the vehicle. For example, vehicle sensors can be used to detect the sides of the rail car or trailer and adjacent vehicles (such as another vehicle in front of itself). When the vehicle reaches the load/parking spot, the driver turns off the vehicle and leaves the vehicle. A first process to load the vehicle autonomously onto a transportation platform without the driver inside (a driver can be present if desired) can include a dispatch service creating an electronic loading schedule. The electronic schedule can include specifying an assignment of cars to the transportation platform, which can be defined by GPS location information. An identifier for the transportation platform can also be included. Dispatch connects to vehicles over a network connection (could be long or short-range). Vehicles can autonomously navigate to the defined location to be loaded according to the schedule provided by dispatch. An autonomous controller of the vehicle can use GPS and other sensors (camera) to read visual markers on the transportation platform to confirm that the vehicle is in the correct location. In one example, a staging coordinator, such as a human, can confirm that the subject vehicle is the correct vehicle to be loaded onto a transportation platform. The staging coordinator may push a button on the infotainment system to have the vehicle self-load/park. Vehicle sensors can detect the sides of the transportation platform and the vehicle in front of itself during autonomous movement. When the vehicle reaches a designated parking spot, the vehicle may then shut down automatically. In another example, verification by the staging coordinator is not required and the vehicle can initiate a self-load/park. A second process to load the vehicle autonomously without the driver inside can involve dispatch transmitting instructions to a vehicle to initiate a self-load program. The vehicle turns on and searches for an assigned transportation platform using coordinates provided in the instructions. The vehicle can autonomously navigate to a general location of the transportation platform using the coordinates. A location beacon can be activated on the transportation platform to help vehicles find the transportation platform. The vehicle controller can use GPS and other sensors (camera) to read visual markers on the transportation platform to confirm the correct location.
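As a concrete illustration of the electronic loading schedule described above, the sketch below builds a schedule that assigns vehicles to a transportation platform identified by an identifier and GPS coordinates. The field names (vin, platform_id, lat, lon, spot) are assumptions for illustration, not a defined message format.

```python
# Minimal sketch of an electronic loading schedule; field names are hypothetical.
import json

def create_loading_schedule(assignments):
    """assignments: iterable of (vin, platform_id, lat, lon, spot) tuples."""
    return {
        "entries": [
            {"vin": vin,
             "platform_id": platform_id,
             "location": {"lat": lat, "lon": lon},
             "spot": spot}
            for vin, platform_id, lat, lon, spot in assignments
        ]
    }

schedule = create_loading_schedule([
    ("VIN-0001", "TRAILER-42", 42.3149, -83.2090, "A1"),
    ("VIN-0002", "TRAILER-42", 42.3149, -83.2090, "A2"),
])
print(json.dumps(schedule, indent=2))   # transmitted to the vehicles over the network
```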
The vehicle arrives at the transportation platform and communicates with a controller on the transportation platform car to verify that the vehicle is at the correct transportation platform. The transportation platform can transmit instructions to the vehicle where to position itself for loading and guide it with installed sensors. When the vehicle reaches the parking spot the vehicle can shut down. Illustrative Embodiments Turning now to the drawings,FIG.1depicts an illustrative architecture100in which techniques and structures of the present disclosure may be implemented. The architecture100includes a vehicle102, a transportation platform104, an orchestration service106, and a network108. Some or all of these components in the architecture100can communicate with one another using the network108. The network108can include combinations of networks that enable the components in the architecture100to communicate with one another. The network108may include any one or a combination of multiple different types of networks, such as cellular, cable, the Internet, wireless networks, and other private and/or public networks. The network can include both short and long-range wireless networks. In one scenario, the vehicle102can be operated fully- or semi-autonomously. For example, the vehicle102can operate fully autonomously when no driver is present. The vehicle102can be operated in a semi-autonomous manner when a user is present in the vehicle. As will be discussed, a user can drive the vehicle102and remain present until an autonomous loading and parking procedure is complete. This allows the user to remain in control of the vehicle in case of an emergency, such as when the autonomous function of the vehicle errs. In other instances, the vehicle102can be configured to operate entirely without user involvement. For example, the vehicle102can include autonomous features that interact with the transportation platform104and/or the orchestration service106to successfully load the vehicle102onto the transportation platform104in a designated space, as will be discussed in various use cases herein. The vehicle can comprise an autonomous vehicle controller (hereinafter AV controller120) that can include a processor122and memory124. The processor122executes instructions stored in memory124to perform the functions and methods attributed to the vehicle102. When referring to actions performed by the vehicle102, the AV controller120, and/or the processor122, this includes the execution of instructions by the processor122stored in memory124. A communications interface126can be used by the processor122to transmit and/or receive data over the network108. The vehicle102can comprise a sensor platform128that can include sensors directed to attachment sites on the vehicle102. The sensor platform128can include various sensors mounted on the vehicle, such as cameras, LIDAR (light imaging and ranging), IR (infrared), ultrasonic, location sensing (such as GPS), and the like. The AV controller120can be configured to receive and process signals or other data from each of the individual sensors of the sensor platform128to assist in performing any of the autonomous or semi-autonomous vehicle loading and/or parking procedures disclosed herein. The transportation platform104can include any suitable structure that is capable of receiving one or more vehicles for transportation. For example, the transportation platform104can include a storage container, a trailer, a ship, or other similar platforms.
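A minimal sketch of the vehicle-side arrangement just described, with an AV controller owning a sensor platform and a communications interface, is shown below. The class and field names are illustrative assumptions; they are not the actual software interfaces of the numbered components.

```python
# Illustrative sketch of the vehicle-side architecture; names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SensorPlatform:
    sensors: List[str] = field(default_factory=lambda: ["camera", "lidar", "ir", "ultrasonic", "gps"])

    def read_all(self) -> Dict[str, float]:
        # Placeholder readings; a real platform would return frames, point clouds, etc.
        return {name: 0.0 for name in self.sensors}

@dataclass
class AVController:
    vin: str
    sensor_platform: SensorPlatform = field(default_factory=SensorPlatform)

    def send(self, message: dict) -> None:
        print("over the network:", message)    # stands in for the communications interface

av = AVController(vin="VIN-0001")
print(av.sensor_platform.read_all())
av.send({"type": "STATUS", "vin": av.vin})
```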
In general, the vehicle102can be loaded onto the transportation platform104for subsequent transportation to a delivery location. In one example, the transportation platform104can include a truck and trailer that includes multiple slots or parking spots where the vehicle102can be located during transportation. The transportation platform104includes a physical structure that supports the vehicle102. A trailer can include rails or other ramps used to support the vehicle. The vehicle can be driven onto the trailer and secured for transport. During loading, care is taken to ensure that the vehicle does not drive off of the rails or other ramps, which may damage the vehicle. In another example, the transportation platform104could include a shipping container with sidewalls. The vehicle can be driven into the shipping container and secured for transport, making sure that during the loading process, a distance between the sidewalls and the vehicle is maintained to avoid damage to the vehicle and/or the transportation platform104. In another example, the transportation platform104could include a ferry or ship that has a plurality of parking spots located on a deck or level of the ship. The vehicle can be driven into a specific parking spot and secured for transport, making sure that during the loading process the vehicle does not impact other vehicles. In general, in any loading process where more than one vehicle is loaded on the transportation platform104, care should be taken to ensure that the vehicle does not hit another vehicle being transported on the same transportation platform. As a general matter, any loading procedure for the vehicle onto a transportation platform can involve ensuring that the vehicle avoids damage due to improper loading, be it from the vehicle being improperly driven onto the transportation platform, the vehicle hitting a structure of the transportation platform, and/or the vehicle hitting another vehicle. The transportation platform104can comprise a parking spot130for the vehicle102. As noted above, the transportation platform104can include a plurality of parking spots for multiple vehicles. The transportation platform104can also comprise a transportation platform controller (hereinafter platform controller132), which can include a processor134and memory136. The processor134executes instructions stored in memory136to perform the functions and methods attributed to the transportation platform104. When referring to actions performed by the transportation platform104, the platform controller132, and/or the processor134, this includes the execution of instructions by the processor134stored in memory136. A communications interface138can be used by the processor134to transmit and/or receive data over the network108. The transportation platform104can also include a sensor platform140. The sensor platform140can include various sensors mounted on the transportation platform104, such as cameras, LIDAR (light imaging and ranging), IR (infrared), ultrasonic, location sensing (such as GPS), and the like. Examples of transportation platform sensors will be described in various use cases below. The orchestration service106can function as a dispatch service that orchestrates processes used in the loading of the vehicle102onto the transportation platform104. The orchestration service106can include a server or cloud that is programmed to provide vehicle loading and logistics methods disclosed herein.
The orchestration service106can communicate with the vehicle102and/or the transportation platform104over the network108using any combination of hardware and/or software that would be known to one of ordinary skill in the art. In order to elucidate various vehicle loading methods enabled by the present disclosure, various scenarios are provided herein. Each of these scenarios is disclosed in flowchart format inFIGS.2-7. The scenarios will be discussed individually. It will be understood that these scenarios are provided for purposes of exemplifying use cases where the systems and methods can be deployed. These examples are not intended to be limiting. FIGS.2and3collectively illustrate a method involving a semi-autonomous vehicle loading scenario.FIG.2is a flowchart of the method that is schematically illustrated inFIG.3. It will be understood that some references toFIG.1may be included for context. The method includes the vehicle102(which includes a vehicle with autonomous driving capabilities) being loaded onto the transportation platform104. In this particular implementation, the transportation platform104is a trailer that includes parking spots for several vehicles. In general, the method includes a step202of a driver identifying the vehicle102as a vehicle that requires loading onto the trailer of the transportation platform104. The driver could be provided with a list of vehicles from the orchestration service106, for example. The driver can enter the vehicle102and drive it onto a ramp of the trailer in step204. The driver can activate the self-loading procedure by selecting a button302on a human-machine interface (HMI304) of the vehicle102in step206. This self-loading procedure can be activated when the vehicle is on the trailer or before the vehicle is driven up the ramp and onto the trailer. When activated, the AV controller120(seeFIG.1) may take over the remainder of the loading procedure by activating sensors of the sensor platform128(also seeFIG.1) in step208. For example, the AV controller120can activate cameras positioned on the vehicle102to obtain images of the trailer. Using image processing, the AV controller120can detect a driving path for the vehicle102that is converted into instructions used to autonomously navigate the vehicle102into position. For example, the AV controller120can use images to detect the edges of the ramp or rails, as well as adjacent vehicles (such as adjacent vehicle306) in step210. The AV controller120can cause the vehicle102to remain on these structures as it drives onto the trailer into the assigned spot. In some instances, the AV controller120can navigate the vehicle to a specific location on the transportation platform. For example, each spot on the trailer may be identified using a visual indicator, such as an icon, quick response (QR) code, barcode, or the like. The AV controller120can scan for the relevant visual indicator using camera images. When the vehicle arrives at its designated spot, the driver can turn off the engine in step212. In other instances, the AV controller120can turn off the vehicle engine when the designated spot has been reached. During parking, the AV controller120can also use proximity sensors to maintain a specified distance between the vehicle102and an adjacent vehicle in front of it on the transportation platform. As noted above, if the driver is present, the driver can maintain control of the vehicle to ensure that it does not impact the transportation platform and/or any adjacent vehicles.
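The following is a hedged, simulated sketch of the self-loading loop of steps 206 through 212: creep toward the designated spot while keeping clear of the vehicle ahead, then shut off the engine. The stub controller and its method names are assumptions standing in for the real perception and actuation stack.

```python
# Simulated sketch of the self-loading loop; distances and the stub controller
# are illustrative assumptions, not the disclosed implementation.
class SimulatedAVController:
    def __init__(self, distance_to_spot_m, distance_to_vehicle_ahead_m):
        self.distance_to_spot_m = distance_to_spot_m
        self.gap_m = distance_to_vehicle_ahead_m
        self.engine_on = True

    def creep_forward(self, step_m):
        self.distance_to_spot_m -= step_m
        self.gap_m -= step_m

    def at_designated_spot(self):
        return self.distance_to_spot_m <= 0.0

    def shut_off_engine(self):
        self.engine_on = False

def self_load(av, min_gap_m=0.5, step_m=0.25):
    # Creep along the detected ramp while keeping clear of the vehicle ahead,
    # until the designated spot (visual indicator) is reached.
    while not av.at_designated_spot() and av.gap_m - step_m > min_gap_m:
        av.creep_forward(step_m)
    av.shut_off_engine()   # the driver or the AV controller turns off the engine
    return av.at_designated_spot()

print(self_load(SimulatedAVController(distance_to_spot_m=3.0,
                                      distance_to_vehicle_ahead_m=4.0)))
```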
If the transportation platform were to be a shipping container or rail car rather than a trailer, the AV controller120can use the output of vehicle sensors to detect sides of the rail car or trailer and the vehicle in front of itself. FIGS.4and5collectively illustrate a method involving a semi-autonomous vehicle loading scenario.FIG.4is a flowchart of the method that is schematically illustrated inFIG.5. It will be understood that some references toFIG.1may be included for context. In general, the scenario depicted involves a dispatch service (e.g., orchestration service106) initiating a process for loading a vehicle onto a transportation platform. The orchestration service106can create an electronic loading schedule (assigns cars to transportation platforms and adds GPS location information for the transportation platforms) as in step402. The orchestration service106transmits the loading schedule to vehicles or users in anticipation of vehicle loading in step404. In step406, the AV controller120processes the loading schedule and activates the vehicle, causing it to autonomously navigate to a location of the assigned transportation platform (included in the loading schedule). In this example, the vehicle102is in a parking lot or other staging area502when it receives the loading schedule. The AV controller120causes the vehicle102to navigate a path504to the transportation platform104. The vehicle can use GPS and other sensors (cameras) to read visual markers on the trailer to confirm that it is in the correct location for loading in step408. In one example method, a staging coordinator (which can be a human) confirms that the vehicle102is in the correct location and should be loaded onto the trailer in step410. If confirmed, the method can include a step412of the staging coordinator pushing a button on the HMI of the vehicle to initiate an autonomous loading procedure. In an alternative method, which does not involve the staging coordinator, the method bypasses steps410and412. As noted above, the AV controller120may execute the loading procedure by activating sensors of the sensor platform128as in step414. For example, the AV controller120can activate cameras positioned on the vehicle102to obtain images of the trailer. Using image processing, the AV controller120can detect a driving path for the vehicle102that is converted into instructions used to autonomously navigate the vehicle102into position. For example, the AV controller120can use images to detect the edges of the ramp or rails, as well as adjacent vehicles. The AV controller120can cause the vehicle102to remain on these structures as it drives onto the trailer into the assigned spot. In some instances, the AV controller120can navigate the vehicle to a specific location on the transportation platform. For example, each spot on the trailer may be identified using a visual indicator, such as an icon, quick response (QR) code, barcode, or the like. The AV controller120can scan for the relevant visual indicator using camera images. When the vehicle arrives at its designated spot, the driver can turn off the engine in step416. In other instances, the AV controller120can turn off the vehicle engine when the designated spot has been reached. During parking, the AV controller120can also use proximity sensors to maintain a specified distance between the vehicle102and an adjacent vehicle in front of it on the transportation platform.
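As an illustration of the proximity-based distance keeping mentioned above, the sketch below scales creep speed down as the measured gap to the vehicle ahead approaches a specified minimum. The gap, speed, and scaling values are illustrative assumptions only.

```python
# Sketch of gap keeping during parking: slow smoothly as the gap closes, stop at the minimum.
def creep_speed_mps(proximity_distance_m, min_gap_m=0.5, max_speed_mps=1.0):
    margin = proximity_distance_m - min_gap_m
    if margin <= 0.0:
        return 0.0                                   # at or inside the specified gap: stop
    return min(max_speed_mps, 0.5 * margin)          # reduce speed as the gap closes

for distance in (3.0, 1.0, 0.6, 0.4):
    print(distance, creep_speed_mps(distance))
```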
As noted above, if the driver is present, the driver can maintain control of the vehicle to ensure that it does not impact the transportation platform and/or any adjacent vehicles. If the transportation platform were to be a shipping container or rail car rather than a trailer, the AV controller120can use the output of vehicle sensors to detect sides of the rail car or trailer and the vehicle in front of itself. FIGS.6and7collectively illustrate a method involving an autonomous vehicle loading scenario.FIG.6is a flowchart of the method that is schematically illustrated inFIG.7. It will be understood that some references toFIG.1may be included for context. In contrast with the method and schematic ofFIGS.4and5, this process is fully automated, allowing the vehicle to load without human intervention after dispatch. In general, the scenario depicted involves a dispatch service (e.g., orchestration service106) initiating a process for loading a vehicle onto a transportation platform. The orchestration service106can create an electronic loading schedule (assigns cars to transportation platforms and adds GPS location information for the transportation platforms). The orchestration service106transmits the loading schedule to vehicles or users in anticipation of vehicle loading in step602. In step604, the AV controller120processes the loading schedule and activates the vehicle, causing it to autonomously navigate to a general location of the assigned transportation platform (included in the loading schedule). In this example, the vehicle102is in a parking lot or other staging area702when it receives the loading schedule. The AV controller120causes the vehicle102to navigate a path704to a general location where the transportation platform104is located. The transportation platform104can comprise a beacon706that can be activated in step606and used to broadcast identifying information. That is, a trailer location beacon can be used to help vehicles find the transportation platform in a lot or other location. The AV controller120receives the signals from the beacon706over the air (e.g., using short-range wireless communications) and uses these signals to home in on the exact location of the transportation platform (the beacon signals could also include an identifier of the parking spot that the vehicle will occupy during transportation). The AV controller120can continue to use sensor platform signals or output to navigate the vehicle to the transportation platform104, following the beacon signals in step608. In one example method, when the vehicle arrives at the transportation platform104after following the beacon signals, the AV controller120can read visual indicators on the transportation platform104to confirm that the vehicle is at the correct location in step610. Also, when the vehicle arrives at the transportation platform104, the AV controller120can communicate with the platform controller132of the transportation platform104to verify that the vehicle is about to be loaded onto the correct transportation platform. The platform controller132can maintain a manifest or another electronic record of which vehicles are to be loaded. In one example, the AV controller120can transmit a vehicle identifier such as vehicle identification number (VIN) to the platform controller132. The platform controller132can check the VIN against the manifest to confirm that the vehicle should be loaded. The platform controller132can also maintain a schedule that indicates which vehicles should be loaded and in what order. 
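A minimal sketch of this platform-side check, in which the platform controller compares an announced VIN against its manifest and loading order, might look like the following. The data structures and the returned decision strings are assumptions for illustration.

```python
# Sketch of the platform controller's manifest and ordering check; hypothetical structures.
def check_vehicle(vin, manifest, loading_order, already_loaded):
    """Return 'load', 'wait', or 'reject' for a vehicle announcing its VIN."""
    if vin not in manifest:
        return "reject"                      # not assigned to this transportation platform
    expected = next((v for v in loading_order if v not in already_loaded), None)
    if expected == vin:
        return "load"
    return "wait"                            # another vehicle is scheduled to load first

manifest = {"VIN-0001", "VIN-0002", "VIN-0003"}
order = ["VIN-0001", "VIN-0002", "VIN-0003"]
print(check_vehicle("VIN-0002", manifest, order, already_loaded=set()))   # wait
print(check_vehicle("VIN-0001", manifest, order, already_loaded=set()))   # load
```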
If the vehicle102is attempting to load out of order, the platform controller132may transmit a signal or message to the AV controller120to indicate that the vehicle102should wait. For example, if the vehicle that is scheduled to be loaded before the vehicle102has not yet arrived or been loaded, the platform controller132can transmit a message to the AV controller120. The AV controller120can cause the vehicle102to autonomously move to a holding location near the transportation platform104to wait for the other vehicle to load. When the other vehicle has been loaded, the platform controller132can transmit another message to the AV controller120to attempt autonomous/self-loading again. If the other vehicle does not arrive or cannot be loaded, the platform controller132can allow the vehicle102to load. This missing vehicle can be reported back to the orchestration service106by the platform controller132in a message transmitted over the network108. When it is confirmed for the vehicle102to self-load, the AV controller120may execute the loading procedure by activating sensors of the sensor platform128as in step612. In one example, the platform controller132can tell the AV controller120where to position the vehicle for loading and guide the vehicle with installed sensors. For example, the transportation platform104can comprise sensor(s), such as a sensor708, that emit signals that can be followed by the AV controller120. The sensor708could include an ultrasonic sensor that emits an ultrasonic signal. The sensor platform of the vehicle102can include a receiver that receives the ultrasonic signals. The AV controller120aligns the receiver with the ultrasonic signal emitted by the sensor708when navigating the vehicle102. When the vehicle arrives at its designated spot, the driver can turn off the engine in step614. In other instances, the AV controller120can turn off the vehicle engine when the designated spot has been reached. As with other methods, during parking, the AV controller120can also use proximity sensors to maintain a specified distance between the vehicle102and an adjacent vehicle in front of it on the transportation platform. As noted above, if the driver is present, the driver can maintain control of the vehicle to ensure that it does not impact the transportation platform and/or any adjacent vehicles. If the transportation platform were to be a shipping container or rail car rather than a trailer, the AV controller120can use the output of vehicle sensors to detect sides of the rail car or trailer and the vehicle in front of itself. FIG.8is a flowchart of an example method. The method can include a step802of receiving a request to activate a self-loading procedure for an autonomous vehicle. As noted above, the request can be determined from user input obtained through a human-machine interface of the vehicle. In another example, the request can be determined from a dispatch service in a message transmitted over a network. The request can include GPS coordinates that identify a location of a transportation platform that the vehicle will be loaded onto for transport, such as a trailer, shipping container, or rail car—just to name a few. The request can also include information used by the vehicle to identify the transportation platform. This can include information that can be visually apprehended or read from a visual indicator placed on the transportation platform.
In sum, the request or message from the dispatch service can comprise an identifier for the transportation platform and a location of the transportation platform. As noted above, rather than being transmitted by the dispatch service, the same information can be transmitted to the vehicle by a transportation platform controller. The method can include a step804of executing the self-loading procedure by an autonomous vehicle controller. The self-loading procedure can involve a step806of causing the autonomous vehicle to navigate to a transportation platform using the location information provided to the vehicle. Next, the method can include a step808of identifying the transportation platform using output from a sensor platform of the autonomous vehicle. For example, one method for identifying the transportation platform can include reading a visual indicator on the transportation platform. This could include a barcode or QR code printed somewhere on the trailer in a location that can be viewed by a camera of the vehicle. Once the vehicle confirms that it has arrived at the assigned transportation platform, the method can include a step810of causing the autonomous vehicle to navigate onto or into the transportation platform and park at a parking spot of the transportation platform designated for the autonomous vehicle. In one configuration, if the vehicle inadvertently arrives at an incorrect location, a controller of the vehicle could be configured to broadcast a message to the transportation platform to begin beacon broadcasting (assuming the transportation platform has been so equipped). The transportation platform can cause its beacon to begin transmitting a signal used by the vehicle to home in on the transportation platform. This may be advantageous in instances where the GPS coordinates for the transportation platform are errant or when the transportation platform may be in a different location than was initially expected by the dispatch service when the vehicle was initially dispatched for loading onto the transportation platform. The navigating and parking of step810can include determining, by the autonomous vehicle controller using the output of the sensor platform, physical structures of the transportation platform such as a ramp, rails, sidewalls, or other physical structures. Next, the method can include navigating the autonomous vehicle into the parking spot in such a way as to avoid the autonomous vehicle contacting the physical structures and space the autonomous vehicle away from an adjacent autonomous vehicle. Implementations of the systems, apparatuses, devices and methods disclosed herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed herein. Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general-purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. An implementation of the devices, systems and methods disclosed herein may communicate over a computer network. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims may not necessarily be limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims. While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the present disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments but should be defined only in accordance with the following claims and their equivalents. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the present disclosure to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate implementations may be used in any combination desired to form additional hybrid implementations of the present disclosure. For example, any of the functionality described with respect to a particular device or component may be performed by another device or component. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments may not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments. | 29,786 |
11858499 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS 1. Configurations of Vehicle Control Device and Vehicle With reference toFIG.1, the configuration of a vehicle control device10of the present embodiment and the configuration of a vehicle1on which the vehicle control device10is mounted will be described. The vehicle1includes a steering unit60including an electric power steering (EPS)61, a drive unit70including a transmission71and a not-illustrated drive portion (an electric motor, an engine, or the like), and a brake unit80including an electric parking brake (EPB)81and an electric servo brake (ESB)82. The transmission71has, as the shift positions for the forward direction, a drive (D) position and a brake (B) position which has a larger speed reduction ratio than the D position. The D position corresponds to a first forward position of the present disclosure, and the B position corresponds to a second forward position of the present disclosure. Note that the name of the second forward position may be a sport (S) position, a low (L) position, or the like instead of the B position. In addition, the transmission71has a reverse (R) position which is the shift position for the backward direction, a neutral (N) position, and a parking (P) position. The R position corresponds to a reverse position of the present disclosure. The vehicle1also includes surrounding cameras40(including a front camera, a rear camera, a right-side camera, and a left-side camera) that capture images of the surroundings of the vehicle1and a set of sonars41(including a set of front sonars, a set of rear sonars, a set of right-side sonars, and a set of left-side sonars) that detects target objects present around the vehicle1. The vehicle1further includes a speed sensor42that detects the traveling speed of the vehicle1, a steering-angle sensor43that detects the steering angle of the steering wheel (not illustrated), a brake-pedal sensor44that detects the degree of pressing-down of the brake pedal (not illustrated), an accelerator-pedal sensor45that detects the degree of pressing-down of the accelerator pedal (not illustrated), a shift switch46, an automatic parking switch47, and a display48. The shift switch46includes a P switch46a, an R switch46b, an N switch46c, and a D switch46dwhich are switches for switching the shift position of the transmission71. The D switch46dis a switch that gives an instruction to switch to the D position and the B position, and corresponds to a single operation element of the present disclosure. The automatic parking switch47gives an instruction to execute automatic parking which will be described later. The display48is, for example, a display audio (DA), a multi information display (MID), or the like. The vehicle control device10is a control unit including a processor20, memory, and a not-illustrated interface circuit. The vehicle control device10receives input of images captured by the surrounding cameras40, detection information on target objects detected by the set of sonars41, speed detection signals detected by the speed sensor42, steering-angle detection signals detected by the steering-angle sensor43, detection signals on the degree of pressing-down detected by the brake-pedal sensor44, detection signals on the degree of pressing-down detected by the accelerator-pedal sensor45, operation signals of the shift switch46, and operation signals of the automatic parking switch47. The vehicle control device10outputs control signals that control the display content on the display48. 
The vehicle control device10outputs control signals to control the operation of the steering unit60, the drive unit70, and the brake unit80, and detection signals of not-illustrated various sensors included in each unit60,70, or80are input to the vehicle control device10. The processor20reads and executes a control program31for the vehicle1stored in the memory30to control the operation of the vehicle1and functions as a shift-position-switching acceptance unit21, a driving control unit22, and an automatic-parking control unit23. The shift-position-switching acceptance unit21recognizes the operational condition of the switches46ato46dfrom the operation signals output from the shift switch46, and accepts switching to the shift position according to the operated switch. Then, the driving control unit22switches the shift position of the transmission71according to the switching operation of the shift position accepted by the shift-position-switching acceptance unit21. As for the D switch46d, when the D switch46dis operated in the state in which the shift position of the transmission71is set at D, the shift-position-switching acceptance unit21accepts this operation as the operation to switch to the B position. In addition, when the D switch46dis operated in the state in which the shift position of the transmission71is set at one of the P, R, and N positions, the shift-position-switching acceptance unit21accepts this operation as the operation to switch to the D position. With this configuration, it is impossible to switch to the B position without first switching to the D position when the shift position of the transmission71is set at one of the P, R, and N positions. In other words, in this specification, the driver can switch from the P, R, or N position to the B position only after switching to the D position. The driving control unit22recognizes the operation of the steering, the brake pedal, the accelerator pedal, the shift switch46, and the like by the driver of the vehicle1from detection signals of the steering-angle sensor43, the brake-pedal sensor44, the accelerator-pedal sensor45, the shift switch46, and the like. Then, according to the operation of these, the driving control unit22controls the operation of the steering unit60, the drive unit70, and the brake unit80to control the traveling of the vehicle1. The automatic-parking control unit23, when the automatic parking switch47is operated, executes automatic parking control which makes the vehicle1automatically travel to an empty slot110of a parking lot100and completes parking, as illustrated inFIG.2. InFIG.2, the driver of the vehicle1stops the vehicle1near the empty slot110(the state of T1), and operates the automatic parking switch47to give an instruction to execute the automatic parking. The automatic-parking control unit23executes the following processes in the automatic parking control. The automatic-parking control unit23first recognizes the parking-slot borders of the empty slot110from images captured by the surrounding cameras40to recognize the real space position of the empty slot110. The automatic-parking control unit23also recognizes whether obstacles are present around the empty slot110, based on images captured by the surrounding cameras40and detection information on target objects detected by the set of sonars41. Then, based on these recognition results, the automatic-parking control unit23generates a target path from the current position of the vehicle1to the empty slot110.
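Returning to the D switch behavior described above, a minimal sketch of the acceptance rule, under which the same operation element requests the B position only from the D position and otherwise requests the D position, could look like the following. The enum and function names are illustrative assumptions.

```python
# Sketch of the D switch acceptance rule: the B position is reachable only via D.
from enum import Enum

class Shift(Enum):
    P = "P"
    R = "R"
    N = "N"
    D = "D"
    B = "B"

def accept_d_switch(current: Shift) -> Shift:
    if current == Shift.D:
        return Shift.B          # operating the D switch while in D requests B
    if current in (Shift.P, Shift.R, Shift.N):
        return Shift.D          # from P, R, or N the D switch always requests D
    return current              # already in B: no change

print(accept_d_switch(Shift.R))   # Shift.D
print(accept_d_switch(Shift.D))   # Shift.B
```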
Next, the automatic-parking control unit23controls the operation of the steering unit60, the drive unit70, and the brake unit80to make the vehicle1automatically travel along the target path to the empty slot110and stop, and thus completes parking of the vehicle1into the empty slot110. In the example ofFIG.3, the vehicle1moves forward to be in the state of T2, then moves backward, and stops in the empty slot110as indicated by T3. 2. Processes of Automatic Parking Control Based on the flowcharts illustrated inFIGS.3to5, the execution procedure of processes of the automatic parking control will be described. In step S1inFIG.3, the automatic-parking control unit23, upon recognizing that the automatic parking switch47is operated, advances the process to step S2and starts controlling the automatic parking at the shift position of one of D, R, and B selected by the driver. In the succeeding step S3, the automatic-parking control unit23determines whether a switching operation to switch to the shift position of one of D, R, and N has been accepted by the shift-position-switching acceptance unit21. Then, the automatic-parking control unit23, if such a switching operation has been accepted, advances the process to step S30inFIG.4, and if the switching operation has not been accepted, advances the process to step S4. In step S4, the automatic-parking control unit23determines whether a switching operation to the P position has been accepted by the shift-position-switching acceptance unit21. Then, the automatic-parking control unit23, if the switching operation has been accepted, advances the process to step S60inFIG.5, and if the switching operation has not been accepted, advances the process to step S5. In step S5, the automatic-parking control unit23determines whether the time has come to switch from the R or P position to a forward position in the automatic traveling along the target path. Then, the automatic-parking control unit23, if the time for switching has come, advances the process to step S20, and, if the time for switching has not come, advances the process to step S6. In step S20, the automatic-parking control unit23switches the shift position of the transmission71to the D position to move the vehicle1forward and continues controlling the automatic parking, and it advances the process to step S6. With the process in step S20, even in the case in which control of the automatic parking starts in step S2in the state in which the shift position of the transmission71is set at the B position, when moving the vehicle1forward after the shift position is switched to the R or P position, the shift position is set to the D position. With this operation, when switching from the R or P position to a forward position, the forward position is set to the D position, as in the foregoing process by the shift-position-switching acceptance unit21responding to the driver's operation of the shift switch46. Thus, it is possible to avoid the sense of incongruity given to the driver, which would be caused by the difference between the manual operation and the operation when switching from the R or P position to the B position as a forward position. In step S6, the automatic-parking control unit23determines whether the traveling of the vehicle1by the target path has finished.
Then, the automatic-parking control unit23, if the traveling of the vehicle1by the target path has finished, advances the process to step S7, and if the traveling of the vehicle1by the target path has not finished, advances the process to step S3. In step S7, the automatic-parking control unit23switches the shift position of the transmission71to the P position and ends the automatic parking control. FIG.4is a flowchart of processes for discontinuing the automatic parking control. In step S30inFIG.4, the automatic-parking control unit23stops the vehicle1and discontinues the control of the automatic parking. In the next step S31, the automatic-parking control unit23determines whether the automatic parking control has been discontinued by operation of the D switch46d. Then, the automatic-parking control unit23, if the automatic parking control was discontinued by operation of the D switch46d, advances the process to step S50, and, if the automatic parking control is discontinued by operation of the R switch46bor the N switch46c, advances the process to step S32. In step S50, the automatic-parking control unit23determines whether the time elapsed since the time point when the shift position of the transmission71was switched from the R position to the D position is within a specified time (for example, 0.3 to 0.5 seconds). Then, the automatic-parking control unit23, if the time elapsed since the time point of switching from the R position to the D position is within the specified time, advances the process to step S52, and, if the time elapsed since the time point of switching from the R position to the D position has exceeded the specified time, advances the process to step S51. In step S52, the automatic-parking control unit23sets the shift position of the transmission71to the D position and advances the process to step S32. With this operation, in the case in which the switching operation to the D position was made just after the shift position was switched from the R position to the D position, it is inferred that the driver has an intention to switch to the D position in the state of the R position, and thus it is possible to switch to the D position according to the driver's intention. In step S51, the automatic-parking control unit23sets the shift position of the transmission71to the B position and advances the process to step S32. With this operation, in the case in which the D switch46dwas operated after a certain time had passed since the shift position had been switched from the R position to the D position, it is inferred that the driver, recognizing that the shift position was set at the D position, intended to switch to the B position and operated the D switch46d, and thus it is possible to switch to the B position according to the driver's intention. In step S32, the automatic-parking control unit23, as illustrated inFIG.6, displays a confirmation screen90on the display48prompting the driver to make a selection to resume or cancel the automatic parking. InFIG.6, the vehicle1started the automatic parking from the state of T1and is being stopped because an operation to discontinue the automatic parking was made in the state of T4. In the next step S33, the automatic-parking control unit23determines whether “resume” has been selected by operation of a not-illustrated selection switch.
Then, if “resume” is selected, the automatic-parking control unit23advances the process to step S55, resumes control of the automatic parking with the selected shift position, and advances the process to step S3inFIG.3. In the example ofFIG.6, with the resumption of control of the automatic parking, the vehicle1, as indicated by T6, automatically travels toward the empty slot110. If “resume” is not selected, the automatic-parking control unit23advances the process from step S33to step S34, and determines whether “cancel” has been selected by operation of the not-illustrated selection switch. Then, the automatic-parking control unit23, if “cancel” has been selected, advances the process to step S35, and, if “cancel” has not been selected, advances the process to step S33. In step S35, the automatic-parking control unit23ends the control of the automatic parking. In the succeeding step S36, the automatic-parking control unit23determines whether the selected shift position is N. Then, the automatic-parking control unit23, if the selected shift position is N, advances the process to step S56, and, if the selected shift position is not N, advances the process to step S38. In step S56, the automatic-parking control unit23switches the shift position to P, and puts the vehicle1in a stopped state. In step S37, the automatic-parking control unit23keeps the selected shift position of the transmission71. With the cancellation of the automatic parking control, in the example ofFIG.6, the vehicle1transitions to ordinary traveling by the driver's operation as indicated by T5. Next,FIG.5is a flowchart of processes for canceling the automatic parking control. In step S60inFIG.5, the automatic-parking control unit23cancels the automatic parking control. In the succeeding step S61, the automatic-parking control unit23determines whether the traveling speed of the vehicle1detected by the speed sensor42is lower than or equal to a specified lower-limit speed (for example, 2 km/h). Then, the automatic-parking control unit23, if the traveling speed of the vehicle1is higher than the lower-limit speed, advances the process to step S70and, if the traveling speed of the vehicle1is lower than or equal to the lower-limit speed, advances the process to step S62. In step S70, the automatic-parking control unit23sets the shift position to N. In the next step S71, the automatic-parking control unit23, upon recognizing from the speed detected by the speed sensor42that the vehicle1has stopped, advances the process to step S61. In step S62, the automatic-parking control unit23sets the shift position of the transmission71to the P position and puts the vehicle1in a stopped state. FIG.7illustrates, as an example, a case in which the automatic parking control of the vehicle1starts from the state of T1, and in the state of T7, the automatic parking control is canceled by the operation of the P switch46aby the driver. The shift position of the transmission71is set to the P position at T7, and the vehicle1is put into a stopped state. Then, in the example ofFIG.7, the vehicle1transitions to ordinary traveling by the driver's operation as indicated by T8. 3. Other Embodiments In the above embodiment, through processes in step S31and steps S50to S52inFIG.4, operation of the D button switches the shift position to either the D position or the B position depending on the time elapsed since the time point of switching from the R position to the D position. 
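A minimal sketch of the elapsed-time decision of steps S50 to S52 summarized just above is shown below: a D-switch operation shortly after the switch from the R position to the D position is treated as a request for D, while a later operation is treated as a request for B. The threshold value is only illustrative of the 0.3 to 0.5 second example given in the description.

```python
# Sketch of the D-vs-B decision after a D switch press; the window is an illustrative value.
def shift_after_d_press(elapsed_since_r_to_d_s: float, window_s: float = 0.4) -> str:
    """Within the window the driver is assumed to have intended D; afterwards, B."""
    return "D" if elapsed_since_r_to_d_s <= window_s else "B"

print(shift_after_d_press(0.2))   # 'D' - pressed just after the R-to-D switch, keep D
print(shift_after_d_press(1.5))   # 'B' - driver saw the D position and requested B
```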
However, as another embodiment, this process may be eliminated, and operation of the D button in the state of the D position may indiscriminately switch the shift position to the B position. Although in the above embodiment, when the automatic parking control is discontinued, the automatic-parking control unit23makes notification by displaying the confirmation screen90prompting the driver to select resumption or cancellation of the automatic parking control as illustrated inFIG.6, the automatic-parking control unit23may make notification by outputting voice prompting the driver to select resumption or cancellation of the automatic parking control from a speaker (not illustrated) included in the vehicle1. In this case, the driver's voice for selection and instruction may be recognized by a microphone (not illustrated) included in the vehicle1. Note thatFIG.1is a schematic diagram illustrating the configurations of the vehicle1and the vehicle control device10divided according to the main processes, to make it easy to understand the invention of the present application, and hence the configuration of the vehicle control device10may be divided differently. The processes of the constituents may be executed by one hardware unit or a plurality of hardware units. The processes of the constituents illustrated inFIGS.3to5may be executed by one program or a plurality of programs. 4. Configurations Supported by Above Embodiments The above embodiments are specific examples of the configurations described below. (Configuration 1) A vehicle control device that controls operation of a vehicle including a transmission having, as shift positions for the forward direction, a first forward position and a second forward position having a larger speed reduction ratio than the first forward position, the vehicle control device including: a shift-position-switching acceptance unit that accepts switching operation of the shift position of the transmission by a driver but does not accept switching operation from a shift position other than the first forward position and the second forward position to the second forward position without going through the first forward position; and an automatic-parking control unit that executes automatic parking control of the vehicle and that, in a case of moving the vehicle forward during execution of the automatic parking control in a state in which the shift position of the transmission is set at a shift position other than the first forward position and the second forward position, switches the shift position of the transmission to the first forward position to move the vehicle forward.
With the vehicle control device according to configuration 2, in the case in which the automatic parking control is discontinued during execution of the automatic parking control by the driver's switching operation to the second forward position, it is inferred that the driver has a clear intention of switching to the second forward position. Accordingly, the automatic parking control is canceled or resumed with the shift position of the transmission at the second forward position, and thus, it is possible to perform shift-position switching reflecting the driver's intention. (Configuration 3) The vehicle control device according to configuration 1 or 2, in which the shift-position-switching acceptance unit accepts switching operation to the first forward position and the second forward position according to the driver's operation of a single operation element, and when the single operation element is operated within a specified time from the time point when the shift position of the transmission is switched from a reverse position to the first forward position during execution of the automatic parking control, the automatic-parking control unit keeps the shift position of the transmission at the first forward position. With the vehicle control device according to configuration 3, in the case in which the single operation element was operated just after the shift position was switched from the reverse position to the first forward position, it is inferred that the driver operated the single operation element intending to switch to the first forward position in the state of the reverse position, and thus it is possible to switch to the first forward position according to the driver's intention. REFERENCE SIGNS LIST 1: vehicle, 10: vehicle control device, 20: processor, 21: shift-position-switching acceptance unit, 22: driving control unit, 23: automatic-parking control unit, 30: memory, 31: control program, 40: surrounding camera, 41: a set of sonars, 42: speed sensor, 43: steering-angle sensor, 44: brake-pedal sensor, 45: accelerator-pedal sensor, 46: shift switch, 47: automatic parking switch, 48: display, 60: steering unit, 70: drive unit, 80: brake unit, 90: confirmation screen, 110: empty slot | 22,904 |
11858500 | EXPLANATION OF REFERENCE 1: VEHICLE TRAVELING CONTROLLER, 2: OWN VEHICLE DETECTION SENSOR, 3: ENVIRONMENTAL SITUATION ACQUISITION SECTION, 4: ECU, 5: TRAVELING OUTPUT SECTION, 41: MAP DATABASE, 42: OWN VEHICLE ROUTE GENERATION SECTION, 43: ROUTE DANGER DEGREE EVALUATION SECTION, 44: RULE OBSERVANCE CONTROL SECTION, 45: RULE PRIORITY STORAGE SECTION, 46: TOLERABLE DANGER DEGREE STORAGE SECTION BEST MODE FOR CARRYING OUT THE INVENTION Hereinafter, an embodiment of the invention will be described in detail with reference to the accompanying drawings. In the drawings, the same parts are represented by the same reference numerals, and overlapping description will be omitted. FIG.1is a diagram schematically showing the configuration of a vehicle traveling controller according to an embodiment of the invention. As shown inFIG.1, a vehicle traveling controller1of this embodiment is installed in a vehicle to perform traveling control of the vehicle and is used for automatic driving of the vehicle, for example. The vehicle traveling controller1includes an own vehicle detection sensor2and an environmental situation acquisition section3. The own vehicle detection sensor2is a detection sensor for acquiring position information, vehicle speed information, and the like of the own vehicle. As the own vehicle detection sensor2, for example, a GPS (Global Positioning System) or a wheel speed sensor is used. The position information of the own vehicle can be acquired by the GPS (Global Positioning System), and the vehicle speed information can be acquired by the wheel speed sensor. The environmental situation acquisition section3functions as environmental situation acquisition means for acquiring environmental information around the own vehicle. As the environmental situation acquisition section3, for example, an inter-vehicle communication device, a road-vehicle communication device, a radar sensor using millimeter waves or laser, and the like are used. When the inter-vehicle communication device and the road-vehicle communication device are used, position information and vehicle speed information of other vehicles can be acquired. When a millimeter-wave radar sensor or the like is used, position information and relative speed information of other vehicles and obstacles on the road can be acquired. The vehicle traveling controller1includes an ECU (Electronic Control Unit)4. The ECU4performs overall control for the vehicle traveling controller1, and is composed of, for example, a computer including a CPU, a ROM, and a RAM. The ECU4is connected to the own vehicle detection sensor2and the environmental situation acquisition section3, and receives and stores own vehicle information, other vehicle information, and the like acquired by the own vehicle detection sensor2and the environmental situation acquisition section3. The ECU4functions as a normal traveling control unit which instructs a vehicle to travel in accordance with a normal traveling rule on the basis of the situation of the own vehicle and the environmental situation around the own vehicle. The ECU4also functions as a traveling propriety determination unit which determines whether the vehicle traveling in accordance with the normal traveling rule becomes proper traveling or not on the basis of environmental information around the vehicle.
The ECU4also functions as an emergency evacuation traveling control unit which, when it is determined that the vehicle traveling in accordance with the normal traveling rule does not become proper traveling, instructs the vehicle to perform emergency evacuation traveling which is not in accordance with the normal traveling rule. The normal traveling rule corresponds to, for example, a normal traffic rule, and traffic rules and regulations defined by the road traffic law are set. As the normal traveling rule, rules regarding vehicle traveling other than the traffic rules may be set. In this embodiment, description will be made for a case where the normal traffic rules are used as the normal traveling rule. The ECU4includes a map database41, an own vehicle route generation section42, a route danger degree evaluation section43, a rule observance control section44, a rule priority storage section45, and a tolerable danger degree storage section46. The map database41is a database for storing map information of a road as a course for vehicle traveling. The own vehicle route generation section42generates a route of the own vehicle on the basis of position information of the vehicle, map information, and the like. For example, own vehicle route generation section42reads a travelable area around the current position of the own vehicle on the basis of position information of the own vehicle and map information around the own vehicle, generates an operation sequence of the own vehicle for a predetermined time (for example, several seconds) after, and acquires one route candidate. This route candidate acquisition processing is repeated a predetermined number of times to generate a plurality of route candidates, and the plurality of route candidates are output as a route candidate signal. The route danger degree evaluation section43evaluates the degree of danger in the route of the own vehicle. For example, the route danger degree evaluation section43receives environmental information around the own vehicle, calculates a peripheral mobile object route distribution of other mobile objects, such as other vehicles and the like, around the own vehicle from the current time to a predetermined time after on the basis of the environmental information, extracts one route candidate from the route candidate signal, and calculates the degree of danger of a route candidate from the route candidate and the peripheral mobile object route distribution. For all the route candidates, the degree of danger is calculated, and a route candidate group signal with danger degree evaluation obtained by appending the degrees of danger to the respective route is output. The rule observance control section44decides a route such that the vehicle observes the set traffic rules. For example, the rule observance control section44receives the route candidate group signal with danger degree evaluation output from the route danger degree evaluation section43, and turns on determination flags of all traffic rules of a traffic rule table. As shown inFIG.2, the traffic rule table is a table in which priority is placed on a plurality of traffic rules, and includes a determination flag for determining rule observance. The determination flag is turned on for rules that should be observed, and turned off for rules that are not observed. 
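As a concrete illustration of the route candidate generation described above, the following sketch repeatedly samples a short operation sequence over the prediction horizon to obtain a plurality of route candidates. The sampling ranges, the horizon length, and the data layout are assumptions made for the example; the text only states that the single-candidate acquisition processing is repeated a predetermined number of times.

```python
import random
from dataclasses import dataclass

@dataclass
class RouteCandidate:
    # One candidate = an operation sequence (steering angle, acceleration) of the
    # own vehicle for the next few seconds; the field layout is an assumption.
    controls: list[tuple[float, float]]

def generate_route_candidates(horizon_steps: int = 30,
                              num_candidates: int = 50) -> list[RouteCandidate]:
    """Repeat the single-candidate acquisition a predetermined number of times
    within the travelable area to build the route candidate group."""
    candidates = []
    for _ in range(num_candidates):
        controls = [
            (random.uniform(-0.3, 0.3),   # steering angle [rad], assumed range
             random.uniform(-3.0, 2.0))   # acceleration [m/s^2], assumed range
            for _ in range(horizon_steps)
        ]
        candidates.append(RouteCandidate(controls))
    return candidates
```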
As the traffic rules, for example, observance of signal indication, observance of slow at place with blocked view, observance of stop, observance of do not enter, observance of stop with space on roadside, observance of no stray, observance of minimum speed, and observance of speed limit are set. The rule observance control section44extracts respective route candidate information from the route candidate group signal, and determines whether or not the traffic rule table with the determination flag turned on is observed. Then, processing is performed for all the route candidates for putting the relevant route candidate in an output route candidate buffer when all the traffic rules with the determination flag turned on are observed, or for discarding the relevant route candidate when not all of the traffic rules with the determination flag turned on are observed. Presence/absence of route candidates in the output route candidate buffer is confirmed, and when there are route candidates in the output route candidate buffer, the degree of danger in a route candidate with the minimum degree of danger from among the route candidates is compared with the tolerable degree of danger stored in the tolerable danger degree storage section46. When the degree of danger in the route candidate with the minimum degree of danger is smaller than the tolerable degree of danger, the route candidate with the minimum degree of danger is set as a traveling route of the vehicle, and a control signal is output to the traveling output section5such that the vehicle travels along the route. When the degree of danger in the route with the minimum degree of danger is not smaller than the tolerable degree of danger, and when there is no route candidate in the output route candidate buffer, the priority of the traffic rule is read from the rule priority storage section45. Then, when there is no traffic rule with the determination flag turned on, the route candidate with the minimum degree of danger is set as a traveling route of the vehicle, and a control signal is output to the traveling output section5such that the vehicle travels along the route. When there are traffic rules with the determination flag turned on, the determination flag of a traffic rule with the lowest priority from among the traffic rules with the determination flag turned on is turned off, and processing is repeatedly performed again for the route candidates for determining whether or not the traffic rule table with the determination flag turned on is observed. The own vehicle route generation section42, the route danger degree evaluation section43, and the rule observance control section44provided in the ECU4may be implemented by installing a program on a computer or may be implemented by individual hardware. As shown inFIG.1, a traveling output section5is connected to the ECU4. The traveling output section5performs driving/traveling of the vehicle, for example, traveling drive, braking, and steering operations in response to the control signal of the ECU4. The traveling output section5corresponds to, for example, a traveling drive ECU, a braking ECU, a steering ECU, and the like. Next, an operation of the vehicle traveling controller1of this embodiment will be described. FIG.3is a flowchart showing an operation of the vehicle traveling controller1of this embodiment. For example, control processing ofFIG.3is repeatedly executed by the ECU4in a predetermined cycle.
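The selection logic of the rule observance control section can be summarized as the following loop, which keeps only candidates that observe every rule whose determination flag is turned on and, when no remaining candidate is safe enough, turns off the flag of the lowest-priority rule and tries again. This is a minimal sketch; the function signature and the observes callback are assumptions for illustration, since the patent does not specify this interface.

```python
def select_route(candidates, danger, tolerable_danger, rules_by_priority, observes):
    """Sketch of the rule observance control section.

    candidates        -- list of route candidates
    danger            -- dict mapping a candidate to its evaluated degree of danger
    tolerable_danger  -- threshold from the tolerable danger degree storage section
    rules_by_priority -- traffic rules ordered from highest to lowest priority
    observes(c, rs)   -- True if candidate c observes every rule in rs (assumed helper)
    """
    active_rules = list(rules_by_priority)       # determination flags all turned on
    while True:
        output_buffer = [c for c in candidates if observes(c, active_rules)]
        if output_buffer:
            best = min(output_buffer, key=lambda c: danger[c])
            if danger[best] < tolerable_danger:
                return best                      # travel along this route
        if not active_rules:
            # No determination flag remains on: output the minimum-danger candidate.
            return min(candidates, key=lambda c: danger[c])
        active_rules.pop()                       # turn off the lowest-priority rule
```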
With regard to the operation of the vehicle traveling controller1, first, as shown in S10ofFIG.3, own vehicle information acquisition processing is performed. The own vehicle information acquisition processing is processing for acquiring position information and vehicle speed information of the own vehicle, and is performed, for example, on the basis of an output signal of the own vehicle detection sensor2. With this own vehicle information acquisition processing, the current position, vehicle speed, and traveling direction of the own vehicle can be specified. The process progresses to S12, and environmental information acquisition processing is performed. The environmental information acquisition processing is processing for acquiring environmental information around the own vehicle, and is performed, for example, on the basis of an output signal of the environmental situation acquisition section3. With this environmental information acquisition processing, the positions, movement speed, movement directions, and the like of other vehicles, other mobile objects, and stationary objects can be specified. The process progresses to S14, and own vehicle route generation processing is performed. The own vehicle route generation processing is processing for generating a route of the own vehicle on the basis of position information of the vehicle, map information, and the like. For example, a travelable area around the current position of the own vehicle is read on the basis of the position information of the own vehicle and the map information around the own vehicle, and a route candidate of the own vehicle to a predetermined time after is generated. The process progresses to S16, and route danger degree evaluation processing is performed. The route danger degree evaluation processing is processing for evaluating the degree of danger in vehicle traveling of a route candidate generated in S14. For example, a peripheral mobile object route distribution of mobile objects, such as other vehicles or the like around the own vehicle, from the current time to a predetermined time after is calculated on the basis of the environmental information around the own vehicle, and the degree of danger of a route candidate is calculated from the route candidate of the own vehicle and the peripheral mobile object route distribution. In this case, it is preferable that stationary objects, such as an obstacle and the like, as well as mobile objects around the own vehicle are taken into consideration. By calculating the degree of danger of the route candidate from the position of the stationary objects and the route candidate of the own vehicle, dangerousness of collision against the obstacle on the route can be also evaluated. The process progresses to S18, and it is determined whether or not the degree of danger in vehicle traveling is equal to or smaller than a predetermined value. For example, it is determined whether the vehicle traveling in accordance with the normal traffic rule becomes proper traveling or not on the basis of the environmental situation, such as the traveling states of other vehicles around the own vehicle, presence/absence of an obstacle on the route, and the like. In this case, when a plurality of normal traffic rules are set, it may be determined whether the vehicle traveling in accordance with all of the plurality of normal traffic rules becomes proper traveling. 
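The danger degree evaluation in S16 compares the own vehicle's predicted positions along a route candidate with the peripheral mobile object route distribution and with stationary obstacles. A minimal sketch is shown below; scoring the danger as the inverse of the closest predicted separation is an assumption for illustration, since the patent does not prescribe a particular metric.

```python
import math

def route_danger_degree(own_positions, peripheral_routes, obstacle_positions) -> float:
    """Sketch of the route danger degree evaluation (S16).

    own_positions      -- [(x, y), ...] predicted own positions per time step
    peripheral_routes  -- [[(x, y), ...], ...] predicted positions of other mobile objects
    obstacle_positions -- [(x, y), ...] stationary obstacles on or near the route
    """
    closest = math.inf
    for t, (x, y) in enumerate(own_positions):
        for route in peripheral_routes:
            if t < len(route):
                ox, oy = route[t]
                closest = min(closest, math.hypot(x - ox, y - oy))
        for ox, oy in obstacle_positions:
            closest = min(closest, math.hypot(x - ox, y - oy))
    if closest == math.inf:
        return 0.0
    # The smaller the closest predicted separation, the higher the degree of danger.
    return 1.0 / max(closest, 1e-3)
```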
When it is determined in S18that the vehicle traveling in accordance with the normal traffic rule becomes proper traveling, normal traveling control processing is performed (S20). The normal traveling control processing is processing for instructing the vehicle to perform vehicle traveling in accordance with the normal traffic rule, and is performed in response to a control signal for instructing the vehicle to perform vehicle traveling in accordance with the normal traffic rule output from the ECU4to the traveling output section5. When it is determined in S18that the vehicle traveling in accordance with the normal traffic rule does not become proper traveling, emergency evacuation traveling control processing is performed (S22). The emergency evacuation traveling control processing is processing for instructing the vehicle to perform emergency evacuation traveling which is not in accordance with the normal traffic rule, and is performed in response to a control signal for instructing the vehicle to perform emergency evacuation traveling, which is not in accordance with the normal traffic rule, output from the ECU4to the traveling output section5. For example, when a plurality of normal traffic rules are set, if vehicle traveling in accordance with all of the plurality of normal traffic rules does not become proper traveling, the emergency evacuation traveling control processing is executed. In this case, it is preferable to instruct the vehicle to perform emergency evacuation traveling which is not in accordance with traffic rules with low priority from among the plurality of traffic rules. That is, it is possible to prevent emergency evacuation traveling which is not in accordance with all of the normal traffic rules from being performed, so the vehicle can perform safe vehicle traveling while observing a traveling rule with high priority. The emergency evacuation traveling control processing will be described specifically. For example, as shown inFIG.4, when an own vehicle80is traveling on a road with opposing two lanes, if a vehicle82coming from the opposite direction is traveling on the own lane so as to pass a preceding vehicle81coming from the opposite direction, the own vehicle80is hard to perform safe vehicle traveling in accordance with the normal traffic rule since the vehicles81and82coming from the opposite direction are traveling side by side. Even though the vehicle80stops traveling, it is difficult to avoid danger. In this case, the vehicle traveling controller1executes emergency evacuation traveling which is not in accordance with the normal traffic rule, observance of do not enter. That is, the own vehicle80is instructed to travel a side road91that a vehicle is prohibited from entering. Thus, collision against the vehicle82coming from the opposite direction can be avoided, and the safety of vehicle traveling in the case of an emergency can be increased. When emergency evacuation traveling which is not in accordance with the normal traffic rule is performed, it is preferable to confirm whether or not there is no dangerousness when the emergency evacuation traveling is performed. For example, it is preferable that, after it is confirmed that no vehicle is on the side road91, the own vehicle80is instructed to travel the side road91. Therefore, the safety of vehicle traveling in the case of an emergency can be further increased. 
As shown inFIG.5, when the own vehicle80is traveling on the road with opposing two lanes and a vehicle83rushes out from the side road92, the own vehicle80will inevitably collide against the vehicle83if it keeps traveling on the own lane. Even if the vehicle80performs full braking, it is difficult to avoid danger. In this case, the vehicle traveling controller1executes emergency evacuation traveling which is not in accordance with the normal traffic rule, observance of no passing. That is, the own vehicle80is instructed to pass the opposing lane. Thus, contact with and collision against the vehicle83can be avoided, and the safety of vehicle traveling in the case of an emergency can be increased. When emergency evacuation traveling which is not in accordance with the normal traffic rule is performed, it is preferable to confirm whether or not there is no dangerousness when the emergency evacuation traveling is performed. For example, it is preferable that, after it is confirmed that there is no vehicle coming from the opposite direction on the opposing lane, the own vehicle80is instructed to pass the opposing lane. Therefore, the safety of vehicle traveling in the case of an emergency can be further increased. As shown inFIG.6, when the own vehicle80is traveling on a highway, if a fallen object93is in front of the own vehicle80, and a vehicle84is traveling in parallel to the own vehicle80, the own vehicle80cannot change a lane, so it is difficult for the own vehicle80to avoid collision against the fallen object93. Further, there is a minimum speed limit, so in the case of traveling in accordance with the normal traffic rule, the vehicle80cannot stop traveling and it is difficult to avoid danger by stopping. In this case, the vehicle traveling controller1executes emergency evacuation traveling which is not in accordance with the normal traffic rule, observance of minimum speed. That is, the own vehicle80is instructed to reduce the speed so as not to collide against the fallen object93or the own vehicle80is instructed to stop traveling. Therefore, collision against the fallen object93can be avoided, and the safety of vehicle traveling in the case of an emergency can be increased. When emergency evacuation traveling which is not in accordance with the normal traffic rule is performed, it is preferable to confirm whether or not there is no dangerousness when the emergency evacuation traveling is performed. For example, it is preferable that, after it is confirmed that there is no succeeding vehicle or there is a sufficient distance from the succeeding vehicle, the own vehicle80is instructed to reduce the speed or to stop. Therefore, the safety of vehicle traveling in the case of an emergency can be further increased. As shown inFIG.7, when the own vehicle80is traveling on the road with opposing two lanes, if a fallen object93is in front of the own vehicle80, the own vehicle80cannot change a lane, so it is difficult for the own vehicle80to avoid collision against the fallen object93. In this case, the vehicle traveling controller1executes emergency evacuation traveling which is not in accordance with the normal traffic rule, observance of no passing. That is, the vehicle80is instructed to pass the opposing lane. Thus, collision against the fallen object93can be avoided, and the safety of vehicle traveling in the case of an emergency can be increased.
When emergency evacuation traveling which is not in accordance with the normal traffic rule is performed, it is preferable to confirm whether or not there is no dangerousness when the emergency evacuation traveling is performed. For example, it is preferable that, after it is confirmed that there is no vehicle coming from the opposite direction on the opposing lane, the own vehicle80is instructed to pass the opposing lane. Therefore, the safety of vehicle traveling in the case of an emergency can be further increased. The emergency evacuation traveling is performed for the purpose of emergency evacuation in the case of vehicle traveling which is not in accordance with the normal traffic rules individually defined, specifically, is performed for the purposes of preventing danger in vehicle traveling, improving traffic safety, and preventing disturbance to the road traffic under the road traffic law. After the processing of S20or S22ofFIG.3ends, a sequence of control processing ends. As described above, according to the vehicle traveling controller1of this embodiment, when vehicle traveling in accordance with the normal traffic rule does not become proper traveling, the vehicle is instructed to perform emergency evacuation traveling which is not in accordance with the normal traffic rule, so that emergency evacuation traveling with higher safety can be performed as compared with vehicle traveling in accordance with the normal traveling rule. Therefore, the safety of vehicle traveling can be improved. According to the vehicle traveling controller1of this embodiment, when it is determined that the vehicle traveling in accordance with all of a plurality of traffic rules does not become proper traveling, emergency evacuation traveling which is not in accordance with traffic rules with low priority from among the plurality of normal traffic rules is performed, so the vehicle can perform safe vehicle traveling while observing a traffic rule with high priority. In this embodiment, an example of the vehicle traveling controller according to the invention has been described. However, the vehicle traveling controller according to the invention is not limited to the example but may be modified or applied to others so as not to change the gist of the invention described in the appended claims. Although in this embodiment a case where a normal traffic rule is set as the normal traveling rule has been described, a rule regarding a vehicle traveling state may be used as the normal traveling rule. Further, a rule for limiting abrupt acceleration equal to or more than a set value or a rule for limiting abrupt steering equal to or more than a set value may be used as the normal traveling rule. INDUSTRIAL APPLICABILITY According to the invention, emergency evacuation traveling is performed in the case of an emergency, so traveling control with high safety is performed.
11858501 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art may easily carry out the embodiments. The present invention may, however, be embodied in many different forms, and should not be construed as being limited to the embodiments set forth herein. In the drawings, parts irrelevant to the description of embodiments of the present invention will be omitted for clarity. Like reference numerals refer to like elements throughout the specification. Throughout the specification, when a certain part “includes” or “comprises” a certain component, this indicates that other components are not excluded, and may be further included unless otherwise noted. The same reference numerals used throughout the specification refer to the same constituent elements. Before explaining a vehicle and a method of controlling the same according to embodiments of the present invention, the structure and control system of a hybrid electric vehicle will be first described as an example of vehicles to which embodiments are applicable. Of course, except for parts peculiar to a hybrid electric vehicle, the embodiments can also apply to general vehicles equipped with internal combustion engines, as well as motorized vehicles such as electric vehicles (EVs) or fuel cell electric vehicles (FCEVs), other than hybrid electric vehicles. FIG.2is a diagram showing an example of the structure of a powertrain of a parallel-type hybrid electric vehicle to which embodiments of the present invention are applicable. Referring toFIG.2, the powertrain of the hybrid electric vehicle employs a parallel-type hybrid system in which a drive motor140and an engine clutch (EC)130are disposed between an internal combustion engine (ICE)110and a transmission150. In such a vehicle, when a driver steps on an accelerator pedal after starting the vehicle, the motor140is first driven using the power of a battery in the state in which the engine clutch130is open, and then the power of the motor140is transmitted to the wheels via the transmission150and a final drive (FD)160in order to rotate the wheels (i.e. EV mode). When greater power is needed as the vehicle is gradually accelerated, a starter/generator motor120operates to drive the engine110. When the rotational speeds of the engine110and the motor140become equal, the engine clutch130becomes locked, with the result that both the engine110and the motor140, or only the engine110, drives the vehicle (i.e. transitioning from an EV mode to an HEV mode). When a predetermined engine OFF condition is satisfied, for example, when the vehicle decelerates, the engine clutch130becomes open, and the engine110is stopped (i.e. transitioning from the HEV mode to the EV mode). In addition, when the hybrid electric vehicle brakes, the power of the wheels is converted into electrical energy, and the battery is charged with the electrical energy, which is referred to as recovery of braking energy or regenerative braking. The starter/generator motor120serves as a starter motor when the engine is started, and operates as a generator when the rotational energy of the engine is collected after the engine is started or when the engine is turned off. Therefore, the starter/generator motor120may be referred to as a “hybrid starter generator (HSG)”, or may also be referred to as an “auxiliary motor” in some cases. 
The driving mode of the hybrid electric vehicle will be described below in detail based on the above-described structure. The EV mode is mainly used in a situation in which a vehicle speed is low and required torque is low, and in the EV mode, the engine clutch130is opened and torque is transferred to the wheels using only the motor140as a power source. The HEV mode is mainly used in a situation in which a vehicle speed is high and required torque is high, utilizes the engine110and the motor140as a power source, and may be subdivided into an HEV series mode and an HEV parallel mode. In the HEV series mode, the engine clutch130is opened (i.e. connection between the engine110and the drive shaft is interrupted), the power of the engine110is used to generate electrical energy by the HSG120, and only the motor140directly generates power. On the other hand, in the HEV parallel mode, the engine clutch130is locked, with the result that both the power of the engine110and the power of the motor140are transferred to the wheels. FIG.3is a block diagram showing an example of the control system of the hybrid electric vehicle to which embodiments of the present invention are applicable. Referring toFIG.3, in the hybrid electric vehicle to which embodiments of the present invention are applicable, the internal combustion engine110may be controlled by an engine control unit210. The torque of the starter/generator motor120and the drive motor140may be controlled by a motor control unit (MCU)220. The engine clutch130may be controlled by a clutch control unit230. Here, the engine control unit210is also referred to as an engine management system (EMS). In addition, the transmission150is controlled by a transmission control unit250. Each of the control units may be connected to a hybrid control unit (HCU)240, which is an upper-level control unit that controls the overall process of mode switching, and may provide information necessary for engine clutch control at the time of switching the driving mode or shifting gears and/or information necessary for engine stop control to the hybrid control unit240, or may perform an operation in response to a control signal under the control of the hybrid control unit240. For example, the hybrid control unit240determines whether to perform mode switching between the EV mode and the HEV mode depending on the travel state of the vehicle. To this end, the hybrid control unit determines an open time of the engine clutch130and controls hydraulic pressure (in the case of a wet engine clutch) or controls torque capacity (in the case of a dry engine clutch) when the engine clutch is opened. In addition, the hybrid control unit240may determine the state of the engine clutch130(lock-up, slip, open, etc.), and may control the time at which to stop injecting fuel into the engine110. In addition, the hybrid control unit may transmit a torque command for controlling the torque of the starter/generator motor120to the motor control unit220in order to control stopping of the engine, thereby controlling recovery of the rotational energy of the engine. In addition, the hybrid control unit240may control the lower-level control units so as to determine the mode-switching condition and perform mode switching at the time of performing driving-mode-switching control. 
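The mode usage described above (EV at low speed and low required torque, HEV otherwise, with the series/parallel distinction depending on whether the engine clutch is locked) can be illustrated by the following minimal sketch. The speed and torque thresholds and the series/parallel split used here are assumed values for the example, not values given in the text.

```python
from enum import Enum

class PowertrainMode(Enum):
    EV = "EV"                      # engine clutch open, drive motor only
    HEV_SERIES = "HEV series"      # clutch open, engine drives the HSG as a generator
    HEV_PARALLEL = "HEV parallel"  # clutch locked, engine and motor both drive the wheels

def select_driving_mode(vehicle_speed_kph: float, required_torque_nm: float,
                        speed_threshold_kph: float = 60.0,
                        torque_threshold_nm: float = 150.0) -> PowertrainMode:
    """EV at low speed and low required torque, HEV otherwise; within HEV, the
    series/parallel split below is a simple assumed heuristic for illustration."""
    if (vehicle_speed_kph < speed_threshold_kph
            and required_torque_nm < torque_threshold_nm):
        return PowertrainMode.EV
    if required_torque_nm < 2.0 * torque_threshold_nm:
        return PowertrainMode.HEV_SERIES
    return PowertrainMode.HEV_PARALLEL
```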
Of course, it will be apparent to those skilled in the art that the connection relationships between the control units and the functions/division of the control units described above are merely illustrative, and are not limited by the names thereof. For example, the hybrid control unit240may be implemented such that the function thereof is provided by any one of the control units other than the hybrid control unit240or such that the function thereof is distributed and provided by two or more of the other control units. The terms “unit” and “control unit” forming part of the names of the motor control unit (MCU) and the hybrid control unit (HCU) are merely terms that are widely used in the naming of a controller for controlling a specific function of a vehicle, and should not be construed as meaning a generic function unit. For example, in order to control the function peculiar thereto, each control unit may include a communication device, which communicates with other control units or sensors, a memory, which stores therein an operating system, logic commands, and input/output information, and one or more processors, which perform determinations, calculations, and decisions necessary for control of the function peculiar thereto. The above-described configuration inFIGS.2and3is merely an exemplary configuration of a hybrid electric vehicle. It will be apparent to those skilled in the art that the hybrid electric vehicle to which embodiments of the present invention are applicable is not limited to having the above-described configuration. Hereinafter, steering-based emergency braking function control according to embodiments of the present invention will be described based on the above-described configuration of the hybrid electric vehicle. An embodiment of the present invention proposes technology for controlling, when an obstacle is present ahead of a host vehicle, activation of an emergency braking function or an activation condition according to a turning path based on the distance to the forward obstacle and the steering angle. The configuration of a control device for implementing the above embodiment will be described below with reference toFIG.4. FIG.4is a diagram showing an example of the configuration of an emergency braking entry control device according to an embodiment of the present invention. Referring toFIG.4, an emergency braking entry control device300according to an embodiment may include a determiner310and a controller320. The determiner310may include an entry condition determiner311, an obstacle determiner312, and a steering angle determiner313. The controller320may include an emergency braking OFF controller321, an emergency braking entry distance changer322, and a powertrain mode controller323. Hereinafter, the operation of the components of the emergency braking entry control device300will be described in more detail. The determiner310may receive information on whether the hybrid electric vehicle is ready for travel (i.e. HEV Ready, which corresponds to “IG on” of a general vehicle), information on the vehicle speed, information on the currently selected gear stage (P, R, N, D, etc.), information on the heading of an object located on the travel path of the vehicle (i.e. ahead of the vehicle) and the distance to the object, and information on the steering angle according to manipulation of the steering wheel. The information on the currently selected gear stage may be acquired from the transmission control unit250. 
The information on the heading of the obstacle and the distance thereto may be acquired through an obstacle detection device, for example, a sensor capable of detecting a distance, such as a vision sensor, a radar sensor, a LiDAR sensor, or an ultrasonic sensor, or through a control unit controlling the obstacle detection device, e.g. an advanced driver assistance system (ADAS) control unit. The information on the vehicle speed may be transmitted from a wheel speed sensor. The information on the steering angle may be acquired from the steering control unit. However, the embodiments are not limited thereto. The entry condition determiner311may determine emergency braking control entry according to the embodiment when the driver manipulates the accelerator pedal in the situation in which the current state of the vehicle is “HEV Ready”, in which a gear stage (i.e. the D-range or the R-range) is locked so that the vehicle travels in one direction, and in which the distance to an object present on the travel path of the vehicle is less than a predetermined distance Dthr. The entry condition determiner311determines whether to enter a mode of controlling the emergency braking function depending on whether control entry conditions are satisfied. The control entry conditions are as follows:1) HEV Ready (EV Ready or IG On is also possible depending on the powertrain)2) D-range3) Detection of obstacle ahead In summary, the control entry conditions can be determined to be satisfied when the vehicle detects an obstacle ahead in the state of being capable of traveling in a forward direction using the power of the power source. The obstacle determiner312determines whether to enter a mode of controlling the emergency braking function based on the position of the obstacle with respect to the travel direction of the vehicle according to the steering angle. For example, in the situation shown inFIG.1, the host vehicle10is steered to the right, and the forward object20is present only in the area to the left and front of the host vehicle10with respect to the travel direction (i.e. the obliquely right-upward direction in the drawing) determined by steering manipulation, and no object is present in either the area directly ahead of or the area to the right and front of the host vehicle10. In the case in which the host vehicle is steered to the right, the obstacle determiner312may determine control entry when no obstacle is present in either the area directly ahead of or the area to the right of the host vehicle with respect to the travel direction determined by steering manipulation. On the other hand, in the case in which the host vehicle is steered to the left, the obstacle determiner312may determine control entry when no obstacle is present in either the area directly ahead of or the area to the left of the host vehicle with respect to the travel direction determined by steering manipulation. This means that the obstacle determiner312determines control entry when no obstacles other than the forward object20are present on the travel path corresponding to the input steering angle. The steering angle determiner313may determine whether to enter a mode of controlling the emergency braking function and the type of control based on the steering angle and the distance to the forward obstacle. The steering angle determiner313may determine control entry when the steering angle is larger than a collision steering angle. 
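Putting the three entry conditions and the obstacle-position check together, the determiner side can be sketched as follows. The parameter names, the boolean interface, and the way the turning-path check is expressed are assumptions made for the example; the text only defines the conditions themselves.

```python
def control_entry_allowed(hev_ready: bool, gear: str, accelerator_pressed: bool,
                          forward_obstacle_distance_m: float | None,
                          distance_threshold_m: float,
                          obstacle_on_turning_path: bool) -> bool:
    """Sketch of the entry condition determiner (311) and obstacle determiner (312)."""
    # 1) HEV Ready and 2) a forward gear stage (D-range) are required.
    if not (hev_ready and gear == "D"):
        return False
    # 3) A forward obstacle must be detected closer than the predetermined distance
    #    while the driver is manipulating the accelerator pedal.
    if forward_obstacle_distance_m is None or not accelerator_pressed:
        return False
    if forward_obstacle_distance_m >= distance_threshold_m:
        return False
    # No obstacle other than the forward object may lie on the travel path
    # determined by the steering input (directly ahead or on the steered side).
    return not obstacle_on_turning_path
```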
The steering angle determiner313may determine the type of control to be “cautious turning” under the condition of “safe steering angle>steering angle>collision steering angle”, and may determine the type of control to be “safe turning” under the condition of “steering angle>safe steering angle”. Here, the collision steering angle is the maximum steering angle at which the host vehicle and the forward obstacle collide with each other, and the safe steering angle is a steering angle at which the host vehicle and the forward obstacle travel while maintaining the minimum safe distance (e.g. α inFIG.6, which will be described later) or more therebetween. The collision steering angle and the safe steering angle may be calculated using the Ackerman geometry model, which is widely used for modeling of turning of a vehicle according to steering manipulation. FIG.5is a diagram showing an example of a geometric model applied when a vehicle turns at a low speed. Referring toFIG.5, a predicted travel path of the vehicle according to the steering angle input by the driver may be obtained through the Ackerman geometry model. In the Ackerman geometry model, when the predicted travel path of the vehicle is a circular turning path, the radius R of circular turning may be determined based on the steering angle σo of the outer wheel for turning, as shown in Equation 1 below. σo≅L/(R+T/2) (Equation 1) In Equation 1 above, R represents the radius of circular turning, T represents the tread (or track) of the vehicle, and L represents the wheelbase of the vehicle. Here, R is the distance from the center of circular turning to the center of the tread, and thus the substantial turning radius used for determination of the possibility of a collision with an obstacle present outside the turning direction during turning is the distance from the center of circular turning to the outer wheel, which is equivalent to the sum of T/2 and R. The predicted turning path according to the steering angle may be obtained through calculation of the radius using Equation 1 above. FIG.6is a diagram for explaining the collision steering angle and the safe steering angle according to an embodiment of the present invention. Referring toFIG.6, the steering angle at which the host vehicle10moves a distance D, which is the distance to the forward obstacle20, in the y-axis direction while moving a distance equivalent to the tread T thereof in the x-axis direction may be obtained as the collision steering angle. In addition, the steering angle at which the host vehicle10moves a distance equivalent to the value D-α in the y-axis direction while moving a distance equivalent to the tread T thereof in the x-axis direction may be obtained as the safe steering angle. Here, α is the minimum safe distance between the forward obstacle20and the body of the host vehicle10during turning (or the outer wheel of the host vehicle10during turning). This minimum safe distance may be set through experimentation, and may be generally set to 1 m. However, the embodiments are not limited thereto. Referring again toFIG.4, when all of the entry condition determiner311, the obstacle determiner312, and the steering angle determiner313determine control entry (On) and the steering angle determiner313determines the type of control (cautious turning or safe turning), the determiner310may transmit the determination as to whether to enter the control mode (On/Off) and the determination of the type of control to the controller320.
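The two thresholds can be computed from Equation 1 together with the geometry ofFIG.6. The sketch below is one way to do this, assuming that a lateral offset of one tread T over the remaining longitudinal distance follows a circular arc (chord geometry); that mapping, the function names, and the 1 m default margin are assumptions, with only the relation of Equation 1 and the D/D−α construction taken from the text.

```python
def outer_wheel_angle(turn_radius_m: float, wheelbase_m: float, tread_m: float) -> float:
    """Equation 1: approximate outer-wheel steering angle [rad] for turning radius R."""
    return wheelbase_m / (turn_radius_m + tread_m / 2.0)

def steering_angle_for_offset(forward_m: float, lateral_m: float,
                              wheelbase_m: float, tread_m: float) -> float:
    """Steering angle that shifts the vehicle laterally by lateral_m while it
    advances forward_m, assuming a circular path through both points."""
    turn_radius_m = (forward_m ** 2 + lateral_m ** 2) / (2.0 * lateral_m)
    return outer_wheel_angle(turn_radius_m, wheelbase_m, tread_m)

def collision_and_safe_angles(distance_to_obstacle_m: float, wheelbase_m: float,
                              tread_m: float, min_safe_margin_m: float = 1.0):
    """Collision angle: lateral shift of one tread T over the full distance D.
    Safe angle: the same lateral shift achieved within D - alpha."""
    collision = steering_angle_for_offset(distance_to_obstacle_m, tread_m,
                                          wheelbase_m, tread_m)
    safe = steering_angle_for_offset(distance_to_obstacle_m - min_safe_margin_m,
                                     tread_m, wheelbase_m, tread_m)
    return collision, safe
```

As a purely illustrative example, with D = 10 m, T = 1.6 m, and L = 2.7 m this yields roughly 0.08 rad for the collision steering angle and roughly 0.10 rad for the safe steering angle, consistent with the safe angle being the larger of the two.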
When control entry is determined to be On and the type of control is determined to be safe turning, the emergency braking OFF controller321of the controller320may turn off the emergency braking function. When control entry is determined to be On and the type of control is determined to be cautious turning, the emergency braking entry distance changer322may change the distance to the forward obstacle at which the emergency braking function is activated to be shorter than a default distance. Accordingly, activation of the emergency braking function may be prevented unless the host vehicle approaches the forward obstacle to the extent that the host vehicle collides with the forward obstacle. When control entry is determined to be On, the powertrain mode controller323may select the powertrain mode depending on the type of control. For example, when the type of control is safe turning, the powertrain mode controller323may change the powertrain mode to the HEV series mode, in which the engine clutch130is opened and charging is performed on the HSG120, in order to enhance launch performance after lane change. Also, when the type of control is cautious turning, the powertrain mode controller323may change the powertrain mode to the EV mode, in which only the drive motor140is used, in order to ensure stable lane change. Of course, this mode change is merely illustrative, and the embodiments are not limited thereto. Hereinafter, a process of controlling activation of the emergency braking function described above will be described with reference toFIG.7. FIG.7is a flowchart showing an example of an emergency braking entry control process according to an embodiment of the present invention. Referring toFIG.7, when the current state of the vehicle is “HEV Ready” (Yes in S701), when the current gear stage is D (Yes in S702), and when an obstacle is present ahead (Yes in S703), the determiner310may determine the heading of the obstacle and the distance to the obstacle (S704). In addition, the determiner310may determine a collision steering angle and a safe steering angle based on the steering angle (i.e. the steering angle input according to manipulation of the steering wheel by the driver) and the distance to the obstacle (S705). Determination of the collision steering angle and the safe steering angle is performed in the same manner as that described above with reference toFIGS.5and6, thus a duplicate description thereof will be omitted. When the input steering angle is not larger than the collision steering angle (No in S706), a collision with the obstacle present ahead is expected. Thus, the determiner310does not perform control of the emergency braking function. On the other hand, when the input steering angle is larger than the collision steering angle (Yes in S706), the determiner310determines whether an obstacle is located in the direction of the input steering angle (S707). When an obstacle is located in the direction of the input steering angle (Yes in S707), the determiner310does not perform control of the emergency braking function. On the other hand, when no obstacle is located in the direction of the input steering angle (No in S707), the determiner310determines the control entry to be On, and compares the input steering angle and the safe steering angle with each other (S708) to determine the type of control. When the input steering angle is larger than the safe steering angle (Yes in S708), the determiner310may determine the type of control to be safe turning (S709A). 
Accordingly, the controller320may turn off the emergency braking function (S710A), and may set the powertrain mode to the HEV series mode (S711A). On the other hand, when the input steering angle is not larger than the safe steering angle (i.e. when the input steering angle is equal to or smaller than the safe steering angle) (No in S708), the determiner310may determine the type of control to be cautious turning (S709B). Accordingly, the controller320may reduce the reference distance at which the emergency braking function is activated (S710B), and may set the powertrain mode to the EV mode (S711B). Despite having been described above with reference to a hybrid electric vehicle, the emergency braking entry control device and process according to the embodiments can also apply to vehicles having powertrains different from that of the hybrid electric vehicle through appropriate modification. For example, in the case of a vehicle equipped with a single type of power source, such as a general internal combustion engine or a general motor, the powertrain mode controller323may be omitted from the configuration shown inFIG.4. Accordingly, steps S711A and S711B may also be omitted from the process shown inFIG.7. The present invention may be implemented as code that can be written on a computer-readable recording medium and thus read by a computer system. The computer-readable recording medium includes all kinds of recording devices in which data that may be read by a computer system are stored. Examples of the computer-readable recording medium include a Hard Disk Drive (HDD), a Solid-State Disk (SSD), a Silicon Disk Drive (SDD), Read-Only Memory (ROM), Random Access Memory (RAM), Compact Disk ROM (CD-ROM), a magnetic tape, a floppy disc, and an optical data storage. As is apparent from the above description, a vehicle associated with at least one embodiment of the present invention, configured as described above, is capable of effectively preventing an emergency braking function from being unnecessarily activated in consideration of the distance to an obstacle present ahead and a driver's steering manipulation. In addition, when embodiments of the present invention are applied to environment-friendly vehicles, it is also possible to effectively control a powertrain mode in consideration of the distance to an obstacle present ahead and a driver's steering manipulation. However, the effects achievable through embodiments of the present invention are not limited to the above-mentioned effects, and other effects not mentioned herein will be clearly understood by those skilled in the art from the above description. It will be apparent to those skilled in the art that various changes in form and details may be made without departing from the spirit and essential characteristics of the invention set forth herein. Accordingly, the above detailed description is not intended to be construed to limit the invention in all aspects and is to be considered by way of example. The scope of the invention should be determined by reasonable interpretation of the appended claims and all equivalent modifications made without departing from the invention should be included in the following claims.
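To recap the process ofFIG.7in compact form, the decision flow from S701 to S711 can be sketched as follows. The returned dictionary keys, the string mode labels, and the helper booleans are assumptions for illustration only; the step logic itself mirrors the description above.

```python
from enum import Enum

class ControlType(Enum):
    NONE = "none"
    CAUTIOUS_TURNING = "cautious turning"
    SAFE_TURNING = "safe turning"

def emergency_braking_entry_control(hev_ready: bool, gear: str, obstacle_ahead: bool,
                                    steering_angle: float, collision_angle: float,
                                    safe_angle: float,
                                    obstacle_in_steering_direction: bool) -> dict:
    """Sketch of the flow of S701-S711: decide the type of control and the actions."""
    actions = {"control": ControlType.NONE, "aeb_enabled": True,
               "aeb_entry_distance": "default", "powertrain_mode": "unchanged"}
    if not (hev_ready and gear == "D" and obstacle_ahead):
        return actions                      # S701-S703 not satisfied
    if steering_angle <= collision_angle:
        return actions                      # collision expected: no control (S706)
    if obstacle_in_steering_direction:
        return actions                      # obstacle on the turning path (S707)
    if steering_angle > safe_angle:         # S708 -> safe turning (S709A-S711A)
        actions.update(control=ControlType.SAFE_TURNING, aeb_enabled=False,
                       powertrain_mode="HEV series")
    else:                                   # S708 -> cautious turning (S709B-S711B)
        actions.update(control=ControlType.CAUTIOUS_TURNING,
                       aeb_entry_distance="reduced", powertrain_mode="EV")
    return actions
```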
11858502 | DESCRIPTION OF EMBODIMENT Hereinafter, a vehicle control device according to an embodiment of the present invention is described with reference to drawings. FIG.1is a block diagram showing the system configuration of a vehicle which includes a vehicle control device according to this embodiment. A vehicle control device100is a brake control device for reducing collision damage to a vehicle according to the embodiment. The vehicle control device100is mounted on a vehicle1(also referred to as an own automobile or an own vehicle hereinafter), and performs a traveling control in which a deceleration control of the vehicle1is included. To the vehicle control device100, a stereoscopic camera20which forms a vehicle external field recognition sensor, a brake control unit30, a power train control unit40, a meter control unit70and the like are connected via communication (for example, car area network (CAN)). The vehicle control device100is formed of a microcomputer which incorporates a CPU, a ROM, a RAM and the like. In this embodiment, the vehicle control device100is provided as a device which realizes avoidance or reduction collision damage by applying braking to the vehicle1by controlling brakes or the like. The vehicle control device100performs various arithmetic operations relating to a control of the vehicle1. The vehicle control device100stops an operation of the microcomputer when an ignition voltage of the vehicle1is lowered, and starts up the microcomputer when the ignition voltage of the vehicle1becomes again a start-up voltage threshold or more, and performs respective control processing. Accordingly, in a state where the ignition voltage is lowered, that is, in an engine stopped state, an operation of control processing is inhibited. The stereoscopic camera20is formed of a pair of left and right cameras each of which uses a solid imaging element such as a charge coupled devices (CCD), for example. The stereoscopic camera20is mounted in the vicinity of a ceiling of a cabin, and images a mode of a road ahead of a vehicle, an obstacle and the like. Stereoscopic image data of the obstacle imaged by the stereoscopic camera20is transferred to an image processing chip disposed in the stereoscopic camera20. The image processing chip acquires parallax information from an image (data), and calculates a distance between the own vehicle1and the obstacle ahead of the vehicle based on the acquired parallax information, and further, calculates a relative speed by differentiating the calculated distance corresponding to an elapsed time. Further, the image processing chip calculates a lateral position of the imaged obstacle with respect to the own vehicle1, and calculates a lateral speed by differentiating the lateral position corresponding to an elapsed time. Further, using the image processing chip, pattern matching is performed with respect to image data based on the shape and the size of the obstacle so that the obstacle is classified into a pedestrian, a bicycle, a vehicle, other stopped obstacles and the like. The distance between the obstacle and the own vehicle1calculated in this manner, the relative speed, the lateral position, the lateral speed, and a type of the obstacle are transmitted to the vehicle control device100via a CAN or the like. A rear stereoscopic camera25and side stereoscopic cameras27perform image processing and pattern matching in the same manner as the stereoscopic camera20, and acquire information on a stereoscopic object around the own vehicle. 
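The distance and relative-speed derivation performed by the image processing chip described above follows the usual stereo relation and a time derivative of the measured distance. A minimal sketch, assuming pinhole-camera parameters (focal length in pixels and stereo baseline) that the text does not give:

```python
def distance_from_parallax(parallax_px: float, focal_length_px: float,
                           baseline_m: float) -> float:
    """Standard stereo relation Z = f * B / d; the camera parameters here are
    assumptions, the text only states that distance is derived from parallax."""
    return focal_length_px * baseline_m / parallax_px

def relative_speed(prev_distance_m: float, curr_distance_m: float, dt_s: float) -> float:
    """Relative speed obtained by differentiating the distance over the elapsed
    time between frames; negative values mean the obstacle is approaching."""
    return (curr_distance_m - prev_distance_m) / dt_s
```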
The acquired information is transmitted to the vehicle control device100via the CAN or the like. However, the rear stereoscopic camera25and the side stereoscopic cameras27differ from the stereoscopic camera20with respect to the installing direction such that the rear stereoscopic camera25is installed so as to image the road behind the vehicle1and a mode of the obstacle, and the side stereoscopic cameras27are installed so as to image the mode of roads on left and right sides of the own vehicle1and the obstacle. The brake control unit30is connected to the vehicle control device100, and performs deceleration (braking) of the vehicle1by generating a friction between a wheel and a brake60which is connected to the brake control unit30, specifically, by applying a pressure to a disk brake or a drum brake based on brake control information acquired from the vehicle control device100. Further, the brake control unit30is connected to a brake pedal61, a wheel speed sensor62, a yaw rate sensor64, a steering angle sensor65and an acceleration sensor66respectively. The brake control unit30performs the measurement of a driving situation of the vehicle1such as an own vehicle speed, and a driving operation situation of a driver such as a brake manipulation of the driver, a steering angle manipulation variable and the like, and transmits a result of the measurement to the vehicle control device100. The power train control unit40is connected to an engine41and a transmission42respectively. The power train control unit40measures an engine torque and a speed reduction ratio of the transmission using information acquired from the engine41and the transmission42, and transmits a result of the measurement to the vehicle control device100. The meter control unit70is connected to a display device71and a buzzer72respectively. The meter control unit70performs notification warning or the like through visual or audio sense of a driver by operating the display device71or the buzzer72in response to a notification request acquired from the vehicle control device100via communication. Next, control processing of the vehicle control device100is described with reference toFIG.2. FIG.2is a functional block diagram which illustrates control processing performed by the vehicle control device according to this embodiment. As shown in the drawing, the vehicle control device100according to this embodiment is formed of a communication data acquisition unit110, a communication data output unit120, a deflection estimation unit200, a travelability determination unit300, an emergency brake operation determination unit500, a collision warning determination unit700, and a braking control unit800. The braking control unit800includes a deceleration limit releasing calculation unit400and an emergency brake deceleration calculation unit600. To describe the respective functional blocks of the vehicle control device100schematically, the communication data acquisition unit110acquires: a type of an obstacle, a longitudinal direction of the obstacle and the own vehicle1, a distance in the lateral direction and a relative speed acquired from the stereoscopic camera20and the rear stereoscopic camera25; and a driving situation of the vehicle1such as an own vehicle speed, a brake manipulation of a driver and the like acquired from the brake control unit30and the power train control unit40. 
Next, the deflection estimation unit200estimates whether the generation of acceleration/deceleration (particularly a brake force) with respect to the own vehicle1brings about the generation of deflection (a change in the advancing direction) using information acquired by the communication data acquisition unit110. The travelability determination unit300determines a non-travelable area and a travelable area of the own vehicle1(particularly an area disposed ahead of the own vehicle1on left and right sides) using information acquired by the communication data acquisition unit110in the same manner. Then, the deceleration limit releasing calculation unit400of the braking control unit800decides, based on the results acquired from the deflection estimation unit200and the travelability determination unit300, an upper limit of the vehicle deceleration generated on the own vehicle1(hereinafter, also referred to as a deceleration limit value or a limit deceleration) from a relationship between the travelable area and the deflection with respect to the vehicle deceleration. In this case, in a situation where the generation of the deceleration does not bring about the generation of deflection or in a situation where the generation of deflection brought about by the generation of deceleration does not bring about intrusion of the vehicle1into the non-travelable area, the upper limit of the vehicle deceleration is set to a non-limited value, for example, a large value such as 15 [m/s2]. On the other hand, when the generation of the vehicle deceleration momentarily brings about the generation of large deflection of the vehicle1and the travelable area is narrow, the upper limit of the vehicle deceleration is set to a value which is liable to be easily limited, for example, a small value such as 6 [m/s2]. Further, the limit amount (value) is updated at a short cycle, for example, at an interval of 50 ms. The limit amount (value) is set to a larger value as the distance between the obstacle and the own vehicle1becomes shorter, because the area in which the deflection is generated between the obstacle and the own vehicle1is narrowed; that is, the shorter the distance, the smaller the restriction applied to the vehicle deceleration becomes. By setting the limit amount (value) in this manner, it is possible to apply a large deceleration force while suppressing a deflection amount. Next, the emergency brake operation determination unit500determines, using information acquired by the communication data acquisition unit110, whether a positional relationship and a speed relationship which have a possibility of generating a collision between the obstacle and the own vehicle1are established, and determines that the emergency brake (also referred to as an automatic brake or collision damage alleviation brake) is to be operated in a situation where there is a possibility of the collision. Next, the emergency brake deceleration calculation unit600of the braking control unit800determines the presence or the absence of an operation of the emergency brake first. When it is determined that there is no operation of the emergency brake, the deceleration at the time of applying an emergency brake is set to a value which requires no deceleration.
On the other hand, when there is the operation of the emergency brake, the emergency brake deceleration calculation unit600calculates a deceleration force (deceleration, deceleration start timing and the like) necessary for avoiding collision damage corresponding to a positional relationship and a speed relationship between an obstacle and the own vehicle1. When a collision cannot be avoided, the emergency brake deceleration calculation unit600calculates a deceleration force (deceleration, deceleration start timing and the like) which enables the reduction of collision damage. Further, at this point of time, the emergency brake deceleration calculation unit600limits the deceleration at the time of applying the emergency brake which is a calculation result by a deceleration limit value acquired from the deceleration limit releasing calculation unit400. In this embodiment, the combination of the deceleration limit releasing calculation unit400and the emergency brake deceleration calculation unit600described above is referred to as the braking control unit800. The braking control unit800controls braking (deceleration) of the vehicle1by changing the deceleration at a time of applying an emergency brake of the vehicle1or deceleration start timing based on travelability determination result acquired by the travelability determination unit300and a deflection estimation result acquired by the deflection estimation unit200. Next, the collision warning determination unit700determines whether an obstacle is at a position and has a speed relationship so that approaching of the obstacle to the own vehicle1is to be warned using information acquired by the communication data acquisition unit110. Then, the communication date output unit120converts deceleration and the like at the time of applying the emergency brake which is the result acquired by the emergency brake deceleration calculation unit600and warning information which is the result acquired by the collision warning determination unit700into data in conformity with a communication protocol of the vehicle1, for example, a format of CAN, and transmits the converted data to the CAN or the like of the vehicle1. In this manner, the vehicle control device100(the respective functional blocks of the vehicle control device100) performs the processing described above. In the vehicle control device100, as described in the flowchart shown inFIG.3, processing ranging from P110to P120are performed in order from P110. Further, the flowchart shown inFIG.3indicates that a vehicle control can be performed in conformity with an environment around the vehicle, a driving situation of the vehicle and a driving manipulation which change every moment by repeat performing the processing at a short cycle, for example, a cycle of 50 ms during a period that the microcomputer is being operated. Hereinafter, processing from P110to P120indicated in the flowchart shown inFIG.3are described in detail. (Communication Data Acquisition Processing P110) First, the communication data acquisition processing P110is described. In the communication data acquisition processing P110performed by the communication data acquisition unit110, a type of obstacle, a longitudinal direction of an obstacle and the own vehicle1, a distance in a lateral direction and a relative speed are acquired from the stereoscopic camera20and the rear stereoscopic camera25. 
A driving situation of the vehicle 1 such as an own vehicle speed and a driving manipulation situation such as a brake manipulation of a driver are acquired from the brake control unit 30. Such acquired values are converted into data so that the values can be used in processing P200 and the succeeding processing. When a noise occurs in communication data so that the data is changed into abnormal data, error detection is applied to the communication data by a cyclic redundancy check (CRC), parity, or a checksum, and the abnormal data is discarded so as to prevent the propagation of the abnormal data to processing P200 and the succeeding processing. Further, under a situation where information of a sensor takes an abnormally large value or an abnormally small value, a system is incorporated which prevents runaway of a control and the occurrence of abnormal processing by limiting the numerical value.
(Deflection Estimation Processing P200)
Next, the deflection estimation processing P200 is described with reference to FIG. 4. In the deflection estimation processing P200 by the deflection estimation unit 200, first, a weight of a loaded object on the vehicle 1 is estimated. For this estimation, a traveling resistance of the own vehicle 1 is estimated in processing P210. An estimated value of the traveling resistance can be calculated as a sum of an air resistance acquired from a speed of the own vehicle and the shape of the vehicle (air resistance characteristic), a rolling resistance acquired from a width of a tire, a gradient resistance acquired from a road surface gradient, and a cornering resistance acquired based on the generation of lateral acceleration. Next, an own vehicle total weight is estimated in processing P220. The own vehicle total weight is a total value of a weight of the vehicle 1 and a weight of a loaded object. The loaded object indicates a load or occupants including a driver. An estimated weight of the own vehicle 1 is acquired based on an engine torque, a speed reduction ratio of a transmission, an estimated value of the traveling resistance, a dynamic radius of a tire, and a longitudinal direction acceleration of the own vehicle (hereinafter, simply described as own vehicle acceleration). The own vehicle acceleration used in this processing is calculated from a change amount of the own vehicle speed for every cycle which is acquired based on a value of the wheel speed sensor 62. For example, the estimated weight of the own vehicle (the own vehicle total weight) is expressed by the following formula (1).

Own vehicle total weight = (engine torque × speed reduction ratio ÷ tire dynamic radius − traveling resistance estimated value) ÷ own vehicle acceleration   (1)

There is a tendency that values of the engine torque, the speed reduction ratio, the traveling resistance estimated value, and the own vehicle acceleration are largely changed due to the deviation of measurement timing or the generation of a sensor noise. Accordingly, a sudden change of the own vehicle total weight is prevented by performing primary delay filter processing after the own vehicle total weight is acquired using the formula (1).
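The calculation of the formula (1) and the primary delay filter processing described above can be pictured, for example, by the following sketch (a minimal illustration in Python; the function and variable names are chosen here only for explanation, and the filter coefficient is an assumed example value, not a value specified by this embodiment).

    def estimate_own_vehicle_total_weight(engine_torque, speed_reduction_ratio,
                                          tire_dynamic_radius, traveling_resistance,
                                          own_vehicle_acceleration):
        # Formula (1): driving force minus traveling resistance, divided by acceleration.
        driving_force = engine_torque * speed_reduction_ratio / tire_dynamic_radius
        return (driving_force - traveling_resistance) / own_vehicle_acceleration

    class PrimaryDelayFilter:
        # First-order (primary delay) filter which suppresses a sudden change of the
        # estimated total weight caused by measurement-timing deviation or sensor noise.
        def __init__(self, filter_coefficient=0.1):   # assumed example coefficient
            self.filter_coefficient = filter_coefficient
            self.filtered_value = None

        def update(self, raw_value):
            if self.filtered_value is None:
                self.filtered_value = raw_value
            else:
                self.filtered_value += self.filter_coefficient * (raw_value - self.filtered_value)
            return self.filtered_value

For example, the raw value from the formula (1) would be passed to update() every control cycle (for example, every 50 ms), and the filtered value would be used as the own vehicle total weight in the succeeding processing; updating may be stopped under the scenes a to e described next.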
Further, in the following scenes a to e, the estimation of the weight cannot be accurately performed using the formula (1) and hence, it is preferable to provide a condition which stops updating of the own vehicle total weight.
(scene a) a case where an engine torque is small
(scene b) a case where a gear position is not at an advancing position (drive range)
(scene c) a case where a steering angle is largely inclined either to a left side or a right side
(scene d) a case where a deceleration force by a brake is generated
(scene e) a case where a deceleration force is generated by an auxiliary brake
Next, a load weight is acquired in processing P230. A weight of the vehicle excluding the weight of the loaded object can be acquired by storing the weight of the vehicle in the ROM as a parameter set in advance for every vehicle type based on vehicle data of the mounting vehicle type. Further, the weight of the loaded object can be acquired by the following formula (2) from the formula (1) and the weight of the vehicle read from the ROM.

Load weight = own vehicle estimated weight − weight of vehicle   (2)

Next, the distributed weights applied to the left and right suspensions of the vehicle 1 are estimated. In estimating the weight distribution, first, in processing P240, shrunken lengths of the left and right suspensions (left and right suspension shrunken lengths) are acquired from a roll angle of the own vehicle 1. A method of acquiring a roll angle of the own vehicle 1 is described with reference to FIG. 5. An image described in a PCT1 in FIG. 5 is an image of an area disposed ahead of the vehicle 1 when the image is imaged by the stereoscopic camera 20 in a case where the weight distribution is performed uniformly on the left and right sides of the vehicle 1. On the other hand, an image described in a PCT2 in FIG. 5 is an image of the area disposed ahead of the vehicle 1 when the image is imaged by the stereoscopic camera 20 in a case where a load weight is offset to the right side of the vehicle 1. Compared to the PCT1, in the PCT2, a right side of a screen is lowered with respect to a left side of the screen. A ground horizontal plane is extracted from the image by image processing. When the vehicle 1 has no roll angle, the position of the ground horizontal plane becomes the position on a line HL2. However, when a loading situation is offset to the right side so that a roll angle is generated in the vehicle 1, the position of the ground horizontal plane becomes the position on a line HL1. Accordingly, the roll angle of the own vehicle 1 can be acquired by calculating an angle θ made by the line HL1 and the line HL2. In this case, assuming an advancing direction of the vehicle 1 as a front side, the roll angle is expressed as a positive value when the left side is lowered, and is expressed as a negative value when the right side is lowered. In the case shown in the PCT2, when an absolute value of the angle θ is 5 [deg], for example, the roll angle becomes −5 [deg]. Due to pitching movement at a time of accelerating or decelerating the vehicle 1 in a state where a load is loaded on the own vehicle 1, an error becomes large when the angle θ is acquired from one image. Accordingly, updating of the roll angle is performed only at a timing when the acceleration or deceleration of the own vehicle is small. Also in a case where vertical vibrations or a change in roll angle are generated due to unevenness of a road surface during traveling, an error becomes large when the angle θ is acquired from one image.
Accordingly, it is desirable that a change in the roll angle be made gentle by applying a primary delay filter or moving average processing to the roll angle. The processing which estimates the roll angle is performed in the stereoscopic camera 20, and the calculated roll angle is transmitted to the vehicle control device 100 via communication, and is used in processing P240. In the same manner as the weight of the vehicle, a distance between the left and right tires (tread) is stored in the ROM as a parameter set in advance for every vehicle type based on vehicle data of the mounting vehicle type. In processing P240, the difference between the shrunken lengths of the left and right suspensions (left and right suspension shrunken length difference) is acquired based on the roll angle and the tread. Specifically, the difference between the shrunken lengths of the left and right suspensions is acquired by the following formula (3).

Left and right suspension shrunken length difference = tread × tan (roll angle)   (3)

When the right suspension is shrunken, the left and right suspension shrunken length difference takes a negative value, and the shrunken length of the right suspension becomes longer than the shrunken length of the left suspension by the amount of the left and right suspension shrunken length difference. For example, in a case where the left and right suspension shrunken length difference is −0.02 [m] and the shrunken length of the left suspension is 0.05 [m], the shrunken length of the right suspension becomes 0.05 [m] − (−0.02 [m]) = 0.07 [m]. Next, in processing P250, the weight difference which is generated on the left and right wheels is calculated based on the left and right suspension shrunken length difference. The left and right suspension shrunken length difference is proportional to the difference between the weights supported by the respective left and right suspensions, and the characteristics of the suspensions depend on the mounting vehicle type. Accordingly, the relationship between the weight difference and the left and right suspension shrunken length difference is measured in advance by an experiment or the like, and the measured relationship is stored in the ROM in advance as a parameter of table values. Then, in processing P250, the left and right weight difference is acquired by looking up the table values using the left and right suspension shrunken length difference acquired in processing P240. Next, in processing P260, the weights which are uniformly distributed to the left and right wheels are calculated. The distributed weights on the left and right wheels are acquired by the following formula (4).

Distributed weight = (own vehicle total weight − weight difference) ÷ 2   (4)

Next, in processing P270, a tilting direction of the own vehicle 1 (with respect to a road surface) is determined based on the roll angle, and which of the left and right wheels supports the weight difference is determined. In a case where the roll angle is larger than 0 [deg], the weight is supported by the left wheel and hence, the processing advances to processing P280 where the left wheel weight is set to the distributed weight + the weight difference, and the right wheel weight is set to the distributed weight. In a case where the roll angle is less than 0 [deg] in processing P270, the weight is supported by the right wheel and hence, the processing advances to processing P290 where the right wheel weight is set to the distributed weight + the weight difference, and the left wheel weight is set to the distributed weight. In this manner, the left wheel weight and the right wheel weight are estimated as a loaded state of the vehicle 1, that is, as the deflection estimation result (the result of estimating the deflection of the vehicle 1 which would be generated by a brake force applied to the vehicle 1), and this estimation is used in the following processing.
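The flow of processing P230 to P290 described above (formulas (2) to (4) and the determination of which wheel supports the weight difference) may be summarized, for example, as follows (a simplified Python sketch; the table lookup of processing P250 is replaced here by a placeholder callable, and all names are illustrative assumptions).

    import math

    def estimate_left_right_wheel_weights(own_vehicle_total_weight, vehicle_weight,
                                          roll_angle_deg, tread,
                                          weight_difference_table):
        # Formula (2): weight of the loaded object.
        load_weight = own_vehicle_total_weight - vehicle_weight

        # Formula (3): left and right suspension shrunken length difference.
        shrunken_length_difference = tread * math.tan(math.radians(roll_angle_deg))

        # Processing P250: look up the left/right weight difference from a table
        # prepared in advance for the mounting vehicle type (placeholder here).
        weight_difference = weight_difference_table(shrunken_length_difference)

        # Formula (4): weight uniformly distributed to the left and right wheels.
        distributed_weight = (own_vehicle_total_weight - weight_difference) / 2.0

        # Processing P270 to P290: the wheel on the lowered side supports the difference.
        if roll_angle_deg > 0.0:   # left side lowered
            left_wheel_weight = distributed_weight + weight_difference
            right_wheel_weight = distributed_weight
        else:                      # right side lowered (or no roll)
            right_wheel_weight = distributed_weight + weight_difference
            left_wheel_weight = distributed_weight
        return load_weight, left_wheel_weight, right_wheel_weight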
(Travelability Determination Processing P300)
Next, in describing the travelability determination processing P300 of the own vehicle 1, the definition of a travelable area (or non-travelable area) is described with reference to FIG. 6. FIG. 6 is a bird's eye view showing one example of a situation of the own vehicle 1 and the surrounding of the own vehicle 1 for describing a travelable area of the own vehicle 1. When the own vehicle 1 advances, the stopped vehicle 2 exists ahead of the own vehicle. In a case where sudden braking is applied to the own vehicle 1, when the center of gravity of the own vehicle 1 is offset to either a left side or a right side, an area A1 which is indicated by hatched lines is an area having a possibility that the own vehicle 1 advances into the area. With respect to the inside of such an area, it is estimated that stereoscopic objects which move, such as an oncoming vehicle 4, another vehicle 3 which travels in the same direction as the own vehicle 1 on a neighboring lane (hereinafter described as a neighboring vehicle 3), a pedestrian 5, and a bicycle or a motorcycle not shown in the drawing, enter the area. Then, an area which it is assumed that these stereoscopic objects enter at a point of time (timing) at which the own vehicle 1 reaches the area, and an area in which it is estimated that these stereoscopic objects already exist and will exist until the point of time (timing) at which the own vehicle 1 will reach the area, are determined to be a non-travelable area. For example, it is estimated that the oncoming vehicle 4, the neighboring vehicle 3 and the pedestrian 5 move to positions (an oncoming vehicle 4a, a neighboring vehicle 3a, a pedestrian 5a) shown in FIG. 7 after a lapse of a fixed time, and enter the area A1. An estimation method used here will be described later. Further, besides the objects which move, a guardrail 6, a utility pole 7, and stereoscopic objects which do not move, such as a wall, blocks which separate a sidewalk and a vehicle road, and a road sign which are not shown in the drawing, are also determined as a non-travelable area when these objects exist in the area indicated by A1. Further, with respect to the area A1, also in a case where a surface which is considerably lower than a traveling surface of the own vehicle 1, that is, an area such as a cliff or a groove, exists (in other words, an area where a road surface on which the own vehicle 1 is travelable does not exist), such an area is determined as a non-travelable area. A portion of the area A1 which is not determined as the above-mentioned non-travelable area is determined as the travelable area. Further, when a travelable area is disposed remoter than a non-travelable area as viewed from the own vehicle 1, that area is also determined as a non-travelable area.
The travelable area in the case shown in FIG. 7 becomes an area A2 indicated by hatched lines in FIG. 8. With respect to the area indicated by A1, an area which does not overlap with the area indicated by A2 is determined as the non-travelable area. Further, on software, in classifying the travelable area and the non-travelable area, as shown in FIG. 9, an area disposed ahead of the own vehicle 1, that is, a range of 100 [m] × 20 [m] which has a size of 100 [m] in the frontward direction and a size of 10 [m] in each of the left and right directions, is expressed as a two-dimensional array by dividing the range in a grid shape. In a case where the vehicle is travelable, a value of 0 is set in the array, and in a case where the vehicle is not travelable, a value other than 0 is set. Further, a coordinate system of the respective array positions with respect to the own vehicle 1 (travelability determination array) is described with reference to FIG. 10. First, regarding the lateral position with respect to the own vehicle 1, the lateral center position of the own vehicle 1 is set as zero, the left direction is set as "positive" and the right direction is set as "negative". Regarding the longitudinal direction position, a distal end position (a front end position) of a front bumper of the own vehicle 1 is set as 0, and the direction away from the own vehicle 1 in the frontward direction is set as "positive". With respect to a stereoscopic object or an obstacle for determining a non-travelable area, taking the stopped vehicle 2 which is the obstacle as an example, the positional information with respect to the own vehicle 1 is indicated by the position expressed by 2Pos. The lateral position of 2Pos indicates the center of the object in the lateral width, and the longitudinal direction position is the position of a rear end of the stopped vehicle 2. Further, in the drawing, the lateral width is defined as a length 2y and the longitudinal width is defined as a length 2x. Next, the travelability determination processing P300 is described with reference to FIG. 11. In the travelability determination processing P300 performed by the travelability determination unit 300, first, in processing P310, the entire area of the travelability determination array is initialized to a value of zero as a state where the vehicle is travelable. Next, in processing P320, the number of objects detected by the stereoscopic camera 20 and the like is acquired, and the number of objects is set as a variable n. Here, the number of objects is a total number of moving bodies such as the oncoming vehicle 4, the neighboring vehicle 3 and pedestrians, stereoscopic bodies such as the guardrail 6 and the utility pole 7, and cliffs and grooves, all of which are acquired from the vehicle external field recognition sensors such as the stereoscopic camera 20. Next, the processing advances to processing P330 where it is checked whether the variable n is larger than 0. When the variable n is 0 or less, it is determined that no unprocessed objects remain, and processing P300 performed by the travelability determination unit 300 is finished. When the variable n is larger than 0, the processing advances to P340 and, thereafter, processing P350 is performed. Further, after processing P340 and processing P350 are performed, the variable n is decremented by 1 in processing P360, and the processing returns to processing P330. Accordingly, all objects detected by the stereoscopic camera 20 and the like are sequentially processed.
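The travelability determination array and the object loop of processing P310 to P360 described above can be pictured, for example, by the following sketch (Python; the grid size and resolutions follow the examples given in this embodiment, while the per-object update is delegated to caller-supplied functions standing in for processing P340 and P350).

    LONGITUDINAL_RESOLUTION = 0.5   # [m] per array element, example value
    LATERAL_RESOLUTION = 0.1        # [m] per array element, example value
    LONGITUDINAL_RANGE = 100.0      # [m] ahead of the front bumper
    LATERAL_RANGE = 10.0            # [m] to each of the left and right sides

    def make_travelability_array():
        # Processing P310: 0 means travelable, a value other than 0 means non-travelable.
        rows = int(LONGITUDINAL_RANGE / LONGITUDINAL_RESOLUTION)
        cols = int(2 * LATERAL_RANGE / LATERAL_RESOLUTION)
        return [[0] * cols for _ in range(rows)]

    def update_travelability(array, detected_objects, predict_motion, place_object):
        # Processing P320 to P360: process every detected object in turn.
        n = len(detected_objects)
        while n > 0:
            obj = detected_objects[n - 1]
            predicted = predict_motion(obj)      # corresponds to processing P340
            place_object(array, predicted)       # corresponds to processing P350
            n -= 1
        return array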
In processing P340, a moving position of the object detected by the stereoscopic camera 20 and the like is estimated. In the case shown in FIG. 7, moving of the oncoming vehicle 4 to the position of the oncoming vehicle 4a is predicted. In the prediction of the moving (a moving speed and a moving direction), an object which is a subject of this processing is imaged by the stereoscopic camera 20, and an image acquired as a result of such imaging is processed so that a parallax image is acquired. As a result, with respect to a stereoscopic object, the position of the object in the longitudinal direction with respect to the own vehicle 1 can be acquired from an amount of parallax. Further, by determining at which position in the lateral direction of the image the object is imaged, the position of the object in the lateral direction with respect to the own vehicle 1 can be acquired. Then, the longitudinal direction position and the lateral direction position are acquired continuously along with a lapse of time, and a difference between the longitudinal direction position and the lateral direction position at predetermined timing and the longitudinal direction position and the lateral direction position which were acquired a fixed time before the predetermined timing is taken, so that moving speeds of the object which is the subject of the processing in the longitudinal direction and the lateral direction with respect to the own vehicle 1 can be acquired. Further, by performing pattern matching on the objects in the imaged image, the objects are classified into moving bodies such as a pedestrian, a back surface of a vehicle, a front surface of a vehicle, a side surface of a vehicle, and a bicycle or a motorcycle, and objects of a type which are not moving bodies, such as stereoscopic objects including guardrails or utility poles. Such information is measured and determined by the stereoscopic camera 20 with respect to the respective objects, and is transmitted to the vehicle control device 100. From the transmitted information on the objects, first, the moving speeds of the objects are corrected based on the result of classification into the pedestrian, the back surface of the vehicle and the like. For example, in the case of an object which matches the pedestrian, it is predicted that the moving speed may change to 4 [km/h] in any one direction among the frontward direction, the backward direction, the left direction and the right direction even from a state where the pedestrian has not moved. Accordingly, the moving speeds in the frontward direction, the backward direction, the left direction and the right direction are corrected such that the object moves at a speed of 4 [km/h] with respect to the area A1. Further, for example, in a case where the moving object is classified as the back surface of a vehicle, although it can be predicted that the moving speed changes in the frontward direction, the movement of the object in the left or right direction is extremely small and a possibility that the moving object changes its movement to the rearward direction is low. Accordingly, a moving speed of the moving object is corrected such that the moving object moves in the frontward direction of the own vehicle 1. By acquiring the position and the speed of the moving body in this manner, the position of the object can be estimated while taking into account a lapse of time.
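The class-dependent correction of the moving speed described for processing P340 may be sketched, for example, as follows (Python; the 4 [km/h] pedestrian speed follows the example above, while the structure of the object record, the class labels and the way the four directions are combined are simplifying assumptions made only for illustration).

    PEDESTRIAN_SPEED_MPS = 4.0 / 3.6   # 4 [km/h] converted to [m/s]

    def predict_object_position(obj, elapsed_time_s):
        # obj is assumed to hold: classification, longitudinal/lateral position [m]
        # and longitudinal/lateral speed [m/s] relative to the own vehicle.
        vx, vy = obj["longitudinal_speed"], obj["lateral_speed"]

        if obj["classification"] == "pedestrian":
            # A stationary pedestrian may start moving at about 4 [km/h] in any of
            # the front, back, left or right directions; assume at least that speed.
            vx = max(abs(vx), PEDESTRIAN_SPEED_MPS) * (1 if vx >= 0 else -1)
            vy = max(abs(vy), PEDESTRIAN_SPEED_MPS) * (1 if vy >= 0 else -1)
        elif obj["classification"] == "vehicle_back_surface":
            # A preceding vehicle mainly moves forward; lateral and rearward motion
            # are treated as negligible.
            vy = 0.0
            vx = max(vx, 0.0)
        elif obj["classification"] in ("guardrail", "utility_pole", "cliff", "groove"):
            # Objects which are not moving bodies are not predicted to move.
            vx = vy = 0.0

        return (obj["longitudinal_position"] + vx * elapsed_time_s,
                obj["lateral_position"] + vy * elapsed_time_s)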
In performing the prediction of the moving, when the object is not the moving body as a result of classification, it is unnecessary to predict moving and hence, the correction is not performed. With respect to the object which is not a stereoscopic object such as a cliff or a groove, a parallax amount is decreased compared to a case where a plane having the same height as a plane on which the own vehicle1travels is imaged on an image so that a result is acquired that the object is remote from the own vehicle1. The existence of the cliff or the groove is detected based on the result, and the object is detected as an object for determining a non-travelable area of the own vehicle1. Next, in processing P350, with respect to the travelability determination array, the object which is processed in processing P340is arranged based on the longitudinal direction position and the lateral direction position with respect to the own vehicle1, the longitudinal direction moving speed, the lateral direction moving speed, a longitudinal width and a lateral width of the object and a type of object so that the travelability determination array is updated. The content of processing P350is described with reference toFIG.12. First, in processing P351, the longitudinal arrangement position of the object is calculated. The longitudinal arrangement position can be acquired by dividing the longitudinal position by a longitudinal resolution and by rounding up a value acquired by division to the nearest integer. The longitudinal resolution used at this time becomes a width per one arrangement in the longitudinal arrangement position direction of the travelability determination array, and is defined as a constant such as 0.5 [m], for example. Next, the processing advances to processing P352where the left end position and the right end position of the object are calculated. The left end position is the position which is acquired by adding a half of the lateral width to the lateral position, and the right end position is a position which is acquired by subtracting the half of the lateral width from the lateral position. Next, the processing advances to processing P353where the lateral width is divided by a lateral resolution and a value acquired by the division is rounded up to the nearest integer to obtain the lateral arrangement width. The lateral resolution used at this time becomes a width per one arrangement in the lateral arrangement position direction of the travelability determination array in the same manner as the longitudinal resolution, and is defined as a constant such as 0.1 [m], for example. In the same manner, the longitudinal width is divided by the longitudinal resolution, and a value acquired by division is rounded up to the nearest integer to obtain the longitudinal arrangement width. Next, the processing advances to processing P354where as a determination threshold used in determination P355and determination P358, a value which is a half of the vehicle width (lateral width) is set. Next the processing advances to the determination P355where it is determined whether the right end position of the object is smaller than the determination threshold, that is, whether the right end position of the object is on a right side of the left end position of the own vehicle1. As a result of the determination, when the right end position of the object is on the right side of the left end position of the own vehicle1, the processing advances to processing P356. 
When the right end position of the object is on the left side of the left end position of the own vehicle1, the processing advances to processing P356a. In processing P356, as the lateral arrangement position indicating the position of the object on the arrangement, the left end position is divided by the lateral resolution, and a value acquired by the division is rounded up to the nearest integer to obtain the lateral arrangement position. Next, the processing advances to processing P357where the travelability determination array is updated. In right direction non-travelable line updating processing in processing P357, by setting the lateral arrangement position calculated in processing P356and the longitudinal arrangement position calculated in processing P351as a start point of the arrangement position, values of the arrangement to the lateral arrangement width calculated in processing P353toward the right direction, that is, toward the negative direction are set non-travelable. Next, in determination P358, it is determined whether the left end position is larger than a negative determination threshold, that is, whether the left end position of the object is on a left side of the right end position of the own vehicle1. In the determination P358, the determination is made only when the right end position of the object is on a right side of the left end position of the own vehicle1as a result of determination355. The determination made in determination P358that the left end position of the object is on the left side of the right end position of the own vehicle1means that when the own vehicle1advances straight forward, an object exists at a collision position. In this case, a surface of the object which is determined as a non-travelable area becomes only a back surface of the object. Since the non-travelable area which indicates the back surface of the object is set in processing P357and hence, processing P350is finished. On the other hand, in a case where it is determined that the left end position of the object is not on the left side of the right end position of the own vehicle1in the determination P358, when the direction of the own vehicle1changed to a right side, there is a possibility that the own vehicle1advances toward a left side surface of the object and hence, the processing advances to processing P359. In processing P359, a non-travelable area in the longitudinal direction is set with respect to the travelability determination array. By setting the lateral arrangement position calculated in processing P356and the longitudinal arrangement position calculated in processing P351as a start point of the arrangement position, values of the arrangement to the longitudinal arrangement width calculated in processing P353toward the front direction, that is, toward the positive direction are set non-travelable, and processing P350is finished. In a case where it is determined that the right end position of the object is not on the right side of the left end position of the own vehicle1in determination P355, the processing advances to processing P356a. In processing P356aand processing P357a, the lateral direction of the processing performed in processing P356and processing P357is reversed, and in processing P356a, as the lateral arrangement position which indicates the position of the object on the arrangement, the right end position is divided by a lateral resolution, and a value acquired by the division is rounded up to the nearest integer to obtain the lateral arrangement position. 
Next, the processing advances to processing P357a. In the left direction non-travelable line updating processing in processing P357a, by setting the lateral arrangement position calculated in processing P356a and the longitudinal arrangement position calculated in processing P351 as a start point of the array position, values of the array over the lateral arrangement width calculated in processing P353 toward the left direction, that is, toward the positive direction, are set non-travelable. Next, the processing advances to processing P359a, where a non-travelable area in the longitudinal direction is set with respect to the travelability determination array. By setting the lateral arrangement position calculated in processing P356a and the longitudinal arrangement position calculated in processing P351 as a start point of the array position, values of the array over the longitudinal arrangement width calculated in processing P353 toward the front direction, that is, toward the positive direction, are set non-travelable, and processing P350 is finished. Further, the position of the start point is already set non-travelable in processing P357 in a case where processing P359 is performed, and in processing P357a in a case where processing P359a is performed. Accordingly, it is unnecessary to set the start point non-travelable again and hence, it is preferable that the updating be started from the array position which is incremented by plus 1 in the frontward direction from the start point. As a way of setting the travelability determination array, the processing of the determination P355 and the processing succeeding to the determination P355 are somewhat complicated. The same role could also be performed by setting the travelability determination array non-travelable such that a quadrangular shape is depicted by the lateral arrangement width and the longitudinal arrangement width acquired in processing P353, using, as a start point, the longitudinal arrangement position and the lateral arrangement position acquired by converting the left end position calculated in processing P352 in processing P356. However, the vehicle control device 100 is generally required to keep the increase of a processing load to a minimum so that the generation of heat of the microcomputer is suppressed, and an inexpensive microcomputer is desirably usable even when its processing performance is low. Accordingly, it is desirable to adopt the configuration succeeding to the determination P355 such that only a surface of the object facing the own vehicle is set as a non-travelable area. The travelability determination processing P300 performed by the travelability determination unit 300 has been described. In this embodiment, the travelability determination processing P300 is mainly performed with respect to an area disposed ahead of the vehicle 1 on left and right sides of the vehicle 1. The area disposed ahead of the vehicle 1 on left and right sides of the vehicle 1 is formed of: an area disposed ahead of the vehicle 1 on a left side of the vehicle 1; and an area disposed ahead of the vehicle 1 on a right side of the vehicle 1, both of which are detected by the stereoscopic camera 20 and the like (vehicle external field recognition sensors) which monitor the front side of the vehicle 1 (a traveling environment of the vehicle 1) and are mounted on the vehicle 1. The area disposed ahead of the vehicle 1 on the right side mainly indicates an area which is on the front side of the front end of the vehicle 1 and on the right side of the right end of the vehicle 1, and the area disposed ahead of the vehicle 1 on the left side mainly indicates an area which is on the front side of the front end of the vehicle 1 and on the left side of the left end of the vehicle 1.
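The update of the travelability determination array for one object (processing P351 to P359a) may be sketched in simplified form, for example, as follows (Python; the mapping from signed lateral positions to array columns is an assumption made only for illustration, and in line with the intention described above only the surface of the object facing the own vehicle is marked).

    import math

    def cells(value, resolution):
        # Round up to the nearest integer number of array elements.
        return int(math.ceil(value / resolution))

    def mark_object(array, lon_pos, lat_pos, lon_width, lat_width,
                    own_vehicle_width, lon_res=0.5, lat_res=0.1):
        lon_index = cells(lon_pos, lon_res)                  # processing P351
        left_end = lat_pos + lat_width / 2.0                 # processing P352
        right_end = lat_pos - lat_width / 2.0
        lat_cells = cells(lat_width, lat_res)                # processing P353
        lon_cells = cells(lon_width, lon_res)
        threshold = own_vehicle_width / 2.0                  # processing P354

        def set_non_travelable(row, col):
            if 0 <= row < len(array) and 0 <= col < len(array[0]):
                array[row][col] = 1

        if right_end < threshold:                            # determination P355
            start_col = cells(left_end, lat_res)             # processing P356
            for k in range(lat_cells):                       # processing P357
                set_non_travelable(lon_index, start_col - k)
            if not (left_end > -threshold):                  # determination P358
                for k in range(1, lon_cells):                # processing P359
                    set_non_travelable(lon_index + k, start_col)
        else:                                                # mirrored branch
            start_col = cells(right_end, lat_res)            # processing P356a
            for k in range(lat_cells):                       # processing P357a
                set_non_travelable(lon_index, start_col + k)
            for k in range(1, lon_cells):                    # processing P359a
                set_non_travelable(lon_index + k, start_col)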
(Deceleration Limit Releasing Calculation Processing P400)
Next, the deceleration limit releasing calculation processing P400 is described with reference to FIG. 13. The travelability determination array used in the deceleration limit releasing calculation processing P400 is treated as arrangement information in which the bird's eye view of the own vehicle 1 and the surrounding of the vehicle 1 shown in FIG. 10 is divided two-dimensionally in the longitudinal direction and the lateral direction. This two-dimensional array range is processed in a divided manner into a front area which corresponds to the inside of the range of the vehicle width of the own vehicle 1, and a right area and a left area outside the vehicle width of the own vehicle 1. In the deceleration limit releasing calculation processing P400 performed by the deceleration limit releasing calculation unit 400, first, in processing P410, the position of the object ahead of the own vehicle front surface (in other words, in the depth direction with respect to the own vehicle 1) is searched from the travelability determination array. In processing P410, the front area of the travelability determination array is scanned in the frontward direction of the longitudinal arrangement position, that is, toward the positive direction from the arrangement position closest to the own vehicle 1 among the longitudinal arrangement positions, and the non-travelable arrangement position which is found first, that is, the longitudinal arrangement position closest to the own vehicle 1 in the longitudinal direction, is acquired as the non-travelable longitudinal arrangement position. If a non-travelable area does not exist in the front area, the non-travelable longitudinal arrangement position is set to an invalid value, for example, a value which indicates the outside of the array, and it is regarded that the non-travelable position does not exist. Next, the processing advances to processing P420 where it is determined whether or not the non-travelable longitudinal arrangement position acquired in processing P410 is an invalid value. When the non-travelable longitudinal arrangement position is an invalid value, this means that an object does not exist ahead of the vehicle 1, that is, an emergency brake operation (an automatic brake operation) is unnecessary. Accordingly, the processing advances to processing P465 where limitless deceleration is set as the limit deceleration and the processing is finished. In processing P420, when the non-travelable longitudinal arrangement position acquired in processing P410 is not an invalid value, that is, when an object exists ahead of the vehicle 1, the processing advances to processing P430. In processing P430, the right area of the travelability determination array is searched, the radius of curvature of a traveling route of the own vehicle 1 at which the own vehicle 1 would travel to or enter the non-travelable position is detected, and such a radius of curvature is acquired as a right direction radius of curvature. Processing P430 is described with reference to FIG. 14.
In the right direction radius-of-curvature acquisition processing in processing P430, first, in processing P43001, a negative terminal position of the lateral position array in the travelability determination array is set as a non-travelable lateral position. Next, in processing P43002, a longitudinal position index is initialized to the non-travelable longitudinal arrangement position acquired in processing P410. Next, in processing P43003, a lateral position initial index is set. Since the lateral position initial index is repeatedly used in processing P43005 described later, the lateral position initial index is calculated in advance. The lateral position initial index is calculated by dividing a half of the own vehicle lateral width by the lateral resolution and rounding up the result to the nearest integer. This index indicates a left end of the right area of the travelability determination array. Next, the processing advances to processing P43004 where a value acquired by decrementing the longitudinal position index by 1 is set as a new value of the longitudinal position index. Next, the processing advances to processing P43005 where a lateral position index is initialized by the lateral position initial index set in processing P43003. Next, the processing advances to determination P43006 where it is determined whether or not the lateral position index is smaller than the non-travelable lateral position. When the lateral position index is smaller than the non-travelable lateral position, the processing advances to processing P43007, while when the lateral position index becomes equal to or more than the non-travelable lateral position, the processing advances to determination P43012. In processing P43007, from the travelability determination array, a result of travelability or non-travelability is acquired as a non-travelability determination result based on the lateral position index initialized in processing P43005 and the longitudinal position index acquired in processing P43004. Next, the processing advances to determination P43008 where it is determined whether or not the non-travelability determination result acquired in processing P43007 is non-travelable. When the non-travelability determination result is non-travelable, the processing advances to P43009. On the other hand, when the non-travelability determination result is not non-travelable, that is, is travelable, processing P43009 and processing P43010 are not performed, and the processing advances to processing P43011. Processing P43009 is processing performed when an array position which is non-travelable is detected. In processing P43009, the non-travelable lateral position is set again to the value of the lateral position index. Succeeding to the above-mentioned processing, the processing advances to processing P43010 where the longitudinal position index is set to the non-travelable longitudinal position. Next, in processing P43011, a value acquired by decrementing the lateral position index by 1 is set as a new lateral position index, and the processing returns to determination P43006 and the processing is repeatedly performed. By performing the determination and processing ranging from determination P43006 to processing P43011, the travelability determination array is scanned in the lateral direction.
When “non-travelable” is detected, the non-travelable lateral position is updated in processing P43009, and the scanning is finished based on the determination in determination P43006. By adopting such a processing flow, unnecessary determination is not performed after the non-travelable position is detected at the closest arrangement in the lateral position. Next, when it is determined that scanning in the lateral position direction is finished in determination P43006, and the processing advances to determination P43012, the determination is made whether or not the longitudinal position index becomes equal to 0, that is, the longitudinal direction position from the own vehicle1becomes the closest position. When the longitudinal position index is equal to 0, it is estimated that scanning in the longitudinal direction is also finished, and the processing advances to P43013. When the longitudinal position index is not equal to 0, to continue scanning in the longitudinal direction, the processing returns to processing P43004. When the processing returns from the determination P43012to processing P43004, the longitudinal position index is updated in the direction that the object approaches the own vehicle1. Then, the processing from processing P43005to processing P43011is repeatedly performed so as to continue scanning in the longitudinal direction. Next, in processing P43013, a parameter having a radius of curvature defined by a two dimensional arrangement form in advance is acquired based on a non-travelable lateral position and a non-travelable longitudinal position, and is set as a right-direction radius of curvature. With respect to the parameter of the radius of curvature, radii of curvature of a traveling route of the own vehicle1which allow traveling of the own vehicle1at respective arrangement positions are calculated in advance, and are set in a ROM. When the right-direction radius of curvature is set in processing P430in this manner, the processing advances to processing P431(FIG.13), and a right-direction yaw rate is calculated based on the right-direction radius of curvature and a speed of the own vehicle1using the following formula (5). yaw rate=speed÷radius of curvature (5) Next, the processing advances to processing P432where a right limit deceleration (also referred to as a right deceleration limit value) is acquired from weight distribution information acquired in deflection estimation processing P200, that is, from a left wheel weight and a right wheel weight, and the right-direction yaw rate acquired in processing P431. With respect to the deceleration of the own vehicle1, a brake manipulation is performed such that fixed deceleration is acquired by an experiment in a state where the weight distribution with respect to the own vehicle1is changed, and degrees of generated yaw rates are set as parameters in the form of map values, and the parameters are incorporated in the ROM. Then, deceleration of the own vehicle1is acquired based on the combination of the yaw rate and the weight distribution by looking up the map values, and the deceleration is set as a right deceleration limit value. 
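The chain from processing P430 to processing P432 (scanning the right area, converting the closest non-travelable position into a radius of curvature, formula (5), and the map lookup of the deceleration limit) may be pictured in simplified form, for example, as follows (Python; the scanning here is reduced to a plain search over the right area, and the two lookup tables stand in for the parameters which this embodiment stores in the ROM, so this is only a sketch of the idea rather than the exact index handling of processing P43001 to P43013).

    def right_direction_limit_deceleration(travelability_array, right_area_columns,
                                           own_vehicle_speed, left_wheel_weight,
                                           right_wheel_weight,
                                           radius_table, deceleration_map):
        # Processing P430 (simplified): find the non-travelable position in the right
        # area which is closest to the own vehicle in the longitudinal direction.
        closest = None
        for lon_index, row in enumerate(travelability_array):
            for lat_index in right_area_columns:
                if row[lat_index] != 0:
                    if closest is None or lon_index < closest[0]:
                        closest = (lon_index, lat_index)

        if closest is None:
            return float("inf")   # no restriction arises from the right area

        # Processing P43013: radius of curvature prepared in advance per array position.
        radius_of_curvature = radius_table[closest[0]][closest[1]]

        # Formula (5): yaw rate generated when the own vehicle follows that route.
        yaw_rate = own_vehicle_speed / radius_of_curvature

        # Processing P432: deceleration limit from the yaw rate and weight distribution.
        return deceleration_map(yaw_rate, left_wheel_weight, right_wheel_weight)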
That is, it is understood that, even when a deceleration equal to the right deceleration limit value acquired in processing P432 occurs in the own vehicle 1, there is no possibility that the own vehicle 1 enters a non-travelable area in the right area (specifically, it is possible to suppress the occurrence of deflection of the vehicle 1 by which the vehicle 1 enters a non-travelable area which is not determined to be travelable based on the travelability determination result acquired by the travelability determination unit 300) and hence, the own vehicle 1 does not collide with an obstacle and does not fall from a cliff or into a groove. One example of the behavior of the vehicle in such a state is described with reference to FIG. 15. FIG. 15 is a bird's eye view for describing the generation of a turning force applied to the vehicle 1. In the illustrated example, the vehicle 1 has a vehicle weight of 1000 [kg], and a load W1 having a weight of 300 [kg] is loaded on the right side of the vehicle 1. The position of the center of gravity of the vehicle 1 including the load W1 moves from the center of gravity CGv1 when the load W1 is not included to the center of gravity CGv2 on the right side. When a braking control is performed in such a situation, a brake force is uniformly generated on the left and right wheels. Symbol FR1 indicates a brake force applied to the right front wheel, symbol FL1 indicates a brake force applied to the left front wheel, symbol FR2 indicates a brake force applied to the right rear wheel, and symbol FL2 indicates a brake force applied to the left rear wheel. FR1 and FL1 are equal to each other, and FR2 and FL2 are also equal to each other. Accordingly, when the center of gravity is positioned at CGv1, the weight decelerated at the right wheel and the weight decelerated at the left wheel become equal, and a deceleration DR1 which is generated on the right wheel and a deceleration DL1 which is generated on the left wheel become equal. Accordingly, the vehicle 1 does not turn. However, in the illustrated state, the center of gravity is at CGv2 and hence, the weight decelerated at the right wheel is large and DR1 becomes small, while the weight decelerated at the left wheel is small and DL1 becomes large. As a result, in the own vehicle 1, a turning force in a counterclockwise direction indicated by R1 is generated and hence, a deflection force in the left direction is generated on the vehicle 1. Further, when the deceleration force is large, that is, when FR1, FR2, FL1 and FL2 are large, the magnitudes of DR1 and DL1 are proportionally increased, the difference between DR1 and DL1 is proportionally increased and hence, a larger turning force is generated. Accordingly, the deceleration (the right limit deceleration) acquired from the combination of the yaw rate and the weight distribution described above is set to a smaller deceleration when an absolute value of the yaw rate (corresponding to the position of the non-travelable area) is small, and is set to a larger deceleration when the absolute value of the yaw rate is large. Further, in a case where the weight distribution (corresponding to a deflection amount of the vehicle 1) is largely offset in the lateral direction, a smaller deceleration is set, and when the amount of deviation (deviation amount) of the weight distribution in the lateral direction is small, a larger deceleration is set.
In this case, a parameter of the deceleration limit value (right limit deceleration, right deceleration limit value) is set such that the deceleration has a limiting value only when the weight distribution is offset to the right side, so that a deviation of the weight distribution to the left side does not affect the handling of the non-travelable area existing in the right area. Next, the processing advances to processing P440. In processing P440, in the same manner as processing P430, the travelability determination array is scanned with respect to the left direction (the left area this time), and a left-direction radius of curvature is acquired. Further, in processing P441, in the same manner as processing P431, a left-direction yaw rate is calculated, and in processing P442, in the same manner as processing P432, a left limit deceleration (also referred to as a left deceleration limit value) is acquired. Next, the processing advances to determination P450 where the right limit deceleration acquired in processing P432 and the left limit deceleration acquired in processing P442 are compared with each other. When the left limit deceleration is smaller, that is, when the left limit deceleration has a value which limits the deceleration of the vehicle 1 more strongly, the processing advances to processing P461, and the left limit deceleration is set as the limit deceleration of the vehicle 1. On the other hand, when the right limit deceleration is equal to or smaller than the left limit deceleration, the processing advances to processing P462, and the right limit deceleration is set as the limit deceleration of the vehicle 1. Accordingly, the deceleration which the own vehicle 1 can output is eventually decided, and the deceleration limit releasing calculation processing P400 is finished. The deceleration limit releasing calculation processing P400 repeatedly performs this determination at a short cycle. When an obstacle ahead of the own vehicle and the own vehicle 1 approach each other along with a lapse of time, the position of the map looked up in processing P432 and processing P442 exhibits a value closer to the own vehicle 1. The closer the obstacle ahead of the vehicle and the vehicle 1 approach each other, the larger value the limit deceleration takes. Even when an obstacle exists in the right area or in the left area, the closer the obstacle and the own vehicle 1 approach each other, the stronger the deceleration control which can be performed becomes, and the remoter the obstacle and the own vehicle 1 are from each other, the weaker the deceleration control becomes. In this embodiment, in a case where it is determined that an object exists ahead of the vehicle 1 (P420), the search is made from the right area of the travelability determination array by taking into account left side traveling (the vehicle 1 traveling on a left side lane) as in the case of Japan. However, it is needless to say that the search may be made from the left area of the travelability determination array.
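The selection in determination P450 to processing P462 amounts to taking the more restrictive of the two limit values, for example as follows (Python; names are illustrative only):

    def decide_limit_deceleration(right_limit_deceleration, left_limit_deceleration):
        # Determination P450: the smaller value limits the deceleration more strongly.
        if left_limit_deceleration < right_limit_deceleration:
            return left_limit_deceleration    # processing P461
        return right_limit_deceleration       # processing P462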
(Emergency Brake Operation Determination Processing P500)
Next, the emergency brake operation determination processing P500 is described with reference to FIG. 16. In the emergency brake operation determination processing P500 performed by the emergency brake operation determination unit 500, first, in determination D010, it is determined whether a state of the own vehicle 1 and a result of recognizing the surrounding of the own vehicle 1 are in a situation suitable for performing an operation of an emergency brake. For example, when any one of the conditions enumerated below is satisfied, the situation is determined to be not suitable as a condition which allows an operation of the emergency brake.
A failure is detected in any one of a sensor, an actuator and a control unit of the own vehicle 1.
Noises are generated in received data of the vehicle control device 100 and the vehicle control device 100 fails to receive data.
The own vehicle 1 is being stopped.
A gear position is set in a reverse range or in a parking range.
A sudden acceleration manipulation of a driver is detected.
A sudden steering manipulation of a driver is detected.
A steering amount of a driver is a fixed value or more.
A yaw rate absolute value of the own vehicle 1 is a fixed value or more.
A stability control is being operated by an electronic stability control device.
A detected obstacle is classified into an object, such as weeds, which minimally affects the own vehicle 1 even when the own vehicle 1 collides with the obstacle.
A detected obstacle does not exist on a traveling route of the own vehicle 1.
When it is determined that the situation is not suitable as a condition which allows an operation of the emergency brake, it is determined that the emergency brake operation allowing determination is not established, and the processing advances to processing D080c. The braking operation determination is set to the emergency brake non-operation, and the emergency brake operation determination processing P500 is finished. When it is not determined that the situation is not suitable as a condition which allows an operation of the emergency brake, it is determined that the emergency brake operation allowing determination is established, and the processing advances to processing D020. In processing D020, a usual braking avoiding limit distance is acquired based on a relative speed between the own vehicle 1 and an obstacle. The usual braking avoiding limit distance indicates a limit distance at which a collision can be avoided by a usual braking manipulation of a driver, and is acquired by the following formula (6), for example.

Usual braking avoiding limit distance = (0.0167 × relative speed + 1.00) × relative speed   (6)

Next, the processing advances to processing D030 where a usual steering avoiding limit distance is acquired from the relative speed between the own vehicle 1 and the obstacle and an overlapping ratio. The overlapping ratio used here indicates a ratio which the obstacle occupies with respect to an advancing route of the own vehicle 1. The overlapping ratio is calculated corresponding to a lateral position and a lateral width of the obstacle, a width of the own vehicle 1, and a steering situation, and is acquired by the stereoscopic camera 20. The usual steering avoiding limit distance calculated in processing D030 indicates a limit distance at which a collision can be avoided by a usual steering manipulation of a driver, and is acquired by the following formula (7).

Usual steering avoiding limit distance = (0.0067 × overlapping ratio + 1.13) × relative speed   (7)

Next, the processing advances to determination D040 where the usual braking avoiding limit distance acquired in processing D020 and the usual steering avoiding limit distance acquired in processing D030 are compared with each other. When the usual braking avoiding limit distance is smaller than the usual steering avoiding limit distance, the processing advances to processing D050, and the usual braking avoiding limit distance is set as an emergency brake operation distance.
When the usual steering avoiding limit distance is equal to or smaller than the usual braking avoiding limit distance as the result of determination D040, the processing advances to processing D060, and the usual steering avoiding limit distance is set as the emergency brake operation distance. The emergency brake operation distance acquired in this manner is set such that the driver feels that a collision cannot physically be avoided any longer, so as not to bring about a situation where the driver overestimates the collision damage alleviation brake. After the emergency brake operation distance is acquired in processing D050 or D060, the processing advances to determination D070 where the distance between the obstacle and the own vehicle 1 (obstacle distance) and the emergency brake operation distance are compared with each other. When the distance between the obstacle and the own vehicle 1 (obstacle distance) is smaller, that is, when the obstacle distance becomes smaller than the distance for operating the brake, the processing advances to processing D080a where an emergency brake operation is set in the emergency brake operation determination. As a result of determination D070, when the distance between the obstacle and the own vehicle 1 (obstacle distance) is equal to or more than the emergency brake operation distance, that is, when the obstacle distance is larger than the distance necessary for operating the brake, the processing advances to processing D080b. In the same manner as processing D080c, an emergency brake non-operation is set in the emergency brake operation determination, and the emergency brake operation determination processing P500 is finished. In performing determination D070, when an emergency brake operation has already been started, the emergency brake operation distance used in the determination is treated as a value which is acquired by adding an offset value of +5 [m] to the emergency brake operation distance. With such processing, it is possible to prevent the occurrence of a case where the obstacle distance is changed to a long value by a sensing error immediately after an emergency brake operation, so that the brake operation determination momentarily determines that the emergency brake is not to be operated and, immediately after the determination, the non-operation is changed back to an emergency brake operation. By performing the emergency brake operation determination processing P500 in this manner, the determination can be made such that the emergency brake is operated only within a short distance which prevents a driver from overestimating the collision damage alleviation brake.
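The distance comparison of processing D020 to determination D070 described above may be sketched, for example, as follows (Python; formulas (6) and (7) are written out directly, the allow-determination of determination D010 is received here as an already-evaluated flag, and the +5 [m] offset is the hysteresis described above; all names are illustrative).

    def emergency_brake_operation(relative_speed, overlapping_ratio, obstacle_distance,
                                  operation_allowed, already_operating):
        # Determination D010 is assumed to have been evaluated into operation_allowed.
        if not operation_allowed:
            return False                                        # processing D080c

        # Formula (6): limit distance for collision avoidance by usual braking.
        braking_limit = (0.0167 * relative_speed + 1.00) * relative_speed
        # Formula (7): limit distance for collision avoidance by usual steering.
        steering_limit = (0.0067 * overlapping_ratio + 1.13) * relative_speed

        # Determination D040 to processing D060: the smaller distance is used.
        operation_distance = min(braking_limit, steering_limit)

        # Determination D070: keep the operation once started (+5 [m] hysteresis).
        if already_operating:
            operation_distance += 5.0

        return obstacle_distance < operation_distance           # D080a / D080b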
(Emergency Brake Deceleration Calculation Processing P600)
Next, the emergency brake deceleration calculation processing P600 performed by the emergency brake deceleration calculation unit 600 is described with reference to FIG. 17. In the emergency brake deceleration calculation processing P600 performed by the emergency brake deceleration calculation unit 600, first, in determination G010, the emergency brake operation determination acquired in the emergency brake operation determination processing P500 described above is checked, and it is determined whether or not an emergency brake operation is established. When the emergency brake operation is not established, the processing advances to processing G060 where the emergency brake deceleration is set to 0 [m/s2] and the emergency brake deceleration calculation processing P600 is finished. Here, setting the emergency brake deceleration to 0 [m/s2] means that an emergency brake (automatic brake) is not performed, so that an operation of the brake attributed to the vehicle control device 100 is not performed. Next, when the emergency brake operation is established in determination G010, the processing advances to processing G020 where a deceleration basic value (including a deceleration start timing basic value) is calculated. In processing G020, the deceleration basic value is calculated using the following formula (8), for example, based on the relative speed between the own vehicle 1 and an obstacle and the obstacle distance.

Deceleration basic value = (relative speed)² ÷ {2 × (obstacle distance − 0.5)}   (8)

Here, 0.5 subtracted from the obstacle distance expresses a distance between the own vehicle 1 and the obstacle at the time when a collision is avoided by finishing the emergency brake deceleration. According to the formula (8), the obstacle distance becomes 0.5 [m] at the time of avoiding the collision. The result of the formula (8) is largely influenced by a measurement error of the obstacle distance, and when an original value of 0.6 [m] becomes 0.5 [m], division by zero is brought about. Accordingly, the calculation result of the formula (8) is recorded in the ROM in advance as a two-dimensional parameter using the relative speed and the obstacle distance as axes. Further, for a parameter element whose axes cannot be calculated, such as a case where the obstacle distance is 0.5 [m] or less, the maximum deceleration which the vehicle can output is set as the parameter, so that it is possible to prevent the occurrence of a problem that division by zero is brought about or a negative deceleration is calculated. At the time of performing processing G020, the deceleration basic value is acquired by looking up the parameter set in the ROM. Next, the processing advances to determination G030 where the deceleration basic value acquired in processing G020 and the limit deceleration acquired in the deceleration limit releasing calculation processing P400 are compared with each other. When the deceleration basic value is larger than the limit deceleration, that is, when the generation of a deceleration corresponding to the deceleration basic value on the vehicle 1 would deflect the vehicle 1 by a yaw moment generated on the vehicle 1 so that the vehicle 1 enters a non-travelable area, the processing advances to processing G040 where the limit deceleration is set as the emergency brake deceleration so that the deceleration is set to a value which prevents the vehicle 1 from entering the non-travelable area (specifically, a value which suppresses the generation of deflection of the vehicle 1 which would make the vehicle 1 enter a non-travelable area which is not determined to be travelable by the travelability determination result acquired by the travelability determination unit 300), and the emergency brake deceleration calculation processing P600 is finished. Further, when the deceleration basic value is the limit deceleration or less as a result of determination G030, even when the deceleration corresponding to the deceleration basic value is generated on the vehicle 1, there is no possibility that the vehicle 1 enters the non-travelable area. Accordingly, the processing advances to processing G050 where the deceleration basic value is set as the emergency brake deceleration, that is, the deceleration which maximizes the avoidance and the reduction of collision damage with respect to the obstacle ahead of the own vehicle, and the emergency brake deceleration calculation processing P600 is finished. The limit deceleration used in the emergency brake deceleration calculation processing P600 may take a value with which the vehicle 1 is substantially not decelerated when a non-travelable area which is remote from the own vehicle 1 in the longitudinal direction and close to the own vehicle 1 in the lateral direction exists. In this case, a braking control is not performed with respect to the vehicle 1, and the braking (deceleration) start timing may be delayed compared to the braking (deceleration) start timing in a case where the limit deceleration is large. By delaying the start timing of the braking, the distance that the own vehicle 1 travels in a situation where the deflection of the own vehicle 1 occurs can be shortened and hence, a manner of operation and advantageous effects substantially equal to the above-mentioned manner of operation and advantageous effects can be acquired.
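The calculation of processing G020 to processing G050 may be sketched, for example, as follows (Python; the ROM table of this embodiment is imitated by guarding the formula (8) directly, and the maximum output deceleration is an assumed example value).

    MAX_VEHICLE_DECELERATION = 10.0   # [m/s2], assumed example of the output limit

    def emergency_brake_deceleration(relative_speed, obstacle_distance,
                                     limit_deceleration, operation_established):
        if not operation_established:
            return 0.0                           # processing G060: no automatic brake

        # Formula (8): deceleration which stops the relative speed 0.5 [m] short of
        # the obstacle; guarded, like the ROM table, against distances of 0.5 [m] or less.
        remaining_distance = obstacle_distance - 0.5
        if remaining_distance <= 0.0:
            basic_value = MAX_VEHICLE_DECELERATION
        else:
            basic_value = min(relative_speed ** 2 / (2.0 * remaining_distance),
                              MAX_VEHICLE_DECELERATION)

        # Determination G030 to processing G050: never exceed the limit deceleration.
        return min(basic_value, limit_deceleration)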
Accordingly, the processing advances to processing G050 where the deceleration basic value is set as the emergency brake deceleration, that is, the deceleration which maximizes the avoidance and the reduction of collision damage to an obstacle ahead of the own vehicle, and emergency brake deceleration calculation processing P600 is finished. The limit deceleration used in the emergency brake deceleration calculation processing P600 may take a value which is not accompanied by the deceleration of the vehicle 1 when a non-travelable area which is remote from the own vehicle 1 in the longitudinal direction and is close to the own vehicle 1 in the lateral direction exists. In this case, a braking control is not performed with respect to the vehicle 1, and braking (deceleration) start timing may be delayed compared to braking (deceleration) timing when the limit deceleration is large. By delaying the start timing of the braking, a distance that the own vehicle 1 travels in a situation where the deflection of the own vehicle 1 occurs can be shortened and hence, the manner of operation and advantageous effects substantially equal to the above-mentioned manner of operation and advantageous effects can be acquired. (Collision Warning Determination Processing P700) Next, the collision warning determination processing P700 is described with reference to FIG.18. In the collision warning determination processing P700 performed by a collision warning determination unit 700, first, in determination K010, the collision warning determination unit 700 determines whether a state of the own vehicle 1 and a result acquired by recognizing the surrounding of the own vehicle 1 are in a situation suitable for an operation of collision warning. For example, when any one of the conditions enumerated below is established, the collision warning determination unit 700 determines that the situation is not suitable as a condition which allows a collision warning operation.
A failure is detected in any one of a sensor, an actuator and a control unit of the own vehicle 1.
Noises are generated in received data of the vehicle control device 100 and the vehicle control device 100 fails to receive data.
The own vehicle 1 is stopped.
A gear position is set in a reverse range or in a parking range.
A sudden acceleration manipulation of the driver is detected.
A sudden brake manipulation of the driver is detected.
A brake manipulation pressure of the driver becomes a fixed value or more.
A sudden steering manipulation of the driver is detected.
A steering amount of the driver is a fixed value or more.
A yaw rate absolute value of the own vehicle 1 is a fixed value or more.
A stability control is being operated by an electronic stability control device.
A detected obstacle is classified into an object such as weeds which minimally affects the own vehicle 1 even when the own vehicle 1 collides with the obstacle.
A detected obstacle exists at a position 1 m or more away from a traveling route of the own vehicle 1.
When the collision warning determination unit 700 determines that the situation is not suitable as a condition which allows an operation of collision warning, it is determined that the collision warning operation allowing determination is not established, and the processing advances to processing K060 where a collision warning determination result is set as warning non-operable and the collision warning determination processing P700 is finished.
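A minimal sketch of the allowance check in determination K010 is shown below; the flag names are illustrative placeholders for the conditions enumerated above, not a data format defined by the embodiment. The opposite branch of the determination is described next.

```python
def collision_warning_allowed(state):
    """Warning is allowed only when none of the disqualifying conditions is established."""
    disqualifiers = (
        state.sensor_actuator_or_control_unit_failure,
        state.communication_noise_or_reception_failure,
        state.vehicle_stopped,
        state.gear_in_reverse_or_park,
        state.sudden_acceleration or state.sudden_brake or state.sudden_steering,
        state.brake_pressure_at_or_above_threshold,
        state.steering_amount_at_or_above_threshold,
        state.yaw_rate_abs_at_or_above_threshold,
        state.stability_control_operating,
        state.obstacle_is_negligible,              # e.g., weeds
        state.obstacle_offset_from_route_m >= 1.0,
    )
    return not any(disqualifiers)
```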
When the collision warning determination unit700does not determine that the situation is not suitable for a condition which allows an operation of collision warning, it is determined that the collision warning operation allowing determination is established, and the processing advances to processing K020. In processing K020, a warning operation distance is acquired from a relative speed between the own vehicle1and the obstacle and an overlapping ratio. The warning operation distance sets a value acquired by adding a warning addition distance acquired by the following formula (9) to the emergency brake operation distance acquired in the emergency brake operation determination processing P500described above as a parameter, and stores the value in the ROM. Warning addition distance=relative speed×0.8 (9) In the above formula (9), 0.8 used in the processing is a response time necessary for notification to a driver, and is set such that warning is given to the driver by 0.8 s before the processing is shifted to an emergency brake operation. When the number of users of the own vehicles1such as aged people who require a longer reaction time is increased, there may be a case where the constant of 0.8[s] is prolonged to 1.2[s]. That is, tuning is necessary in conformity with the vehicle. The parameter set in the ROM in this manner is placed as the two-dimensional parameter which uses the relative speed and the overlapping ratio as axes, and a warning operation distance is acquired by looking up the parameter in combination with the relative speed and the overlapping ratio when the calculation is performed in processing K020. Next, the processing advances to determination K030where a distance between the own vehicle1and an obstacle (obstacle distance) and the warning operation distance acquired in processing K020are compared with each other. When the obstacle distance is smaller than the warning operation distance, that is, when the current distance is short as a result of the determination, the processing advances to processing K040where a warning operation is set as a collision warning determination result, and the collision warning determination processing P700is finished. On the other hand, when the distance between the own vehicle1and the obstacle is equal to or larger than the warning operation distance as the result of the determination, the distance between the own vehicle1and the obstacle is remote so that it is determined that warning is unnecessary and the processing advances to processing K050. “non-warning” is set as the result of collision warning determination result in the same manner as the processing K060, and the collision warning determination processing P700is finished. In performing the determination K030, in a situation where a warning operation is already set in the collision warning determination result, the warning operation distance is treated as the warning operation distance +5[m]. With such treatment, it is possible to prevent the occurrence of a phenomenon where when noises generated by a measurement error are generated with respect to the distance between the own vehicle1and the obstacle and a relative speed and an overlapping ratio used in the calculation in processing K020, warning becomes temporarily inoperable in a collision warning determination result and, thereafter, a warning operation is restored immediately as the noises disappear. (Communication Data Output Processing P120) Communication data output processing P120which is a final processing is described. 
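Before the details of communication data output processing P120, processing K020 and determination K030 described above can be sketched as follows; the 0.8 [s] reaction time and the +5 [m] hysteresis are the values given in the text, while the function names are illustrative.

```python
def warning_operation_distance(emergency_brake_distance_m, relative_speed_mps,
                               reaction_time_s=0.8):
    """Processing K020 / formula (9): the warning distance is the emergency brake
    operation distance plus relative speed x reaction time, so the driver is warned
    about 0.8 s before the emergency brake would operate."""
    return emergency_brake_distance_m + relative_speed_mps * reaction_time_s

def collision_warning(obstacle_distance_m, warning_distance_m,
                      warning_already_operating, hysteresis_m=5.0):
    """Determination K030 with the +5 m offset applied while a warning operation
    is already set, mirroring determination D070."""
    threshold_m = warning_distance_m + (hysteresis_m if warning_already_operating else 0.0)
    return obstacle_distance_m < threshold_m
```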
In communication data output processing P120performed by a communication data output unit120, based on a result acquired by the processing from processing P200to P700, the conversion into communication data, and data transmission to the brake control unit30and the meter control unit70are performed. In the conversion of communication data, emergency brake deceleration acquired in processing P600and a collision warning determination result acquired in processing P700are converted in accordance with standard relating to a communication path. For example, emergency brake deceleration calculated in accordance with a floating number type is converted into 16 [bit]. Alternatively, a collision warning determination result is transmitted by converting the collision warning determination result into digital values by allocating values such that 1 is allocated to the result during a warning operation and 0 is allocated to the result during a non-warning operation. Further, to prevent the occurrence of noises on a communication path and the erroneous transmission of excessively large emergency brake deceleration, cyclic redundancy check (CRC), parity or checksum is given to respective data as communication data. Then, by transmitting emergency brake deceleration which is converted for communication to the brake control unit30and by transmitting a collision warning determination result to the meter control unit70, the reduction and avoidance of a collision damage by a brake control of the vehicle1, and the enhancement of the reduction and avoidance of collision damage by notification to a driver can be realized. As has been described above, the vehicle control device100according to this embodiment includes: the travelability determination unit300which determines whether or not the vehicle1is travelable in an area disposed ahead of the vehicle1on left and right sides of the vehicle1; the deflection estimation unit200which estimates deflection of the vehicle1due to generation of a brake force applied to the vehicle1; and the braking control unit800which calculates deceleration and deceleration start timing based on the distance between the vehicle1and an obstacle ahead of the vehicle1and a relative speed of the vehicle to the obstacle, and changes at least one of the deceleration and the deceleration start timing based on a travelability determination result acquired from the travelability determination unit300and a deflection estimation result acquired from the deflection estimation unit200. Further, the braking control unit800, by changing at least one of the above-mentioned deceleration and the deceleration control start timing, suppresses the occurrence of the deflection of the vehicle1which leads to the entry of the vehicle1into the non-travelable area which is not determined to be travelable by the travelability determination result acquired from the travelability determination unit300. 
To describe in detail, the braking control unit 800 includes: the deceleration limit releasing calculation unit 400 which calculates limit deceleration for suppressing the occurrence of the deflection of the vehicle 1 which leads to the entry of the vehicle 1 into the non-travelable area which is not determined to be travelable by the travelability determination result acquired from the travelability determination unit 300, based on the travelability determination result acquired from the travelability determination unit 300 and the deflection estimation result acquired from the deflection estimation unit 200; and the emergency brake deceleration calculation unit 600 which calculates the deceleration basic value based on the distance between the vehicle 1 and the obstacle ahead of the vehicle 1 and the relative speed, and sets whichever of the deceleration basic value and the limit deceleration takes the smaller value as the deceleration used in a deceleration control of the vehicle 1. That is, the vehicle control device 100 according to this embodiment estimates a change in an advancing direction (deflection) of the own vehicle 1 based on a loaded situation or the like of the own vehicle 1, detects the presence or the absence of an object which induces collision damage as a result of the deflection, and performs braking with a strong deceleration force (a limit on the deceleration force being released) which allows the generation of the change in the advancing direction when the object which induces the collision damage does not exist. On the other hand, when the object which induces the collision damage exists, braking is performed with a weak deceleration force which suppresses the occurrence of a change in the advancing direction (the limit on the deceleration force being added). With such processing, according to this embodiment, even in a scene where the advancing direction of the own vehicle 1 (for example, a vehicle whose center of gravity largely changes laterally due to load) is changed by an operation of the automatic brake (the emergency brake) with respect to the vehicle 1, when an object which induces collision damage does not exist, the reduction of collision damage and a collision avoiding performance can be enhanced by using a strong brake (in other words, by releasing a limit on the deceleration generated by the emergency brake). Hereinafter, modifications of the above-mentioned embodiment are described. <Modification 1> Deflection estimation processing P200 performed by the deflection estimation unit 200 can be modified as follows. <<Modification 1-1>> In processing P240 in the above-mentioned embodiment (see FIG.4), the lateral offset of the center of gravity of the own vehicle (in other words, a load weight being offset to the position on either the left side or the right side of the vehicle 1) is estimated using a roll angle of the own vehicle 1 acquired from the stereoscopic camera 20 (an image imaged by the stereoscopic camera 20). However, when a plurality of weight sensors are mounted on (the left side and the right side of) the vehicle 1, the rate at which the center of gravity is offset to the left side or the right side of the vehicle 1 can be measured based on the weights applied to the weight sensors, and a left wheel weight and a right wheel weight (a loaded state of the vehicle 1) can be acquired based on the weights applied to the weight sensors.
In this case, since it is necessary to mount the weight sensor, a cost necessary for manufacturing the vehicle1is increased and the restriction is imposed on a design of the vehicle1by an amount corresponding to mounting of the weight sensors. However, this modification has an advantage that an offset of the weight of the vehicle1can be measured with high accuracy. <<Modification 1-2>> In processing P240in the above-mentioned embodiment (seeFIG.4), the lateral offset of the center of gravity of the own vehicle1(in other words, a load weight being offset to the position on either the left side or the right side of the vehicle1) is estimated using a roll angle of the own vehicle1acquired from the stereoscopic camera20(an image imaged by the stereoscopic camera20). However, in processing P240, a yaw rate which is estimated from a steering angle and an own vehicle speed and a value of the yaw rate sensor are compared to each other during acceleration or deceleration of the own vehicle1. In processing P250, it is possible that a load weight (a loaded state of the vehicle1) is estimated from a difference between the estimated yaw rate and the yaw rate sensor value. When a load is loaded on the own vehicle1in an offset manner either in a left direction or in a right direction, the vehicle1takes a straight advancing state in a state where a steering angle is slightly bent in a loading direction in an offset manner during acceleration or deceleration of the own vehicle1. At this stage, the yaw rate sensor value is held at 0[deg/s]. Accordingly, the weight difference between the left side and the right side can be estimated from the difference. In a case where this method is used, it is possible to acquire an advantageous effect that, even in a traveling environment which is dark so that the estimation of a roll angle of the own vehicle is difficult by the stereoscopic camera20, a left and right weight ratio of the own vehicle1can be acquired. <<Modification 1-3>> In processing P240in the above-mentioned embodiment (seeFIG.4), the lateral offset of the center of gravity of the own vehicle (in other words, a load weight being offset to the position on either the left side or the right side of the vehicle1) is estimated, using a roll angle of the own vehicle1acquired from the stereoscopic camera (an image imaged by the stereoscopic camera20). However, the roll angle of the own vehicle1can be acquired by also monitoring a value of a lateral acceleration sensor during straightforward traveling or during parking. This modification makes use of an action where, in a case where the own vehicle1is tilted to either the left side or the right side by a load, even when lateral acceleration is not generated on the own vehicle1, the lateral acceleration changes due to the center of gravity, and the lateral acceleration is increased in proportion to a tilting angle of the own vehicle1. By adopting such a method, it is possible to acquire an advantageous effect that, in the same manner as the modification 1-2 described above, even in a traveling environment where it is dark so that the estimation of an own vehicle roll angle using the stereoscopic camera20is difficult, a left and right weight ratio of the own vehicle1can be acquired. <<Modification 1-4>> In the embodiment, the modification 1-1, the modification 1-2 and the modification 1-3 described above, the methods which measure tilting of the own vehicle1and the offsetting of load independently by the respective sensors are described. 
However, by combining some or all of these techniques, the estimation of the load weight can be performed with redundancy and hence, erroneous estimation can be suppressed. <<Modification 1-5>> In the above-mentioned embodiment, the weight distribution between the left wheel and the right wheel is estimated. However, there is also a method which finds the presence or the absence of the occurrence of offsetting of the own vehicle 1 using the load weight acquired in processing P230 (FIG.4). Specifically, for example, assuming that the load weight is 500 [kg], an experiment where an emergency brake is performed is carried out with respect to a case where 500 [kg] is loaded on the right end of the own vehicle in an offset manner, and with respect to a case where 500 [kg] is loaded on the left end of the own vehicle in an offset manner. The result of the experiment is used in the maps of processing P432 and processing P442 (FIG.13) as a maximum offset amount when the loaded weight is 500 [kg]. In this case, there is a disadvantage that, even when loading of 500 [kg] is performed without offsetting, the limit deceleration becomes small in the same manner as the case where loading of 500 [kg] is performed on the right end of the own vehicle in an offset manner and the case where loading of 500 [kg] is performed on the left end of the own vehicle in an offset manner. However, in a case where it is difficult to estimate to which of the left side and the right side the center of gravity of the own vehicle is offset, for example, in the case of a route bus where movement of the center of gravity during traveling is expected, or in the case of a vehicle which is used for transporting cattle, strong deceleration can be applied in the same manner as with usual loading when there is no obstacle ahead of the own vehicle on the left side and the right side of the vehicle. <Modification 2> The following modification can also be performed with respect to travelability determination processing P300 by the travelability determination unit 300. <<Modification 2-1>> In the above-mentioned embodiment, the technique is adopted where the travelability determination array is built up by processing P350 (FIG.11) using information of the stereoscopic camera 20. However, a technique for acquiring information on the surrounding of the own vehicle is not limited to the stereoscopic camera 20. For example, in monitoring an area behind the own vehicle, a moving speed of a pedestrian differs from a moving speed of the own vehicle 1 in the frontward direction and hence, it is unnecessary to monitor the pedestrian. Further, a light source such as a front lamp does not exist with respect to the area behind the own vehicle 1 and hence, accuracy of the stereoscopic camera 20 at night is lowered. Accordingly, the stereoscopic camera 20 is not good at the detection of the pedestrian or the like behind the vehicle. However, by monitoring the area behind the own vehicle using a millimeter wave radar, whose detection accuracy is minimally decreased even in bad weather and at night, an object which is arranged in the travelability determination array can be detected. The stereoscopic camera and the millimeter wave radar can also be used in combination.
For example, with respect to the approaching of an oncoming vehicle, by detecting a pedestrian ahead of the own vehicle by the stereoscopic camera while detecting a distance and a speed using the millimeter wave radar, and by arranging detected values on the travelability determination array, an area in which the own vehicle1is travelable can be accurately discriminated. Further, by using a plurality of sensing simultaneously, sensing can be performed with redundancy and hence, it is possible to acquire information with high accuracy. <<Modification 2-2>> In the above-mentioned embodiment and the modification 2-1, the method in which the sensor which monitors an environment of the surrounding of the own vehicle (the vehicle external field recognition sensor) is mounted on the own vehicle1is adopted. However, a sensor which monitors a pedestrian, a vehicle and the like (a vehicle external world recognition sensor) is mounted on a road and equipment on the surrounding of the road such as a traffic signal, and information acquired from the respective sensors are transmitted to the own vehicle1as travelability determination array information or as object information for arranging the own vehicle1in the travelability determination array by communication equipment mounted on the road. The own vehicle1may be configured to generate the travelability determination array using the transmitted information or may be configured to perform calculation for limiting deceleration of the own vehicle1using the received travelability determination array per se. When the method is used, a millimeter wave radar can be used in a dark place where the distance measurement by the stereoscopic camera20becomes difficult, and the stereoscopic camera20is used at an intersection or the like where the number of pedestrians is large so that the distance measurement by a millimeter wave radar becomes difficult. Accordingly, sensing using an appropriate device at an appropriate place can be performed and hence, it is unnecessary for the own vehicle1to fully mount sensors for recognizing an external field of the vehicle so as to cope with all situations. As a result, it is possible to acquire an advantageous effect that a vehicle cost can be lowered and mileage is improved due to lowering of the weight of the vehicle. <Modification 3> Deceleration limit releasing calculation processing P400by the deceleration limit releasing calculation unit400can be modified as follows. In processing P430and processing P440(FIG.13), a radius of curvature of an advancing route of the own vehicle1which reaches an area indicated by the travelability determination array is acquired and, thereafter, limit deceleration on the left side and the right side is acquired in processing P431, processing P432, processing P441, and processing P442. In this method, a behavior when a uniform brake pressure is applied to left and right wheels of the vehicle1is formed into a model. On the other hand, when an electronic control brake force distribution system and a lane departure prevention support system intervenes during an operation of the emergency brake, a turning force in reverse direction is applied to the own vehicle1. Accordingly, when there is a fixed distance or more from an obstacle ahead of the own vehicle and a speed difference is small, a traveling route of the own vehicle1when the emergency brake is operated cannot be expressed by a simple radius of curvature. 
Accordingly, which amount of deceleration is necessary so as to enable the own vehicle1to reach the corresponding position is acquired by an experiment based on the weight distribution of the own vehicle1, a lateral direction position and a longitudinal direction position of an obstacle with respect to the own vehicle1, and such a value is set as a parameter. In this case, limit deceleration can be directly calculated from the weight distribution of the own vehicle1, and the lateral direction position and the longitudinal direction position of the obstacle with respect to the own vehicle1. Further, in this case, by taking into account an influence where the traveling route of the own vehicle1is deflected by the electronic control brake force distribution system and the lane departure prevention support system, the number of scenes where stronger deceleration is applied to the own vehicle1is increased. <Modification 4> Collision warning determination processing P700by the collision warning determination unit700can be modified as follows. When limit deceleration induced as a result of deceleration limit releasing calculation is remarkably smaller than the deceleration basic value acquired by the formula (8), this situation indicates that a collision cannot be avoided with high probability. In this case, to avoid the collision, in emergency brake operation determination processing P500, at the stage before an emergency brake operation is performed, a braking manipulation or steering by a driver becomes necessary. Accordingly, a difference between limit deceleration and the deceleration basic value acquired from the formula (8) is acquired, an offset amount is added where a warning operation distance acquired by processing K020(FIG.18) becomes remote corresponding to the result, and a driver is requested to perform a brake manipulation in an area remote from an obstacle. Accordingly, it is possible to promote the avoidance of a collision by driving of a driver. On the other hand, it is unnecessary to perform the operation of the brake earlier and hence, it is also possible to prevent the driver from overestimating collision avoidance. It is needless to say that the present invention is not limited to the above-mentioned embodiment, and various modifications are included in the present invention. For example, the above mentioned embodiment is described in detail so as to facilitate the understanding of the present invention, and is not necessarily limited to the vehicle control device which includes the whole configuration described in the embodiment. Further, some parts of the configuration of the embodiment can be replaced with the configuration of another embodiment, or the configuration of other embodiment can be added to the configuration of another embodiment. Further, with respect to some parts of the configuration of the embodiment, the addition of other configuration, the deletion of the configuration elements and the replacement of the configuration elements with other constitutional elements may also be possible. Further, some parts or the entirety of the respective configurations, functions, processing units, processing means and the like described above may be realized by hardware by designing using an integrated circuit, for example. Further, the above-mentioned respective configurations, functions and the like may be realized by software which interprets and executes programs with which processor realizes the respective functions. 
Information on programs, tables, files and the like which realize the respective functions may be stored in a storage device such as a memory, a hard disc or a solid state drive (SSD), or in a recording medium such as an IC card, an SD card or a DVD. Further, the control lines and information lines which are considered necessary for describing the present invention are indicated, and it is not always the case that all control lines and information lines which are necessary from a viewpoint as a product are indicated. It is safe to say that, in the actual device and system, substantially all configuration elements are mutually connected to each other.
LIST OF REFERENCE SIGNS
1: vehicle (own vehicle)
2: stopped vehicle
3: neighboring vehicles (other vehicles)
4: oncoming vehicle
5: pedestrian
6: guard rail
7: utility pole
20: stereoscopic camera (vehicle external field recognition sensor)
25: rear stereoscopic camera (vehicle external field recognition sensor)
27: side stereoscopic camera (vehicle external field recognition sensor)
30: brake control unit
40: power train control unit
41: engine
42: transmission
60: brake
70: meter control unit
71: display device
72: buzzer
100: vehicle control device
110: communication data acquisition unit
120: communication data output unit
200: deflection estimation unit
300: travelability determination unit
400: deceleration limit releasing calculation unit
500: emergency brake operation determination unit
600: emergency brake deceleration calculation unit
700: collision warning determination unit
800: braking control unit
The figures depict various embodiments of the disclosed technology for purposes of illustration only, wherein the figures use like reference numerals to identify like elements. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated in the figures can be employed without departing from the principles of the disclosed technology described herein. DETAILED DESCRIPTION Vehicles are increasingly being equipped with intelligent features that allow them to monitor their surroundings and make informed decisions on how to react. Such vehicles, whether autonomously, semi-autonomously, or manually driven, may be capable of sensing their environment and navigating with little or no human input. The vehicle may include a variety of systems and subsystems for enabling the vehicle to determine its surroundings so that it may safely navigate to target destinations or assist a human driver, if one is present, with doing the same. As one example, the vehicle may have a computing system for controlling various operations of the vehicle, such as driving and navigating. To that end, the computing system may process data from one or more sensors. For example, an autonomous vehicle may have optical cameras for recognizing hazards, roads, lane markings, traffic signals, and the like. Data from sensors may be used to, for example, safely drive the vehicle, activate certain safety features (e.g., automatic braking), and generate alerts about potential hazards. Autonomous, semi-autonomous, or manually-driven vehicles may be used by a transportation management system to provide ride services or other types of services. A transportation management system may comprise a fleet of such vehicles. Each vehicle in the fleet may include one or more sensors in a sensor suite. In general, a vehicle can traverse a geographic location or region using a number of different routes. Each route can be made up of one or more road segments. When traveling on a given road segment, a computing system in a vehicle can continually process data from one or more sensors in the vehicle, for example, to identify potential hazards such as fallen debris, jaywalkers, slick road surface, and the like. The computing system can also control various operations of the vehicle, such as driving and navigating, in view of the potential hazards. Under conventional approaches, a vehicle typically detects potential hazards associated with a given road segment as the vehicle navigates the road segment. Under such conventional approaches, the vehicle is typically unaware of risks associated with the potential hazards before navigating the road segment. Delayed awareness of the risks associated with the potential hazards can negatively impact safety considerations in relation to the vehicle and surrounding environment. Conventional approaches pose disadvantages in addressing these and other problems. An improved approach in accordance with the present technology overcomes the foregoing and other disadvantages associated with conventional approaches.
The improved approach may include multiple, different phases including a first phase for building a scenario information database including (i) real world sensor data and features corresponding to road segments in geographic areas comprising a variety of different road segment types, (ii) scenario classification and identification information based on collected real world sensor data for the variety of different road segment types, and (iii) risk profiles associated with the different classified scenarios and scenario types. A second phase may include determining respective similarities between classified (or highly sampled) road segments and unclassified (or less frequently sampled) road segments to infer scenario information (e.g., scenarios, scenario exposure rates) for road segments without requiring high frequency sampling of such road segments. Finally, risk profiles associated with different scenario exposure rates, as applied to different classified road segments, may be used to navigate vehicles in a region, determine capable or eligible autonomous vehicles for a route or region, determine a deployment strategy for different regions, cities, and neighborhoods based on the relevant risk profiles, and any other relevant use cases where risk and scenario exposure rates may be used for fleet management of autonomous, semi-autonomous, and human-driven vehicles. In some embodiments, a similarity between a first road segment and a second road segment can be determined. For example, the first road segment may be a classified road segment associated with a risk profile that is based on scenario exposure rates. In this example, the second road segment may be an unclassified road segment that is not yet associated with a risk profile. In some embodiments, upon determining a threshold level of similarity between the first road segment and the second road segment, information associated with the first road segment can be used to infer information about the second road segment. For example, the risk profile, which can be based on scenario exposure rates for the first road segment, can be associated with the second road segment upon determining a threshold similarity between the first road segment and the second road segment. In some embodiments, a similarity between the first road segment and the second road segment may be determined based in part on respective sensor data collected by vehicles while sampling the first road segment and the second road segment, respective map data describing the first road segment and the second road segment, respective metadata associated with the first road segment and the second road segment, or a combination thereof. In other embodiments, road segments can be categorized (or classified) into road segment types. For example, in some embodiments, a road segment can be categorized as a road segment type based on scenario exposures associated with the road segment. For example, a road segment can be categorized as a particular road segment type based on a threshold level of similarity between scenarios that are determined to be associated with the road segment and scenario types associated with the road segment type. In some embodiments, each road segment type can be associated with a risk profile that is based on respective likelihoods (or probabilities) of various scenario types that may be encountered when navigating a road segment categorized as the road segment type. Such risk profiles can be used for myriad purposes. 
For example, an autonomous, semi-autonomous, or manually-driven vehicle may be instructed to avoid road segments on which the vehicle has a threshold likelihood of being exposed to a given scenario type. For instance, a vehicle may be instructed to avoid using a road segment on which the vehicle has a threshold likelihood of encountering bicycle traffic. In another example, a vehicle may be instructed to modify its operation when traveling on a road segment on which the vehicle has a threshold likelihood of being exposed to a given scenario type. For instance, a vehicle may be instructed to reduce its speed and activate its hazard lights when traveling on a road segment on which the vehicle has a threshold likelihood of encountering poor visibility conditions. In some embodiments, risk profiles associated with individual road segments within a given geographic location (or region) (e.g., a city, county, zip code, state, country, or some other defined geographic region) can be used to generate a value of an aggregate risk for a fleet of autonomous (or semi-autonomous) vehicles operating in that geographic region. While examples of the present technology are sometimes discussed herein in relation to an autonomous vehicle, the present technology also applies to semi-autonomous and manually driven vehicles. More details relating to the present technology are provided below. FIGS.1A-1Cillustrate various scenarios that may be experienced and determined by a vehicle. A vehicle may experience a variety of scenarios as it navigates a given geographic location (or region). In general, different geographic locations may present different challenges and risks for a vehicle. For example,FIG.1Aillustrates one example environment100that corresponds to a school zone. In this example, a vehicle102is shown navigating a road segment104along which a school106is located. When navigating such environments, the vehicle102may encounter a number of different scenarios such as children108walking through a crosswalk110and pedestrians112crossing the road segment104. Other environments may pose different challenges and risks. For example,FIG.1Billustrates another example environment130which includes a highway132. In this example, a vehicle134is shown traveling on the highway132under inclement weather conditions. When navigating such environments, the vehicle134may encounter a number of different scenarios such as debris136blocking a lane of the highway132and other hazardous activity138(e.g., collisions) involving other vehicles traveling on the highway132. Accordingly, different road segments may be associated with different risks. In general, a road segment can include any portion of a physical road network, for example, as characterized or represented by a geographic map. The map may have different levels (or layers) of detail (e.g., different road segments for different lanes of a road, all lanes of the road considered part of the same road segment, etc.). The map may include embedded information (e.g., a segmentation map) having applications for autonomously, semi-autonomously, or manually driven vehicles. In an embodiment, lengths of road segments may be uniform (e.g., all road segments have a uniform length such as 100 yards). In an embodiment, lengths of road segments may be non-uniform. For example, road segments can have different lengths based on their road segment type. That is, for example, suburban roads may have road segments every 100 yards while highways have road segments every quarter mile. 
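As a rough illustration of how per-segment risk profiles might drive routing and behavior decisions of the kind described above, the following Python sketch filters candidate routes and derives mitigations from scenario exposure rates; the field names, the threshold scheme, and the reduced-speed behavior are assumptions rather than part of the present technology.

```python
def route_is_acceptable(route_segments, exposure_thresholds):
    """Reject a route when any of its segments reaches a threshold exposure rate
    (per mile) for a scenario type the vehicle should avoid."""
    for segment in route_segments:
        for scenario, rate in segment["exposure_per_mile"].items():
            if rate >= exposure_thresholds.get(scenario, float("inf")):
                return False
    return True

def mitigations_for(segment, exposure_thresholds, default_speed_mph, reduced_speed_mph):
    """Alternatively, keep the segment but adjust operation (e.g., reduce speed and
    turn on hazard lights) when a threshold exposure is reached."""
    risky = any(rate >= exposure_thresholds.get(scenario, float("inf"))
                for scenario, rate in segment["exposure_per_mile"].items())
    return {"speed_mph": reduced_speed_mph if risky else default_speed_mph,
            "hazard_lights": risky}
```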
In general, a vehicle may be equipped with one or more sensors which can be used to capture environmental information, such as information describing a given road segment. For example, in some instances, a vehicle may be equipped with one or more sensors in a sensor suite including optical cameras, LiDAR, radar, infrared cameras, and ultrasound equipment, to name some examples. These sensors can be used to collect information that can be used by the vehicle to understand environmental conditions of a given road segment to permit safe and effective navigation of the road segment. For example,FIG.1Cillustrates an example environment160in which a vehicle162is navigating a road segment164. The vehicle162can be, for example, a vehicle940as shown inFIG.9. InFIG.1C, the vehicle162includes a sensor suite166that can be used to sense static (or stationary) objects, dynamic objects, and semi-permanent (or ephemeral) objects that are around (or within some threshold proximity of) the vehicle162. In this example, information collected by sensors included in the sensor suite166can be used to determine information about the road segment164. For instance, sensors in the sensor suite166can be used to recognize a crosswalk168, children170waiting to use the crosswalk168, a pedestrian172jaywalking across the road segment164, street signs174, other vehicles176present on the road segment164, and any other objects that are present. In addition to identifying objects, the sensors in the sensor suite166can also be used to monitor the identified objects. For example, once an object is identified, the sensors can be used to trace (or track) a path (or trajectory) of the object over time. Information collected by the sensors in the sensor suite166can be used to determine other descriptive features for the road segment164. For example, such information can be used to determine road features describing the road segment164, such as a length of the road segment164and a road quality of the road segment164. In another example, the information can be used to determine contextual features describing the road segment164, such as when the information was collected by the sensors in the sensor suite166(e.g., time of day, day, etc.) and weather conditions experienced while the information was collected by the sensors in the sensor suite166. In some instances, rather than having a sensor suite, a vehicle may be equipped with a computing device that includes a number of integrated sensors. In such instances, sensors in the computing device can collect information that can be used by the vehicle to understand and navigate a given environment. In various embodiments, information collected by the integrated sensors can similarly be used to determine features (e.g., objects, road features, and contextual features) for a given road segment. For example, a mobile phone placed inside of the vehicle162may include integrated sensors (e.g., a global positioning system (GPS), optical camera, compass, gyroscope(s), accelerometer(s), and inertial measurement unit(s)) which can be used to capture information and determine features for the road segment164. As mentioned, information collected by sensors can be used to identify features for a given road segment. In some embodiments, the detection of certain features can be used to determine scenarios occurring on or along a given road segment. 
For example, a scenario may be defined as a pre-defined combination of features that may involve, for example, one or more objects or object identifiers, one or more road features, one or more contextual features, or some combination thereof. As just one example, a scenario can describe a person riding a bicycle on a rainy day. As another example, a scenario can describe a person in a wheelchair who is crossing a road segment. However, scenarios need not be identified (or counted) based on pre-defined criteria (or pre-defined combinations of features) alone. For example, new scenarios and corresponding scenario types may be identified and classified even if those scenarios (or scenario types) do not have a sufficient (or threshold) amount of similarity to a known scenario (or scenario type), for example, as stored in a scenario information database. Thus, in some embodiments, a system can determine that a grouping of identified features are more similar to one another than any existing scenario (or scenario type) and may generate a new scenario and/or scenario type for those features. These identified features may include an identified object (or object type), geographic features, road features (e.g., intersection type, presence of intersection traffic control (e.g., stop sign, yield sign, etc.), presence of intersection pedestrian control, lane boundary type, type of lane use, lane width, roadway alignment, roadway classification, roadway features, roadway grade, roadway lane count, roadway parking, roadway zones, speed limit, roadway surface type, roadway traffic way type, and route-based intersection context information (e.g., U-turn, etc.)), to name some examples. Many different scenarios are possible. Scenarios determined for a road segment can be logged and associated with the road segment. In some embodiments, other information can be logged and associated with the road segment including, for example, a date, time, and any other relevant contextual information that may affect the occurrence of events associated with the road segment or use of a road segment (e.g., weather, day, time, holiday, weekend, weekday, etc.). Further, in some embodiments, scenarios logged for a given road segment can be used to categorize the road segment. For example, a road segment for which a set of scenarios were logged can be categorized as a particular road segment type based on a threshold level of similarity between scenarios logged for the road segment and scenario types associated with the particular road segment type. Once categorized, the road segment can be associated with information describing its corresponding road segment type, such as a risk profile. A risk profile determined for a road segment can be used to inform driving and navigation behavior of a vehicle in relation to the road segment (or a series of road segments along a route or multiple routes associated with the vehicle). In some instances, classifying road segments based on scenario exposure data can be cumbersome. For example, a vast amount of scenario exposure data may need to be collected for road segments before those road segments can be classified. Thus, in some embodiments, rather than relying on scenario exposure data, a system may determine road segment similarity based on a comparison of features (e.g., road features, etc.). More details relating to the present technology are provided below. 
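A minimal sketch of counting scenarios as pre-defined combinations of features is given below, under the assumption that a scenario is counted when a configurable fraction of its defining features is detected for a road segment; the scenario names and feature labels are purely illustrative.

```python
def match_scenarios(observed_features, scenario_definitions, min_overlap=1.0):
    """Return the names of scenario definitions whose required features are
    sufficiently covered by the features observed on a road segment."""
    observed = set(observed_features)
    matches = []
    for name, required in scenario_definitions.items():
        required = set(required)
        if required and len(required & observed) / len(required) >= min_overlap:
            matches.append(name)
    return matches

# Illustrative use (feature labels are hypothetical):
definitions = {
    "cyclist_in_rain": {"bicycle", "rain"},
    "wheelchair_crossing": {"wheelchair", "crosswalk"},
}
print(match_scenarios({"bicycle", "rain", "crosswalk"}, definitions))  # ['cyclist_in_rain']
```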
FIG.2illustrates an example system200including an example road segment classification module202, according to an embodiment of the present technology. As shown in the example ofFIG.2, the road segment classification module202can include a road segment similarity module204, a segment type classification module206, and an application module208. In some instances, the example system200can include at least one data store220. In some embodiments, some or all data stored in the data store220can be stored by a transportation management system960ofFIG.9. In some embodiments, some or all data stored in the data store220can be stored by the vehicle940ofFIG.9. The components (e.g., modules, elements, etc.) shown in this figure and all figures herein are exemplary only, and other implementations may include additional, fewer, integrated, or different components. Some components may not be shown so as not to obscure relevant details. In some embodiments, some or all of the functionality performed by the road segment classification module202and its sub-modules may be performed by one or more backend computing systems, such as the transportation management system960ofFIG.9. In some embodiments, some or all of the functionality performed by the road segment classification module202and its sub-modules may be performed by one or more computing systems implemented in a vehicle, such as the vehicle940ofFIG.9. As discussed in more detail in reference toFIG.7, in various embodiments, sensor data, raw or processed, can be processed by a vehicle or by an off-board computing system for scenario classification or feature identification. The road segment classification module202can be configured to communicate and operate with the at least one data store220, as shown in the example system200. The at least one data store220can be configured to store and maintain various types of data. For example, the data store220can store information describing road segment types and respective information associated with the road segment types. For example, the data store220can store information describing a road segment type and a set of scenario types that are associated with the road segment type. The data store220can also store information such as corresponding risk profiles for road segment types. In some embodiments, a risk profile for a road segment type can be based on respective probabilities of various scenario types occurring while a vehicle navigates a road segment that has been categorized as the road segment type. In some embodiments, a risk profile for a road segment type can be based on one or more scenario exposures for the road segment type and some unit measuring distance. As just one example, a risk profile for a road segment type can indicate in relation to a scenario exposure that the road segment type exposes a vehicle to five jaywalkers per mile. In some embodiments, a risk profile for a road segment type can be associated with information describing operations to be performed by vehicles for mitigating scenario types or scenario exposures when navigating road segments corresponding to the road segment type. For example, a risk profile for a road segment type may be associated with generation of instructions or commands that cause a vehicle to navigate autonomously or semi-autonomously in accordance with the risk profile, such as decreasing its speed to a pre-defined speed limit when navigating road segments that have been categorized as the road segment type. 
In an embodiment, the instructions or commands can be provided to a human driver of a manually-driven vehicle. In an embodiment, the instructions or commands can include control commands for one or more actuators associated with a vehicle as determined based on the risk profile. In an embodiment, risk profiles can be determined based on historical ride information, for example, as stored and managed by a transportation management system (e.g., the transportation management system960ofFIG.9). In an embodiment, historical ride information can be used to identify risks to human drivers. For example, scenario types can be cross-correlated with geographic regions within which accidents or claims have occurred in the past. In some embodiments, some or all data stored in the data store220can be stored by the transportation management system960ofFIG.9. In some embodiments, some or all data stored in the data store220can be stored by the vehicle940ofFIG.9. More details about information that can be stored in the data store220are provided below. The road segment similarity module204can be configured to determine respective similarities between a pair of road segments or between a road segment and a road segment type. For example, in some embodiments, a similarity between a classified road segment (i.e., a road segment associated with a risk profile and scenario exposure rates) and an unclassified road segment (i.e., a road segment that is not associated with a risk profile or scenario exposure rates) may be determined based on a comparison of their features (e.g., road features, etc.). When a threshold level of similarity between the classified road segment and the unclassified road segment is determined, various information associated with the classified road segment can also be associated with the unclassified road segment. For instance, the unclassified road segment can be associated with a risk profile that corresponds to the classified road segment. Similarly, in some embodiments, a similarity between a road segment type (i.e., a road segment type associated with a risk profile and scenario exposure rates) and an unclassified road segment (i.e., a road segment that is not associated with a risk profile or scenario exposure rates) may also be determined based on a comparison of their features (e.g., road features, etc.). When a threshold level of similarity between the road segment type and the unclassified road segment is determined, various information (e.g., scenario information such as scenarios, scenario exposure rates) associated with the road segment type can also be associated with the unclassified road segment. For instance, the unclassified road segment can be associated with a risk profile that corresponds to the road segment type. More details regarding the road segment similarity module204will be provided below with reference toFIG.3A. The segment type classification module206can be configured to categorize (or classify) road segments into road segment types. For example, in some embodiments, the segment type classification module206can identify a road segment being navigated by a vehicle. The segment type classification module206can also determine that the identified road segment corresponds to some road segment type. When categorized as the road segment type, the identified road segment can also be associated with various information describing its corresponding road segment type. 
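The per-road-segment-type record kept in the data store, and its transfer to a sufficiently similar unclassified segment, might be represented as in the following sketch; the field names, the similarity threshold, and the 20 mph figure are illustrative assumptions, not the data store's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class RiskProfile:
    """Scenario exposure rates per mile plus optional mitigation directives for one
    road segment type (or one classified road segment)."""
    exposures_per_mile: dict = field(default_factory=dict)   # e.g. {"jaywalker": 5.0}
    mitigations: dict = field(default_factory=dict)          # e.g. {"speed_limit_mph": 20}

def transfer_profile(classified_profile, similarity, threshold=0.8):
    """Associate the classified segment's risk profile with an unclassified segment
    once a threshold level of similarity has been determined."""
    return classified_profile if similarity >= threshold else None

school_zone = RiskProfile(exposures_per_mile={"jaywalker": 5.0},
                          mitigations={"speed_limit_mph": 20})
```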
For instance, the identified road segment can be associated with a risk profile for the road segment type. Such associated information can be used by a vehicle to gain various insights into the identified road segment that would not otherwise be readily available to the vehicle. More details regarding the segment type classification module206will be provided below with reference toFIG.3B. The application module208can be configured to use information determined by the road segment similarity module204and the segment type classification module206for various applications. In some embodiments, such information can be used to generate a value of an aggregate risk for a fleet of vehicles operating in a geographic region (e.g., a city, county, zip code, state, country, or some other defined geographic region). In an embodiment, the aggregate risk can be determined based on historical ride information, for example, as stored and managed by a transportation management system (e.g., the transportation management system960ofFIG.9). For example, in some embodiments, a value corresponding to an aggregate risk for a fleet of vehicles may be determined based on a combination (e.g., product) of (1) a value relating to a fleet exposure to scenario type, (2) a value relating to an efficacy of self-driving system in view of scenario type, and (3) a value relating to a severity of adverse outcome. In this regard, the aggregate risk for a fleet of vehicles operating in a geographic region can be based on individual assessments of risk, or risk profiles, associated with a simulation of how a vehicle performs on road segments in the geographic region and their respective scenario exposure rates. In some embodiments, the value relating to a fleet exposure to scenario type can be determined based on an exposure of the fleet to the scenario type while navigating road segments in the geographic region for which the value of the aggregate risk is being determined. For example, the geographic region may include a first road segment type and a second road segment type. In this example, a vehicle traveling on the first road segment type may be exposed to five jaywalkers per mile while a vehicle traveling on the second road segment type may be exposed to one jaywalker per mile. Here, if the fleet of vehicles drives 100 miles on the first road segment type, the fleet can be expected to encounter 500 instances jaywalking. Similarly, if the fleet drives 100 miles of the second road segment type, the fleet can be expected to encounter 100 instances of jaywalking. Using this approach, an aggregate scenario exposure of the fleet to various scenario types can be determined. In some embodiments, the value relating to an efficacy of an autonomously, semi-autonomously, or manually driven vehicle (or system) in view of scenario type measures a probability of a fleet vehicle experiencing a traffic collision while navigating the geographic region for which the value of the aggregate risk is being determined. In some embodiments, the value relating to efficacy can be determined for the geographic region based on sensor data logged by vehicles while navigating the geographic region. In such embodiments, simulated behavior of vehicles can be evaluated with respect to real-world scenarios that were encountered by vehicles while navigating the geographic region. Other approaches for measuring efficacy are possible. For example, in some embodiments, real-world sensor data for certain scenarios may not be available. 
In such instances, the value relating to efficacy can be determined by structuring scenarios (or tests) at a test facility to log corresponding sensor data and then evaluating a simulation of vehicles against this sensor data. In some instances, it may not be feasible to structure scenarios (or tests) for purposes of measuring efficacy. Thus, in another example, the value relating to efficacy can be determined by programmatically generating scenario instances in a simulated world and then evaluating a simulation of vehicles against the programmatically generated scenario instances. In some embodiments, the value relating to a severity of adverse outcome can be determined based on simulated collisions involving vehicles. These simulated collisions can be associated with collision parameters that measure human injury or property damage resulting from the simulated collisions. In some embodiments, the aggregate risk for the fleet of vehicles for the geographic region can be used to determine when to deploy the fleet of vehicles within the geographic region. In another example, the application module208can use information determined by the segment type classification module206to determine similarities between geographic regions (e.g., cities, counties, zip codes, states, countries, or some other defined geographic region). For example, assume a first geographic region (e.g., a first city) and a second geographic region (e.g., a second city) have been determined to be similar based on their respective types of road segments. Assume further that certain information has been determined for the first geographic region. The certain information can include, for example, a collective risk profile associated with the types of road segments of the first geographic region, an aggregate scenario exposure associated with the types of road segments of the first geographic region, or a value of an aggregate risk for a fleet of vehicles operating in the first geographic region. In this example, based on determined similarity between the first geographic region and the second geographic region, the collective risk profile, the aggregate scenario exposure, and the aggregate fleet risk corresponding to the first geographic region can be applied to the second geographic region. In this manner, substantial savings in time and computing resources can be achieved in generating vital information about the second geographic region based on the determined geographic similarity. FIG.3Aillustrates an example road segment similarity module302, according to an embodiment of the present technology. In some embodiments, the road segment similarity module204ofFIG.2can be implemented with the road segment similarity module302. The road segment similarity module302can be configured to determine similarities between road segments. For example, in some embodiments, a similarity between a classified road segment and an unclassified road segment may be determined based on a comparison of their features (e.g., road features, etc.). When a threshold level of similarity between the classified road segment and the unclassified road segment is determined, various information associated with the classified road segment can also be associated with the unclassified road segment. For instance, the unclassified road segment can be associated with a risk profile that corresponds to the classified road segment. 
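Putting the three values together, one hedged reading of the aggregate-risk combination described above is a per-scenario-type product of fleet exposure, collision probability, and severity, summed over scenario types; the summation over types and the exact form of each factor are assumptions, since the text only describes the combination (e.g., product) itself.

```python
def fleet_exposure(miles_by_segment_type, rate_per_mile_by_segment_type):
    """E.g., 100 miles on a segment type with 5 jaywalkers per mile contributes
    500 expected encounters for that scenario type."""
    return sum(miles * rate_per_mile_by_segment_type.get(segment_type, 0.0)
               for segment_type, miles in miles_by_segment_type.items())

def aggregate_fleet_risk(exposure_by_scenario, collision_probability_by_scenario,
                         severity_by_scenario):
    """Product of exposure, collision probability, and severity for each scenario
    type, summed to a single value for the geographic region."""
    return sum(exposure_by_scenario[s]
               * collision_probability_by_scenario.get(s, 0.0)
               * severity_by_scenario.get(s, 0.0)
               for s in exposure_by_scenario)
```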
Similarly, the road segment similarity module302can be configured to determine similarities between road segments and road segment types. For example, in some embodiments, a similarity between a road segment type and an unclassified road segment may be determined based on a comparison of their features (e.g., road features, etc.). When a threshold level of similarity between the road segment type and the unclassified road segment is determined, various information associated with the road segment type can also be associated with the unclassified road segment. As shown in the example ofFIG.3A, the road segment similarity module302can include an information database module304, a segment similarity module306, and an information mapping module308. The information database module304can be configured to access and manage a scenario information database. For example, the scenario information database may be accessible through a data store, such as the data store220ofFIG.2. In some embodiments, the scenario information database may be generated as part of a first phase in a multi-phase process. In some embodiments, the scenario information database can include (i) real world sensor data and features corresponding to road segments (e.g., highly sampled, classified road segments) in geographic areas comprising a variety of different road segment types, (ii) scenario classification and identification information based on collected real world sensor data for the variety of different road segment types, and (iii) risk profiles associated with the different classified scenarios and scenario types. The sensor data that constitutes a foundation for the scenario information database can be acquired and maintained in the first phase of the multi-phase process through, for example, sampling via sensors on vehicles that have driven along the road segments described by the data in the scenario information database. The segment similarity module306can be configured to determine respective similarities between classified (or highly sampled) road segments and unclassified (or less frequently sampled) road segments. The segment similarity module306can determine a threshold level of similarity or matching between a classified road segment and an unclassified road segment. Satisfaction of the threshold level of similarity between road segments can result in a determination of similarity between the road segments. Similarly, the segment similarity module306can determine a threshold level of similarity or matching between a road segment type and an unclassified road segment. Satisfaction of the threshold level of similarity between the road segment type and the unclassified road segment can result in a determination of similarity between the road segment type and the road segment. In some embodiments, similarities between road segments (or between a road segment and a road segment type) can be determined as part of a second phase in the multi-phase process. In such embodiments, information (e.g., features, scenarios, contextual data, etc.) stored in the scenario information database can be used to determine similarities between road segments. For example, the segment similarity module306may determine a similarity between a first road segment and a second road segment based on a comparison of their features (e.g., road features, etc.). 
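As an example and not by way of limitation, the following illustrative sketch shows one possible way such a comparison of features could be carried out by representing features as vectors, identifying the most similar classified road segments, and inheriting their scenario exposure rates. The database layout, feature encoding, function names, and numeric values are hypothetical and do not describe any particular implementation of the present technology.

import math

# Illustrative sketch only: an unclassified road segment is compared against classified
# road segments (a small in-memory mapping stands in for a scenario information database)
# and inherits exposure rates from the most similar classified segments.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def most_similar_segments(unclassified_features, database, top_k=3):
    """database maps a segment identifier to (feature vector, scenario exposure rates)."""
    ranked = sorted(database.items(),
                    key=lambda item: cosine_similarity(unclassified_features, item[1][0]),
                    reverse=True)
    return ranked[:top_k]

def inherited_exposure_rates(unclassified_features, database, top_k=3):
    """Averages the exposure rates of the most similar classified road segments."""
    rates = {}
    for _, (_, exposures) in most_similar_segments(unclassified_features, database, top_k):
        for scenario, rate in exposures.items():
            rates.setdefault(scenario, []).append(rate)
    return {scenario: sum(values) / len(values) for scenario, values in rates.items()}

# Hypothetical usage with two classified segments and one unclassified segment.
database = {
    "classified_a": ([1, 0, 2, 4], {"jaywalkers_per_mile": 15.0}),
    "classified_b": ([1, 1, 2, 3], {"jaywalkers_per_mile": 5.0}),
}
print(inherited_exposure_rates([1, 0, 2, 5], database, top_k=1))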
In some embodiments, road features may include sampled (or collected) information describing objects associated with a given road segment as well as any permanent and ephemeral features associated with the road segment. Other examples of road features include geographic attributes (e.g., a shape or path of a road segment—straight, curved, etc.), metadata associated with the road segment (e.g., map features, zoning, surrounding businesses, census tracts, etc.), and detailed sensor data related to the configuration of the road segment (e.g., lane types, lane widths, existence of stop signs, etc.). Such road features may be considered permanent or semi-permanent features associated with the road segment. The segment similarity module306may use other types of features when determining road segment similarity. For example, in some embodiments, the segment similarity module306may use map data that describes road configurations (e.g., lane types, lane widths, existence of stop signs, etc.). In some embodiments, the segment similarity module306can use visual data describing road segments (e.g., street view images, point clouds, etc.). In some embodiments, the segment similarity module306may also use sensor and scenario information that has been collected from road segments, if any, as an additional consideration when determining road segment similarity. For example, if a bus is determined to be present fifty percent of the time that a vehicle passes a road segment, that determination (or feature) may be used in addition to other features when determining road segment similarity. In some embodiments, the segment similarity module306can compare features associated with an unclassified road segment to features associated with road segments (or road segment types) in the scenario information database of known and classified road segments to identify a set of most similar road segments (or road segment types) for the unclassified road segment. Once the set of most similar road segments (or road segment types) for the unclassified road segment is determined, the scenario exposure rates for the unclassified road segment can be determined using scenario exposure rates that are known to be associated with the set of most similar road segments. As a result, many road segments can be evaluated for similarity and scenario exposure rates without having to individually and extensively sample (or drive) those road segments. In some embodiments, once a threshold level of similarity or matching between a classified road segment and an unclassified road segment is determined, the information mapping module308can determine information associated with the classified road segment. The information mapping module308can then associate the determined information with the unclassified road segment. For example, in some embodiments, the classified road segment may be associated with a risk profile that is based on scenario exposure rates for the classified road segment. In this example, the information mapping module308can associate the risk profile with the unclassified road segment based upon the threshold similarity determination between the classified road segment and the unclassified road segment. FIG.3Billustrates an example segment type classification module352, according to an embodiment of the present technology. In some embodiments, the segment type classification module206ofFIG.2can be implemented with the segment type classification module352. 
As mentioned, the segment type classification module352can be configured to evaluate and categorize road segments as road segment types. In some embodiments, the segment type classification module352can categorize a road segment as a road segment type based on a threshold level of similarity between scenarios determined for the road segment and scenario types associated with the road segment type. As shown in the example ofFIG.3B, the segment type classification module352can include a scenario prediction module354, a segment similarity module356, and a segment mapping module358. The scenario prediction module354can be configured to determine (or predict) scenarios for a road segment being categorized. In some embodiments, scenarios for the road segment can be determined based on, for example, one or more objects detected by a vehicle traveling on the road segment, one or more road features describing the road segment, one or more contextual features corresponding to the road segment, or a combination thereof. More details regarding the scenario prediction module354will be provided below with reference toFIG.4. The segment similarity module356can be configured to determine a road segment type that is most similar to a road segment being categorized. In some embodiments, a road segment can be categorized as a road segment type when a set of scenarios determined (or predicted) for the road segment have a threshold level of similarity to scenario types associated with the road segment type. More details regarding the segment similarity module356will be provided below with reference toFIG.5. The segment mapping module358can be configured to categorize (or map) a road segment as a given road segment type. For example, a road segment that is determined to correspond to a given road segment type can be associated with various information that is relevant to that road segment type. In some embodiments, this associated information can include, for example, a risk profile corresponding to the road segment type and instructions for operating a vehicle when navigating road segments that correspond to the road segment type. FIG.4illustrates an example scenario prediction module402, according to an embodiment of the present technology. In some embodiments, the scenario prediction module354ofFIG.3Bcan be implemented with the scenario prediction module402. As mentioned, the scenario prediction module402can be configured to determine (or predict) scenarios for a given road segment. In some embodiments, a scenario may be determined (or predicted) for a road segment based on a combination of specific factors involving the presence of, for example, one or more objects detected on the road segment, one or more road features corresponding to the road segment, and one or more contextual features describing the road segment. In some embodiments, scenarios predicted for a road segment can be used to determine whether the road segment can be categorized as some pre-defined road segment type. As shown in the example ofFIG.4, the scenario prediction module402can include a sensor data module404, a feature determination module406, a scenario determination module408, and a scenario mapping module410. The scenario prediction module402can be configured to communicate and operate with a data store, such as the data store220. For example, the data store220can store sensor data collected by vehicles. In some embodiments, the sensor data can be labeled based on a geographic location from which the sensor data was collected. 
For example, sensor data collected by sensors in a vehicle while navigating a given road segment can be associated with that road segment. The data store220can also store pre-defined scenario data that can be used to recognize and identify scenarios. For instance, a given scenario can be associated with a set of features (e.g., objects or object identifiers, road features, contextual features) which, when detected on a road segment, can be used (in real time or near real time) by a vehicle to recognize and log the scenario in association with the road segment. In some embodiments, scenarios may be organized in a multi-level or tiered taxonomy reflecting various degrees of generality and specificity. For example, the data store220may store the taxonomy and information describing pre-defined scenario types, scenarios (or scenario instances) included within those scenario types as well as related attributes and attribute values, and respective features that can be used to identify a given scenario. For example, a scenario type corresponding to “pedestrian actions” can include scenarios such as a child running across a road and people jogging along a road, to name some examples. In these examples, the scenario “child running across the road” may be associated with features that can be used to recognize the scenario, such as the presence of an object corresponding to a child, a speed at which the object is traveling (e.g., 2-4 miles per hour), and a direction in which the object is traveling (e.g., a path substantially orthogonal to the path of the road). Many different scenarios based on different features are possible. The sensor data module404can be configured to obtain sensor data corresponding to a road segment to be categorized. For example, the sensor data may include data captured by one or more sensors including optical cameras, LiDAR, radar, infrared cameras, and ultrasound equipment, to name some examples. The sensor data module404can obtain such sensor data, for example, from the data store220or directly from sensors associated with a vehicle in real-time. In some instances, the obtained sensor data may have been collected by a driver-operated vehicle included in a fleet of vehicles that offer ridesharing services. For example, in some embodiments, the driver-operated vehicle may include a computing device (e.g., mobile phone) that includes one or more integrated sensors (e.g., a global positioning system (GPS), compass, gyroscope(s), accelerometer(s), and inertial measurement unit(s)) that can be used to capture information describing a given road segment. In some embodiments, the sensor data module404can determine contextual information for sensor data such as a respective calendar date, day of week, and time of day during which the sensor data was captured. Such contextual information may be obtained from an internal clock of a sensor or a computing device, one or more external computing systems (e.g., Network Time Protocol (NTP) servers), or GPS data, to name some examples. More details describing the types of sensor data that may be obtained by the sensor data module404are provided below in connection with an array of sensors944ofFIG.9. The feature determination module406can be configured to determine features that correspond to a road segment being categorized. Such features can include, for example, objects detected on the road segment, road features corresponding to the road segment, and contextual features describing the road segment. 
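As an example and not by way of limitation, the following illustrative sketch shows one possible way the objects, road features, and contextual features described above could be represented for a road segment being categorized. The class names, field names, and example values are hypothetical and do not describe any particular implementation of the present technology.

from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch only: a hypothetical container for the three kinds of features the
# feature determination module is described as producing for a road segment.

@dataclass
class DetectedObject:
    label: str            # e.g., "pedestrian"
    distance_m: float     # distance from the sensing vehicle
    velocity_mph: float
    heading_deg: float

@dataclass
class RoadSegmentFeatures:
    objects: List[DetectedObject] = field(default_factory=list)
    road_features: Dict[str, object] = field(default_factory=dict)
    contextual_features: Dict[str, object] = field(default_factory=dict)

# Hypothetical example values.
features = RoadSegmentFeatures(
    objects=[DetectedObject(label="pedestrian", distance_m=12.0, velocity_mph=3.0, heading_deg=90.0)],
    road_features={"roadway_type": "local street", "crosswalk": True, "speed_limit_mph": 25},
    contextual_features={"weather": "clear", "day_of_week": "Monday", "time_of_day": "08:30"},
)
print(features.road_features)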
For example, in some embodiments, the feature determination module406can analyze sensor data obtained by the sensor data module404to identify objects detected on or along the road segment being categorized. When identifying features such as objects, the feature determination module406can apply generally known object detection and recognition techniques. The identified objects can include, for example, pedestrians, vehicles, lane markings, curbs, trees, animals, debris, etc. In some embodiments, the feature determination module406can determine respective attributes for each of the identified objects. For example, upon detecting a pedestrian, the feature determination module406can determine attributes related to the pedestrian. In this example, the attributes can include a distance between the pedestrian and a vehicle that is sensing (or detecting) the pedestrian, a velocity at which the pedestrian is traveling, and a direction in which the pedestrian is traveling, to name some examples. In some embodiments, the attributes can also describe the vehicle that is sensing (or detecting) the pedestrian including, for example, a velocity at which the vehicle is traveling, a direction in which the vehicle is traveling, and a lane within which the vehicle is traveling. The feature determination module406can also determine road features corresponding to a road segment being categorized. As mentioned, these road features can be used to determine (or predict) scenarios for the road segment. In some embodiments, such road features may be determined from sensor data obtained from, for example, the sensor data module404, location data (e.g., labeled map data, GPS data, etc.), or a combination thereof. For example, in some embodiments, the feature determination module406can determine road features such as road segment length (e.g., a start point and an end point that defines a road segment), road segment quality (e.g., presence of potholes, whether the road segment is paved or unpaved, etc.), roadway type (e.g., freeway, highway, expressway, local street, rural road, etc.), information describing traffic lanes in the road segment (e.g., speed limits, number of available lanes, number of closed lanes, locations of any intersections, merging lanes, traffic signals, street signs, curbs, etc.), the presence of any bike lanes, and the presence of any crosswalks, to name some examples. In some embodiments, the feature determination module406can also determine whether the road segment is within a specific zone (e.g., residential zone, school zone, business zone, mixed-use zone, high density zone, rural zone, etc.), for example, based on detected street signs and location data. The feature determination module406can also determine contextual features that correspond to a road segment being categorized. As mentioned, the contextual features can be used to determine (or predict) scenarios for the road segment. In some embodiments, such contextual features may be determined from sensor data obtained from, for example, the sensor data module404, external data sources (e.g., weather data, etc.), or a combination thereof. For example, in some embodiments, the feature determination module406can analyze sensor data (e.g., images, videos, LiDAR data, radar data, etc.) corresponding to a road segment being categorized. In such embodiments, the feature determination module406can determine contextual features based on the sensor data. 
For example, in some embodiments, the feature determination module406can determine a respective calendar date, day of week, and time of day during which the sensor data was captured. In some embodiments, the feature determination module406can determine weather conditions (e.g., clear skies, overcast, fog, rain, sleet, snow, etc.) encountered while navigating the road segment based on the sensor data. The scenario determination module408can be configured to determine (or predict) scenarios for a road segment being categorized. For example, the scenario determination module408can determine (or predict) scenarios for a road segment based on features determined for the road segment by the feature determination module406. In some embodiments, the scenario determination module408determines (or predicts) scenarios based on pre-defined rules. In such embodiments, the scenario determination module408can determine whether features associated with a road segment being categorized match pre-defined features associated with a given scenario. In some embodiments, a road segment can be associated with a scenario when all of the features associated with the road segment match features associated with the scenario. For example, assume a first scenario for “School Bus Stopping” is associated with features of a school bus with active hazard lights along with the presence of a stop sign. Assume further that sensor data for a road segment indicates the presence of features corresponding to a school bus with its hazard lights in use and the presence of a stop sign. In this example, the scenario determination module408may determine (or predict) that the presence of the school bus with active hazard lights and the presence of the stop sign match the features associated with the first scenario. In some instances, a scenario can be associated with a road segment even if all features associated with the road segment do not exactly match all features associated with the scenario. For example, in some embodiments, a road segment can be associated with a scenario when a threshold level of similarity is determined between features associated with the road segment and features associated with the scenario. For example, when a threshold number of features associated with a road segment and features associated with a scenario match, the road segment can be associated with the scenario. Many variations are possible. Other approaches for determining (or predicting) scenarios for road segments are contemplated by the present technology. For example, in some embodiments, a machine learning model can be trained to predict scenarios for a road segment based on features determined for the road segment. As another example, in various embodiments, features determined for a road segment can be represented as a vector. Similarly, features associated with a scenario can also be represented as a vector. In such embodiments, the road segment can be associated with the scenario based on satisfaction of a threshold level of similarity (e.g., cosine similarity) between their vector representations. The scenario mapping module410can associate road segments with their respective scenarios. Associations between a road segment and its respective scenarios can be determined (or predicted) by the scenario determination module408, as discussed. In some embodiments, information describing associations between a road segment and its corresponding one or more scenarios can be stored, for example, in the data store220. 
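As an example and not by way of limitation, the following illustrative sketch shows one possible way the rule-based matching described above could be expressed, using the "School Bus Stopping" example. The feature labels, function names, and threshold values are hypothetical and do not describe any particular implementation of the present technology.

# Illustrative sketch only: features observed on a road segment are matched against the
# features associated with a pre-defined scenario, either exactly or by a threshold number
# of matching features.

def matches_exactly(segment_features: set, scenario_features: set) -> bool:
    """The scenario applies when all of its features are present on the road segment."""
    return scenario_features.issubset(segment_features)

def matches_threshold(segment_features: set, scenario_features: set, min_matches: int) -> bool:
    """The scenario applies when at least min_matches of its features are present."""
    return len(segment_features & scenario_features) >= min_matches

# Example from the text: a "School Bus Stopping" scenario associated with a school bus
# having active hazard lights and the presence of a stop sign.
scenario_features = {"school_bus", "hazard_lights_active", "stop_sign"}
observed_features = {"school_bus", "hazard_lights_active", "stop_sign", "crossing_guard"}
print(matches_exactly(observed_features, scenario_features))        # True
print(matches_threshold(observed_features, scenario_features, 2))   # True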
FIG.5illustrates an example segment similarity module502, according to an embodiment of the present technology. In some embodiments, the segment similarity module356ofFIG.3Bcan be implemented with the segment similarity module502. The segment similarity module502can be configured to determine one or more road segment types that have a threshold level of similarity to a road segment being categorized. In some embodiments, the road segment can be categorized as a road segment type that is most similar to the road segment, for example, as determined based on a comparison of scenarios determined for the road segment and scenarios associated with the road segment type. Such scenario comparisons may be performed using various approaches. As shown in the example ofFIG.5, the segment similarity module502can include a rules module504and a machine learning module506. The rules module504can be configured to determine one or more road segment types that have a threshold level of similarity to a road segment being categorized based on pre-defined rules. In some embodiments, the road segment may be determined to have a threshold level of similarity to a road segment type when at least one scenario determined for the road segment matches at least one scenario included in a scenario type associated with the road segment type or otherwise falls within the scope of the scenario type. As used herein, a “scenario type” can include instances of corresponding scenarios, any attribute types related to those scenarios, and any attribute values related to those attribute types. In other embodiments, a threshold level of similarity determination may be made when scenarios determined for the road segment match a threshold number of scenarios included in a scenario type associated with the road segment type. For example, a road segment type may be associated with a first scenario type that includes a first scenario (e.g., a person bicycling across the road segment), a second scenario (e.g., a jaywalker crossing the road segment), and a third scenario (e.g., animals crossing the road segment). A road segment being categorized may be associated with the first scenario (e.g., a person bicycling across the road segment) and the second scenario (e.g., a jaywalker crossing the road segment) but not the third scenario (e.g., animals crossing the road segment). In this example, if a threshold level of similarity requires a match of at least two scenarios as between a road segment being categorized and a road segment type, a threshold level of similarity between the road segment and road segment type may be determined to exist despite the road segment not being associated with the third scenario (e.g., animals crossing the road segment). Many variations are possible. For example, in some embodiments, a road segment type for a road segment may be determined when a scenario exposure determined for the road segment matches a scenario exposure associated with the road segment type with a threshold level of similarity. For example, assume a road segment has a scenario exposure of 15 jaywalkers per mile and a road segment type has a scenario exposure of 14 jaywalkers per mile. In this example, the road segment can be categorized as the road segment type assuming a threshold level of similarity is satisfied between the scenario exposure of the road segment and the scenario exposure of the road segment type. 
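As an example and not by way of limitation, the following illustrative sketch shows one possible way the checks described above could be expressed, using the bicyclist and jaywalker scenario example and the 15 versus 14 jaywalkers per mile example. The function names, tolerance, and threshold values are hypothetical and do not describe any particular implementation of the present technology.

# Illustrative sketch only: a road segment is matched to a road segment type either by a
# threshold number of overlapping scenarios or by scenario exposure rates that agree within
# a hypothetical tolerance.

def type_match_by_scenarios(segment_scenarios: set, type_scenarios: set, min_matches: int = 2) -> bool:
    """Matches when at least min_matches scenarios of the road segment fall within the
    scenario type(s) associated with the road segment type."""
    return len(segment_scenarios & type_scenarios) >= min_matches

def type_match_by_exposure(segment_rate: float, type_rate: float, tolerance: float = 0.1) -> bool:
    """Matches when the exposure rates agree within a relative tolerance."""
    return abs(segment_rate - type_rate) <= tolerance * max(type_rate, 1e-9)

# Example from the text: the bicyclist and jaywalker scenarios match while the animal
# crossing scenario does not, which still satisfies a two-scenario threshold.
print(type_match_by_scenarios({"bicyclist_crossing", "jaywalker_crossing"},
                              {"bicyclist_crossing", "jaywalker_crossing", "animal_crossing"}))  # True
# Example from the text: 15 jaywalkers per mile versus 14 jaywalkers per mile.
print(type_match_by_exposure(15.0, 14.0))  # True within a 10 percent tolerance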
In some embodiments, a road segment type for a road segment may be determined when an aggregate scenario exposure determined for the road segment matches an aggregate scenario exposure associated with the road segment type with a threshold level of similarity. For example, assume a road segment has a first scenario exposure of 15 jaywalkers per mile and a second scenario exposure of 9 roaming animals per mile. Further, assume a road segment type has a first scenario exposure of 14 jaywalkers per mile and a second scenario exposure of 10 roaming animals per mile. In this example, an aggregate scenario exposure for the road segment can be determined based on an aggregation or combination of the first scenario exposure and the second scenario exposure associated with the road segment. Similarly, an aggregate scenario exposure for the road segment type can be determined based on an aggregation or combination of the first scenario exposure and the second scenario exposure associated with the road segment type. In this example, the road segment can be categorized as the road segment type assuming a threshold level of similarity is satisfied between the aggregate scenario exposure of the road segment and the aggregate scenario exposure of the road segment type. Again, many variations are possible. The machine learning module506can be configured to determine road segment types that are similar to a road segment being categorized based on machine learning techniques. For example, in some embodiments, a machine learning model can be trained to predict road segment types that are similar to a given road segment to be categorized based on scenarios associated with the road segment. In such embodiments, scenarios associated with the road segment can be provided as inputs to the machine learning model. The machine learning model can evaluate these inputs to determine (or predict) a road segment type that best represents the road segment. Many variations are possible. FIG.6Aillustrates example scenario types604determined (or predicted) for a road segment602, according to an embodiment of the present technology. The example scenario types604may be determined by the road segment classification module202ofFIG.2. In some embodiments, scenario types can be organized as a multi-level or tiered taxonomy reflecting varying degrees of generality and specificity. An example taxonomy may include a set of pre-defined scenario types and respective scenarios classified within each of the scenario types. As shown, the road segment602is associated with a “Vehicle Action” scenario type606, a “Pedestrian Action” scenario type610, and a “Pedestrian Type” scenario type614. The “Vehicle Action” scenario type606includes a scenario608corresponding to “School Bus Stopping”. In this example, the scenario608is associated with a set of features (“School Bus”, “Hazard Lights Active”, “Crossing Guard”) that can be used to recognize the scenario608. In the example, the road segment602is also associated with the “Pedestrian Action” scenario type610which includes a scenario612corresponding to “Pedestrian Crossing Roadway Properly”. The scenario612is associated with a set of features (“Pedestrian in crosswalk”, “Crosswalk”, “Crosswalk Street Sign”) that can be used to recognize the scenario612. In the example, the road segment602is also associated with the “Pedestrian Type” scenario type614which includes a scenario616corresponding to “Pedestrian with Baby Stroller”. 
The scenario616is associated with a set of features (“Pedestrian”, “Baby Stroller”) that can be used to recognize the scenario616. For ease of explanation,FIG.6Ashows example portions of the taxonomy organizing scenario types and respective scenarios. However, the taxonomy of the present technology can contain a large number (e.g., hundreds, thousands, etc.) of different scenario types and respective scenarios (or attribute types with attribute values) identified by a wide variety of features. Further, the taxonomy can designate scenario types and scenarios at varying levels of abstraction. As just one example, the example scenario types604illustrated inFIG.6Aalternatively can be characterized as scenarios or attribute types of a category corresponding to a broader scenario type, such as an “Experience Scenarios” scenario type. Many variations are possible. FIG.6Billustrates example similarity mappings between various types of geographies, according to an embodiment of the present technology. The example similarity mappings may be determined by the road segment classification module202ofFIG.2. For example, in some embodiments, a road segment A654and a road segment B656can be determined to be similar based on their respective road segment type classifications. A road segment can be classified as a given road segment type based on scenarios determined for the road segment and scenario types associated with the road segment type, as described above. In some embodiments, similarities between different geographic locations or areas can be determined based on the types of road segments included within those geographic locations. For example, a city664may include road segments that correspond to a set of road segment types. A city666may also include road segments that correspond to a set of road segment types. In this example, a similarity between the city664and the city666can be determined based on an amount of overlap between road segment types included in the city664and road segment types included in the city666. For example, the city664may have a particular mix of different types of road segments (e.g., urban one way streets with stop lights, double wide roads with sporadic pedestrian traffic in the morning, an interstate with heavy semi-truck traffic, etc.) that may overlap with (or may be completely different from) a mix of different types of road segments included in the city666. In various embodiments, such comparisons can be used to determine similarities between other types or abstractions of geographic regions in general, such as zip codes, counties, states, or other defined geographic regions674,676. Further, when determined similarities between geographic regions satisfy an applicable threshold level of similarity, certain information known about some geographic regions can be assigned or applied to other geographic regions for which such information was not previously known, as described above in reference to the application module208ofFIG.2. FIG.7Aillustrates an example multi-phase process700for determining road segment similarity, according to an embodiment of the present technology. In a first phase, sensor data704collected by a vehicle702(e.g., the vehicle940ofFIG.9) for a road segment can be used to determine features706describing that road segment. In an embodiment, the sensor data704can be processed on-board the vehicle702for scenario classification or feature identification, or both. 
In an embodiment, the sensor data704can be processed on-board the vehicle702and the processed sensor data704can be sent to an off-board computing system (e.g., the transportation management system960ofFIG.9) for scenario classification or feature identification, or both. In an embodiment, the vehicle702can process the sensor data704on-board only for scenario classification while the off-board computing system determines feature identification. In an embodiment, the vehicle702can process the sensor data704on-board only for feature identification while the off-board computing system determines scenario classification. In an embodiment, both raw sensor data and processed sensor data can be used for purposes of scenario classification and feature identification. In an embodiment, the sensor data704can be provided to the off-board computing system for scenario classification or feature identification, or both. These features706can include myriad features (e.g., road features, contextual features, etc.) as described above. In some embodiments, the features706can be logged in a scenario information database708. In this regard, the features706can be associated with their corresponding road segment, and the features706, their corresponding road segment, and their association can be maintained in the scenario information database708. Other information can also be logged. For example, in some embodiments, the sensor data704and its association with the corresponding road segment can be logged in the scenario information database708. In some embodiments, any scenario classification and identification information determined for the road segment based on the sensor data704can also be logged in the scenario information database708. In various embodiments, the scenario information database708can store information describing associations between classified road segments and corresponding road feature data, scenarios data, contextual features data, and risk profiles, to name some examples. In various embodiments, the scenario information database708can continually be updated as new information is logged for road segments. For example, the scenario information database708may store information for a given road segment including, for example, sensor data collected by a first vehicle, features determined from the sensor data, and scenarios determined for the road segment based on such information. In this example, the scenario information database708can be updated to include additional information for the road segment as subsequently captured by one or more different vehicles. For example, the scenario information database708can be updated to include different sensor data captured by a second vehicle while driving the road segment, different features determined from the sensor data, and different scenarios determined for the road segment based on this different information. In some embodiments, sensor data and other related information (e.g., features, scenarios) as captured and determined by different vehicles can be aggregated, for example, to more accurately determine scenario exposure and risk for a given road segment. In a second phase, information stored in the scenario information database708can be used to determine road segment similarity in real time or near real time. For example, features716can be determined for an unclassified road segment712, for example, based on its corresponding sensor data714. 
In an embodiment, the sensor data714can optionally be acquired by a vehicle traveling along the unclassified road segment. In another embodiment, the sensor data714may be obtained from another source (e.g., another vehicle, a remote database, etc.). These features716can be used to determine one or more similar road segments718that satisfy a threshold similarity to the unclassified road segment712. In some embodiments, at block718, the similar road segments can be identified based on information that is stored and accessible from the scenario information database708. For example, in some embodiments, the features716for the unclassified road segment can be represented as a vector. Similarly, features stored in the scenario information database708for other road segments can also be represented as respective vectors. In such embodiments, the unclassified road segment and a classified road segment can be determined to be similar based on satisfaction of a threshold level of similarity (e.g., cosine similarity) between their vector representations. In some embodiments, once a similarity between the unclassified road segment and a classified road segment is determined, information associated with the classified road segment can be mapped to the unclassified road segment at block720. For example, a risk profile, which can be based on scenario exposure rates for the classified road segment, can be associated with the unclassified road segment. As stated, one or more of the steps, processing, functionalities, or modules of the present technology can be performed or implemented in the vehicle940or in the transportation management system960, or both. FIG.7Billustrates an example similarity determination750, according to an embodiment of the present technology. InFIG.7B, a determination is made that an unclassified road segment752has a threshold level of similarity to a classified road segment754. In some embodiments, this similarity determination can be made based on a comparison of features (e.g., road features, contextual features, map data, visual data, etc.) associated with the unclassified road segment752and the classified road segment754. In some embodiments, once a threshold level of similarity is determined, information associated with the classified road segment754can also be associated with the unclassified road segment752. In the example ofFIG.7B, a risk profile758for the classified road segment754can be associated with the unclassified road segment752. In some embodiments, the risk profile758can be based on scenario exposure rates756for the classified road segment754. In an embodiment, the scenario exposure rates756can describe relevant scenarios encountered on the road segment754, their characterization, and their frequency. Many variations are possible. FIG.8Aillustrates an example method800, according to an embodiment of the present technology. At block802, a road segment can be determined. At block804, a set of features associated with the road segment can be determined based at least in part on data captured by one or more sensors of a vehicle. At block806, a level of similarity between the road segment and each of a set of road segment types can be determined by comparing the set of features to features associated with each of the set of road segment types. At block808, the road segment can be classified as the road segment type based on the level of similarity. 
At block810, scenario information associated with the road segment can be determined based on the classified road segment type. Many variations to the example method are possible. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. FIG.8Billustrates an example method850, according to an embodiment of the present technology. At block852, a set of features associated with a road segment is determined based at least in part on data captured by one or more sensors of a vehicle. At block854, at least one scenario that is associated with the set of features is determined. At block856, the at least one scenario is associated with the road segment. At block858, the associated at least one scenario and road segment are maintained in a scenario information database. Many variations to the example method are possible. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. FIG.9illustrates an example block diagram of a transportation management environment for matching ride requestors with vehicles. In particular embodiments, the environment may include various computing entities, such as a user computing device930of a user901(e.g., a ride provider or requestor), a transportation management system960, a vehicle940, and one or more third-party systems970. The vehicle940can be autonomous, semi-autonomous, or manually drivable. The computing entities may be communicatively connected over any suitable network910. As an example and not by way of limitation, one or more portions of network910may include an ad hoc network, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of Public Switched Telephone Network (PSTN), a cellular network, or a combination of any of the above. In particular embodiments, any suitable network arrangement and protocol enabling the computing entities to communicate with each other may be used. AlthoughFIG.9illustrates a single user device930, a single transportation management system960, a single vehicle940, a plurality of third-party systems970, and a single network910, this disclosure contemplates any suitable number of each of these entities. As an example and not by way of limitation, the network environment may include multiple users901, user devices930, transportation management systems960, vehicles940, third-party systems970, and networks910. In some embodiments, some or all modules of the road segment classification module202may be implemented by one or more computing systems of the transportation management system960. In some embodiments, some or all modules of the road segment classification module202may be implemented by one or more computing systems in the vehicle940. The user device930, transportation management system960, vehicle940, and third-party system970may be communicatively connected or co-located with each other in whole or in part. These computing entities may communicate via different transmission technologies and network types. 
For example, the user device930and the vehicle940may communicate with each other via a cable or short-range wireless communication (e.g., Bluetooth, NFC, WI-FI, etc.), and together they may be connected to the Internet via a cellular network that is accessible to either one of the devices (e.g., the user device930may be a smartphone with LTE connection). The transportation management system960and third-party system970, on the other hand, may be connected to the Internet via their respective LAN/WLAN networks and Internet Service Providers (ISP).FIG.9illustrates transmission links950that connect user device930, vehicle940, transportation management system960, and third-party system970to communication network910. This disclosure contemplates any suitable transmission links950, including, e.g., wire connections (e.g., USB, Lightning, Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless connections (e.g., WI-FI, WiMAX, cellular, satellite, NFC, Bluetooth), optical connections (e.g., Synchronous Optical Networking (SONET), Synchronous Digital Hierarchy (SDH)), any other wireless communication technologies, and any combination thereof. In particular embodiments, one or more links950may connect to one or more networks910, which may include in part, e.g., ad-hoc network, the Intranet, extranet, VPN, LAN, WLAN, WAN, WWAN, MAN, PSTN, a cellular network, a satellite network, or any combination thereof. The computing entities need not necessarily use the same type of transmission link950. For example, the user device930may communicate with the transportation management system via a cellular network and the Internet, but communicate with the vehicle940via Bluetooth or a physical wire connection. In particular embodiments, the transportation management system960may fulfill ride requests for one or more users901by dispatching suitable vehicles. The transportation management system960may receive any number of ride requests from any number of ride requestors901. In particular embodiments, a ride request from a ride requestor901may include an identifier that identifies the ride requestor in the system960. The transportation management system960may use the identifier to access and store the ride requestor's901information, in accordance with the requestor's901privacy settings. The ride requestor's901information may be stored in one or more data stores (e.g., a relational database system) associated with and accessible to the transportation management system960. In particular embodiments, ride requestor information may include profile information about a particular ride requestor901. In particular embodiments, the ride requestor901may be associated with one or more categories or types, through which the ride requestor901may be associated with aggregate information about certain ride requestors of those categories or types. Ride information may include, for example, preferred pick-up and drop-off locations, driving preferences (e.g., safety comfort level, preferred speed, rates of acceleration/deceleration, safety distance from other vehicles when travelling at various speeds, route, etc.), entertainment preferences and settings (e.g., preferred music genre or playlist, audio volume, display brightness, etc.), temperature settings, whether conversation with the driver is welcomed, frequent destinations, historical riding patterns (e.g., time of day of travel, starting and ending locations, etc.), preferred language, age, gender, or any other suitable information. 
In particular embodiments, the transportation management system960may classify a user901based on known information about the user901(e.g., using machine-learning classifiers), and use the classification to retrieve relevant aggregate information associated with that class. For example, the system960may classify a user901as a young adult and retrieve relevant aggregate information associated with young adults, such as the type of music generally preferred by young adults. Transportation management system960may also store and access ride information. Ride information may include locations related to the ride, traffic data, route options, optimal pick-up or drop-off locations for the ride, or any other suitable information associated with a ride. As an example and not by way of limitation, when the transportation management system960receives a request to travel from San Francisco International Airport (SFO) to Palo Alto, California, the system960may access or generate any relevant ride information for this particular ride request. The ride information may include, for example, preferred pick-up locations at SFO; alternate pick-up locations in the event that a pick-up location is incompatible with the ride requestor (e.g., the ride requestor may be disabled and cannot access the pick-up location) or the pick-up location is otherwise unavailable due to construction, traffic congestion, changes in pick-up/drop-off rules, or any other reason; one or more routes to navigate from SFO to Palo Alto; preferred off-ramps for a type of user; or any other suitable information associated with the ride. In particular embodiments, portions of the ride information may be based on historical data associated with historical rides facilitated by the system960. For example, historical data may include aggregate information generated based on past ride information, which may include any ride information described herein and telemetry data collected by sensors in vehicles and user devices. Historical data may be associated with a particular user (e.g., that particular user's preferences, common routes, etc.), a category/class of users (e.g., based on demographics), and all users of the system960. For example, historical data specific to a single user may include information about past rides that particular user has taken, including the locations at which the user is picked up and dropped off, music the user likes to listen to, traffic information associated with the rides, time of the day the user most often rides, and any other suitable information specific to the user. As another example, historical data associated with a category/class of users may include, e.g., common or popular ride preferences of users in that category/class, such as teenagers preferring pop music or ride requestors who frequently commute to the financial district preferring to listen to the news, etc. As yet another example, historical data associated with all users may include general usage trends, such as traffic and ride patterns. Using historical data, the system960in particular embodiments may predict and provide ride suggestions in response to a ride request.
In particular embodiments, the system960may use machine-learning, such as neural networks, regression algorithms, instance-based algorithms (e.g., k-Nearest Neighbor), decision-tree algorithms, Bayesian algorithms, clustering algorithms, association-rule-learning algorithms, deep-learning algorithms, dimensionality-reduction algorithms, ensemble algorithms, and any other suitable machine-learning algorithms known to persons of ordinary skill in the art. The machine-learning models may be trained using any suitable training algorithm, including supervised learning based on labeled training data, unsupervised learning based on unlabeled training data, and semi-supervised learning based on a mixture of labeled and unlabeled training data. In particular embodiments, transportation management system960may include one or more server computers. Each server may be a unitary server or a distributed server spanning multiple computers or multiple datacenters. The servers may be of various types, such as, for example and without limitation, web server, news server, mail server, message server, advertising server, file server, application server, exchange server, database server, proxy server, another server suitable for performing functions or processes described herein, or any combination thereof. In particular embodiments, each server may include hardware, software, or embedded logic components or a combination of two or more such components for carrying out the appropriate functionalities implemented or supported by the server. In particular embodiments, transportation management system960may include one or more data stores. The data stores may be used to store various types of information, such as ride information, ride requestor information, ride provider information, historical information, third-party information, or any other suitable type of information. In particular embodiments, the information stored in the data stores may be organized according to specific data structures. In particular embodiments, each data store may be a relational, columnar, correlation, or any other suitable type of database system. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases. Particular embodiments may provide interfaces that enable a user device930(which may belong to a ride requestor or provider), a transportation management system960, vehicle system940, or a third-party system970to process, transform, manage, retrieve, modify, add, or delete the information stored in the data store. In particular embodiments, transportation management system960may include an authorization server (or any other suitable component(s)) that allows users901to opt-in to or opt-out of having their information and actions logged, recorded, or sensed by transportation management system960or shared with other systems (e.g., third-party systems970). In particular embodiments, a user901may opt-in or opt-out by setting appropriate privacy settings. A privacy setting of a user may determine what information associated with the user may be logged, how information associated with the user may be logged, when information associated with the user may be logged, who may log information associated with the user, whom information associated with the user may be shared with, and for what purposes information associated with the user may be logged or shared. 
Authorization servers may be used to enforce one or more privacy settings of the users901of transportation management system960through blocking, data hashing, anonymization, or other suitable techniques as appropriate. In particular embodiments, third-party system970may be a network-addressable computing system that may provide HD maps or host GPS maps, customer reviews, music or content, weather information, or any other suitable type of information. Third-party system970may generate, store, receive, and send relevant data, such as, for example, map data, customer review data from a customer review website, weather data, or any other suitable type of data. Third-party system970may be accessed by the other computing entities of the network environment either directly or via network910. For example, user device930may access the third-party system970via network910, or via transportation management system960. In the latter case, if credentials are required to access the third-party system970, the user901may provide such information to the transportation management system960, which may serve as a proxy for accessing content from the third-party system970. In particular embodiments, user device930may be a mobile computing device such as a smartphone, tablet computer, or laptop computer. User device930may include one or more processors (e.g., CPU, GPU), memory, and storage. An operating system and applications may be installed on the user device930, such as, e.g., a transportation application associated with the transportation management system960, applications associated with third-party systems970, and applications associated with the operating system. User device930may include functionality for determining its location, direction, or orientation, based on integrated sensors such as GPS, compass, gyroscope, or accelerometer. User device930may also include wireless transceivers for wireless communication and may support wireless communication protocols such as Bluetooth, near-field communication (NFC), infrared (IR) communication, WI-FI, and 2G/3G/4G/LTE mobile communication standard. User device930may also include one or more cameras, scanners, touchscreens, microphones, speakers, and any other suitable input-output devices. In particular embodiments, the vehicle940may be equipped with an array of sensors944, a navigation system946, and a ride-service computing device948. In particular embodiments, a fleet of vehicles940may be managed by the transportation management system960. The fleet of vehicles940, in whole or in part, may be owned by the entity associated with the transportation management system960, or they may be owned by a third-party entity relative to the transportation management system960. In either case, the transportation management system960may control the operations of the vehicles940, including, e.g., dispatching select vehicles940to fulfill ride requests, instructing the vehicles940to perform select operations (e.g., head to a service center or charging/fueling station, pull over, stop immediately, self-diagnose, lock/unlock compartments, change music station, change temperature, and any other suitable operations), and instructing the vehicles940to enter select operation modes (e.g., operate normally, drive at a reduced speed, drive under the command of human operators, and any other suitable operational modes). In particular embodiments, the vehicles940may receive data from and transmit data to the transportation management system960and the third-party system970. 
Examples of received data may include, e.g., instructions, new software or software updates, maps, 3D models, trained or untrained machine-learning models, location information (e.g., location of the ride requestor, the vehicle940itself, other vehicles940, and target destinations such as service centers), navigation information, traffic information, weather information, entertainment content (e.g., music, video, and news), ride requestor information, ride information, and any other suitable information. Examples of data transmitted from the vehicle940may include, e.g., telemetry and sensor data, determinations/decisions based on such data, vehicle condition or state (e.g., battery/fuel level, tire and brake conditions, sensor condition, speed, odometer, etc.), location, navigation data, passenger inputs (e.g., through a user interface in the vehicle940, passengers may send/receive data to the transportation management system960and third-party system970), and any other suitable data. In particular embodiments, vehicles940may also communicate with each other, including those managed and not managed by the transportation management system960. For example, one vehicle940may share with another vehicle data regarding their respective locations, conditions, statuses, sensor readings, and any other suitable information. In particular embodiments, vehicle-to-vehicle communication may take place over direct short-range wireless connection (e.g., WI-FI, Bluetooth, NFC) or over a network (e.g., the Internet or via the transportation management system960or third-party system970), or both. In particular embodiments, a vehicle940may obtain and process sensor/telemetry data. Such data may be captured by any suitable sensors. For example, the vehicle940may have a Light Detection and Ranging (LiDAR) sensor array of multiple LiDAR transceivers that are configured to rotate 360°, emitting pulsed laser light and measuring the reflected light from objects surrounding vehicle940. In particular embodiments, LiDAR transmitting signals may be steered by use of a gated light valve, which may be a MEMs device that directs a light beam using the principle of light diffraction. Such a device may not use a gimbaled mirror to steer light beams in 360° around the vehicle. Rather, the gated light valve may direct the light beam into one of several optical fibers, which may be arranged such that the light beam may be directed to many discrete positions around the vehicle. Thus, data may be captured in 360° around the vehicle, but no rotating parts may be necessary. A LiDAR is an effective sensor for measuring distances to targets, and as such may be used to generate a three-dimensional (3D) model of the external environment of the vehicle940. As an example and not by way of limitation, the 3D model may represent the external environment including objects such as other cars, curbs, debris, and pedestrians up to a maximum range of the sensor arrangement (e.g., 50, 100, or 200 meters). As another example, the vehicle940may have optical cameras pointing in different directions. The cameras may be used for, e.g., recognizing roads, lane markings, street signs, traffic lights, police, other vehicles, and any other visible objects of interest. To enable the vehicle940to “see” at night, infrared cameras may be installed. In particular embodiments, the vehicle may be equipped with stereo vision for, e.g., spotting hazards such as pedestrians or tree branches on the road.
As another example, the vehicle940may have radars for, e.g., detecting other vehicles and hazards afar. Furthermore, the vehicle940may have ultrasound equipment for, e.g., parking and obstacle detection. In addition to sensors enabling the vehicle940to detect, measure, and understand the external world around it, the vehicle940may further be equipped with sensors for detecting and self-diagnosing the vehicle's own state and condition. For example, the vehicle940may have wheel sensors for, e.g., measuring velocity; global positioning system (GPS) for, e.g., determining the vehicle's current geolocation; and inertial measurement units, accelerometers, gyroscopes, and odometer systems for movement or motion detection. While the description of these sensors provides particular examples of utility, one of ordinary skill in the art would appreciate that the utilities of the sensors are not limited to those examples. Further, while an example of a utility may be described with respect to a particular type of sensor, it should be appreciated that the utility may be achieved using any combination of sensors. For example, the vehicle940may build a 3D model of its surrounding based on data from its LiDAR, radar, sonar, and cameras, along with a pre-generated map obtained from the transportation management system960or the third-party system970. Although sensors944appear in a particular location on the vehicle940inFIG.9, sensors944may be located in any suitable location in or on the vehicle940. Example locations for sensors include the front and rear bumpers, the doors, the front windshield, on the side panel, or any other suitable location. In particular embodiments, the vehicle940may be equipped with a processing unit (e.g., one or more CPUs and GPUs), memory, and storage. The vehicle940may thus be equipped to perform a variety of computational and processing tasks, including processing the sensor data, extracting useful information, and operating accordingly. For example, based on images captured by its cameras and a machine-vision model, the vehicle940may identify particular types of objects captured by the images, such as pedestrians, other vehicles, lanes, curbs, and any other objects of interest. In particular embodiments, the vehicle940may have a navigation system946responsible for safely navigating the vehicle940. In particular embodiments, the navigation system946may take as input any type of sensor data from, e.g., a Global Positioning System (GPS) module, inertial measurement unit (IMU), LiDAR sensors, optical cameras, radio frequency (RF) transceivers, or any other suitable telemetry or sensory mechanisms. The navigation system946may also utilize, e.g., map data, traffic data, accident reports, weather reports, instructions, target destinations, and any other suitable information to determine navigation routes and particular driving operations (e.g., slowing down, speeding up, stopping, swerving, etc.). In particular embodiments, the navigation system946may use its determinations to control the vehicle940to operate in prescribed manners and to guide the vehicle940to its destinations without colliding into other objects. Although the physical embodiment of the navigation system946(e.g., the processing unit) appears in a particular location on the vehicle940inFIG.9, navigation system946may be located in any suitable location in or on the vehicle940. 
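As a non-limiting sketch of how the self-state sensors listed above (for example, wheel sensors for velocity and gyroscopes for motion detection) might be combined for movement estimation, the following dead-reckoning step is illustrative only; a real estimator would typically also fuse GPS, LiDAR, radar, and camera data as described above.

```python
import math

def dead_reckon(x: float, y: float, heading_rad: float,
                wheel_speed_mps: float, yaw_rate_rps: float, dt: float):
    """One motion-update step from wheel-speed and gyroscope readings.

    A minimal sketch only; the vehicle's actual estimator is not described in the
    disclosure, and the sensor names here are assumptions.
    """
    heading_rad += yaw_rate_rps * dt
    x += wheel_speed_mps * math.cos(heading_rad) * dt
    y += wheel_speed_mps * math.sin(heading_rad) * dt
    return x, y, heading_rad

# Example: advance the pose estimate by 0.1 s.
pose = dead_reckon(x=0.0, y=0.0, heading_rad=0.0,
                   wheel_speed_mps=8.0, yaw_rate_rps=0.05, dt=0.1)
```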
Example locations for navigation system946include inside the cabin or passenger compartment of the vehicle940, near the engine/battery, near the front seats, rear seats, or in any other suitable location. In particular embodiments, the vehicle940may be equipped with a ride-service computing device948, which may be a tablet or any other suitable device installed by transportation management system960to allow the user to interact with the vehicle940, transportation management system960, other users901, or third-party systems970. In particular embodiments, installation of ride-service computing device948may be accomplished by placing the ride-service computing device948inside the vehicle940, and configuring it to communicate with the vehicle940via a wired or wireless connection (e.g., via Bluetooth). AlthoughFIG.9illustrates a single ride-service computing device948at a particular location in the vehicle940, the vehicle940may include several ride-service computing devices948in several different locations within the vehicle. As an example and not by way of limitation, the vehicle940may include four ride-service computing devices948located in the following places: one in front of the front-left passenger seat (e.g., driver's seat in traditional U.S. automobiles), one in front of the front-right passenger seat, one in front of each of the rear-left and rear-right passenger seats. In particular embodiments, ride-service computing device948may be detachable from any component of the vehicle940. This may allow users to handle ride-service computing device948in a manner consistent with other tablet computing devices. As an example and not by way of limitation, a user may move ride-service computing device948to any location in the cabin or passenger compartment of the vehicle940, may hold ride-service computing device948, or handle ride-service computing device948in any other suitable manner. Although this disclosure describes providing a particular computing device in a particular manner, this disclosure contemplates providing any suitable computing device in any suitable manner. FIG.10illustrates an example computer system1000. In particular embodiments, one or more computer systems1000perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems1000provide the functionalities described or illustrated herein. In particular embodiments, software running on one or more computer systems1000performs one or more steps of one or more methods described or illustrated herein or provides the functionalities described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems1000. Herein, a reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, a reference to a computer system may encompass one or more computer systems, where appropriate. This disclosure contemplates any suitable number of computer systems1000. This disclosure contemplates computer system1000taking any suitable physical form. 
As an example and not by way of limitation, computer system1000may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, computer system1000may include one or more computer systems1000; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems1000may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems1000may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems1000may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. In particular embodiments, computer system1000includes a processor1002, memory1004, storage1006, an input/output (I/O) interface1008, a communication interface1010, and a bus1012. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. In particular embodiments, processor1002includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor1002may retrieve (or fetch) the instructions from an internal register, an internal cache, memory1004, or storage1006; decode and execute them; and then write one or more results to an internal register, an internal cache, memory1004, or storage1006. In particular embodiments, processor1002may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor1002including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor1002may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory1004or storage1006, and the instruction caches may speed up retrieval of those instructions by processor1002. Data in the data caches may be copies of data in memory1004or storage1006that are to be operated on by computer instructions; the results of previous instructions executed by processor1002that are accessible to subsequent instructions or for writing to memory1004or storage1006; or any other suitable data. The data caches may speed up read or write operations by processor1002. The TLBs may speed up virtual-address translation for processor1002. In particular embodiments, processor1002may include one or more internal registers for data, instructions, or addresses.
This disclosure contemplates processor1002including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor1002may include one or more arithmetic logic units (ALUs), be a multi-core processor, or include one or more processors1002. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. In particular embodiments, memory1004includes main memory for storing instructions for processor1002to execute or data for processor1002to operate on. As an example and not by way of limitation, computer system1000may load instructions from storage1006or another source (such as another computer system1000) to memory1004. Processor1002may then load the instructions from memory1004to an internal register or internal cache. To execute the instructions, processor1002may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor1002may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor1002may then write one or more of those results to memory1004. In particular embodiments, processor1002executes only instructions in one or more internal registers or internal caches or in memory1004(as opposed to storage1006or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory1004(as opposed to storage1006or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor1002to memory1004. Bus1012may include one or more memory buses, as described in further detail below. In particular embodiments, one or more memory management units (MMUs) reside between processor1002and memory1004and facilitate accesses to memory1004requested by processor1002. In particular embodiments, memory1004includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory1004may include one or more memories1004, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. In particular embodiments, storage1006includes mass storage for data or instructions. As an example and not by way of limitation, storage1006may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage1006may include removable or non-removable (or fixed) media, where appropriate. Storage1006may be internal or external to computer system1000, where appropriate. In particular embodiments, storage1006is non-volatile, solid-state memory. In particular embodiments, storage1006includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage1006taking any suitable physical form. Storage1006may include one or more storage control units facilitating communication between processor1002and storage1006, where appropriate. 
Where appropriate, storage1006may include one or more storages1006. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. In particular embodiments, I/O interface1008includes hardware or software, or both, providing one or more interfaces for communication between computer system1000and one or more I/O devices. Computer system1000may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system1000. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces1008for them. Where appropriate, I/O interface1008may include one or more device or software drivers enabling processor1002to drive one or more of these I/O devices. I/O interface1008may include one or more I/O interfaces1008, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface. In particular embodiments, communication interface1010includes hardware or software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system1000and one or more other computer systems1000or one or more networks. As an example and not by way of limitation, communication interface1010may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or any other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface1010for it. As an example and not by way of limitation, computer system1000may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system1000may communicate with a wireless PAN (WPAN) (such as, for example, a Bluetooth WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or any other suitable wireless network or a combination of two or more of these. Computer system1000may include any suitable communication interface1010for any of these networks, where appropriate. Communication interface1010may include one or more communication interfaces1010, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface. In particular embodiments, bus1012includes hardware or software, or both coupling components of computer system1000to each other. 
As an example and not by way of limitation, bus1012may include an Accelerated Graphics Port (AGP) or any other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus1012may include one or more buses1012, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other types of integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A or B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context. Methods described herein may vary in accordance with the present disclosure. Various embodiments of this disclosure may repeat one or more steps of the methods described herein, where appropriate. Although this disclosure describes and illustrates particular steps of certain methods as occurring in a particular order, this disclosure contemplates any suitable steps of the methods occurring in any suitable order or in any combination which may include all, some, or none of the steps of the methods. Furthermore, although this disclosure may describe and illustrate particular components, devices, or systems carrying out particular steps of a method, this disclosure contemplates any suitable combination of any suitable components, devices, or systems carrying out any suitable steps of the method. The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein.
Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, modules, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, modules, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.
DETAILED DESCRIPTION The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope is defined by the appended claims. Autonomous Vehicle Overview As used throughout this disclosure, the term “autonomous vehicle” refers to a vehicle capable of implementing at least one navigational change without driver input. A “navigational change” refers to a change in one or more of steering, braking, or acceleration of the vehicle. To be autonomous, a vehicle need not be fully automatic (e.g., fully operational without a driver or without driver input). Rather, an autonomous vehicle includes those that can operate under driver control during certain time periods and without driver control during other time periods. Autonomous vehicles may also include vehicles that control only some aspects of vehicle navigation, such as steering (e.g., to maintain a vehicle course between vehicle lane constraints), but may leave other aspects to the driver (e.g., braking). In some cases, autonomous vehicles may handle some or all aspects of braking, speed control, and/or steering of the vehicle. As human drivers typically rely on visual cues and observations to control a vehicle, transportation infrastructures are built accordingly, with lane markings, traffic signs, and traffic lights all designed to provide visual information to drivers. In view of these design characteristics of transportation infrastructures, an autonomous vehicle may include a camera and a processing unit that analyzes visual information captured from the environment of the vehicle. The visual information may include, for example, components of the transportation infrastructure (e.g., lane markings, traffic signs, traffic lights, etc.) that are observable by drivers and other obstacles (e.g., other vehicles, pedestrians, debris, etc.). Additionally, an autonomous vehicle may also use stored information, such as information that provides a model of the vehicle's environment when navigating. For example, the vehicle may use GPS data, sensor data (e.g., from an accelerometer, a speed sensor, a suspension sensor, etc.), and/or other map data to provide information related to its environment while the vehicle is traveling, and the vehicle (as well as other vehicles) may use the information to localize itself on the model. In some embodiments in this disclosure, an autonomous vehicle may use information obtained while navigating (e.g., from a camera, GPS device, an accelerometer, a speed sensor, a suspension sensor, etc.). In other embodiments, an autonomous vehicle may use information obtained from past navigations by the vehicle (or by other vehicles) while navigating. In yet other embodiments, an autonomous vehicle may use a combination of information obtained while navigating and information obtained from past navigations.
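As a non-limiting sketch of localizing the vehicle on a stored model using GPS data, the following code snaps a raw position fix to the nearest waypoint of a hypothetical road model; the matching method and coordinates are assumptions, and a real system would also use camera and other sensor data as discussed above.

```python
import math

def localize_on_map(gps_fix, map_waypoints):
    """Snap a raw GPS fix to the closest waypoint of a stored road model.

    A minimal sketch of the idea of localizing on a pre-stored model; the
    disclosure does not specify the matching method or coordinate frame.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(map_waypoints, key=lambda wp: dist(gps_fix, wp))

# Example with hypothetical local x/y coordinates (meters).
nearest = localize_on_map((12.3, 4.1), [(0, 0), (10, 5), (20, 10)])
```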
The following sections provide an overview of a system consistent with the disclosed embodiments, followed by an overview of a forward-facing imaging system and methods consistent with the system. The sections that follow disclose systems and methods for constructing, using, and updating a sparse map for autonomous vehicle navigation. System Overview FIG.1is a block diagram representation of a system100consistent with the exemplary disclosed embodiments. System100may include various components depending on the requirements of a particular implementation. In some embodiments, system100may include a processing unit110, an image acquisition unit120, a position sensor130, one or more memory units140,150, a map database160, a user interface170, and a wireless transceiver172. Processing unit110may include one or more processing devices. In some embodiments, processing unit110may include an applications processor180, an image processor190, or any other suitable processing device. Similarly, image acquisition unit120may include any number of image acquisition devices and components depending on the requirements of a particular application. In some embodiments, image acquisition unit120may include one or more image capture devices (e.g., cameras), such as image capture device122, image capture device124, and image capture device126. System100may also include a data interface128communicatively connecting processing unit110to image acquisition unit120. For example, data interface128may include any wired and/or wireless link or links for transmitting image data acquired by image acquisition unit120to processing unit110. Wireless transceiver172may include one or more devices configured to exchange transmissions over an air interface to one or more networks (e.g., cellular, the Internet, etc.) by use of a radio frequency, infrared frequency, magnetic field, or an electric field. Wireless transceiver172may use any known standard to transmit and/or receive data (e.g., Wi-Fi, Bluetooth®, Bluetooth Smart, 802.15.4, ZigBee, etc.). Such transmissions can include communications from the host vehicle to one or more remotely located servers. Such transmissions may also include communications (one-way or two-way) between the host vehicle and one or more target vehicles in an environment of the host vehicle (e.g., to facilitate coordination of navigation of the host vehicle in view of or together with target vehicles in the environment of the host vehicle), or even a broadcast transmission to unspecified recipients in a vicinity of the transmitting vehicle. Both applications processor180and image processor190may include various types of processing devices. For example, either or both of applications processor180and image processor190may include a microprocessor, preprocessors (such as an image preprocessor), a graphics processing unit (GPU), a central processing unit (CPU), support circuits, digital signal processors, integrated circuits, memory, or any other types of devices suitable for running applications and for image processing and analysis. In some embodiments, applications processor180and/or image processor190may include any type of single or multi-core processor, mobile device microcontroller, central processing unit, etc. Various processing devices may be used, including, for example, processors available from manufacturers such as Intel®, AMD®, etc., or GPUs available from manufacturers such as NVIDIA®, ATI®, etc., and may include various architectures (e.g., x86 processor, ARM®, etc.).
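As a non-limiting sketch of a data interface carrying image data from an acquisition unit to a processing unit, the following producer/consumer arrangement is illustrative only; the queue, threading layout, and frame format are assumptions rather than details of data interface128.

```python
import queue
import threading
import time

# A minimal sketch of a link that moves frames from an acquisition unit to a
# processing unit; the real interface may be any wired or wireless link.
frame_link = queue.Queue(maxsize=8)

def acquisition_unit():
    for frame_id in range(3):
        frame_link.put({"device": "122", "frame_id": frame_id, "pixels": b"..."})
        time.sleep(0.033)  # roughly 33 ms per frame at about 30 fps

def processing_unit():
    for _ in range(3):
        frame = frame_link.get()
        # Placeholder for the image analysis performed by the applications/image processor.
        print("processing frame", frame["frame_id"], "from device", frame["device"])

producer = threading.Thread(target=acquisition_unit)
producer.start()
processing_unit()
producer.join()
```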
In some embodiments, applications processor180and/or image processor190may include any of the EyeQ series of processor chips available from Mobileye®. These processor designs each include multiple processing units with local memory and instruction sets. Such processors may include video inputs for receiving image data from multiple image sensors and may also include video out capabilities. In one example, the EyeQ2® uses 90 nm-micron technology operating at 332 MHz. The EyeQ2® architecture consists of two floating point, hyper-thread 32-bit RISC CPUs (MIPS32® 34K® cores), five Vision Computing Engines (VCE), three Vector Microcode Processors (VMP®), Denali 64-bit Mobile DDR Controller, 128-bit internal Sonics Interconnect, dual 16-bit Video input and 18-bit Video output controllers, 16 channels DMA and several peripherals. The MIPS34K CPU manages the five VCEs, three VMP™ and the DMA, the second MIPS34K CPU and the multi-channel DMA as well as the other peripherals. The five VCEs, three VMP® and the MIPS34K CPU can perform intensive vision computations required by multi-function bundle applications. In another example, the EyeQ3®, which is a third generation processor and is six times more powerful than the EyeQ2®, may be used in the disclosed embodiments. In other examples, the EyeQ4® and/or the EyeQ5® may be used in the disclosed embodiments. Of course, any newer or future EyeQ processing devices may also be used together with the disclosed embodiments. Any of the processing devices disclosed herein may be configured to perform certain functions. Configuring a processing device, such as any of the described EyeQ processors or other controller or microprocessor, to perform certain functions may include programming of computer executable instructions and making those instructions available to the processing device for execution during operation of the processing device. In some embodiments, configuring a processing device may include programming the processing device directly with architectural instructions. For example, processing devices such as field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and the like may be configured using, for example, one or more hardware description languages (HDLs). In other embodiments, configuring a processing device may include storing executable instructions on a memory that is accessible to the processing device during operation. For example, the processing device may access the memory to obtain and execute the stored instructions during operation. In either case, the processing device configured to perform the sensing, image analysis, and/or navigational functions disclosed herein represents a specialized hardware-based system in control of multiple hardware-based components of a host vehicle. WhileFIG.1depicts two separate processing devices included in processing unit110, more or fewer processing devices may be used. For example, in some embodiments, a single processing device may be used to accomplish the tasks of applications processor180and image processor190. In other embodiments, these tasks may be performed by more than two processing devices. Further, in some embodiments, system100may include one or more of processing unit110without including other components, such as image acquisition unit120. Processing unit110may comprise various types of devices.
For example, processing unit110may include various devices, such as a controller, an image preprocessor, a central processing unit (CPU), a graphics processing unit (GPU), support circuits, digital signal processors, integrated circuits, memory, or any other types of devices for image processing and analysis. The image preprocessor may include a video processor for capturing, digitizing and processing the imagery from the image sensors. The CPU may comprise any number of microcontrollers or microprocessors. The GPU may also comprise any number of microcontrollers or microprocessors. The support circuits may be any number of circuits generally well known in the art, including cache, power supply, clock and input-output circuits. The memory may store software that, when executed by the processor, controls the operation of the system. The memory may include databases and image processing software. The memory may comprise any number of random access memories, read only memories, flash memories, disk drives, optical storage, tape storage, removable storage and other types of storage. In one instance, the memory may be separate from the processing unit110. In another instance, the memory may be integrated into the processing unit110. Each memory140,150may include software instructions that when executed by a processor (e.g., applications processor180and/or image processor190), may control operation of various aspects of system100. These memory units may include various databases and image processing software, as well as a trained system, such as a neural network, or a deep neural network, for example. The memory units may include random access memory (RAM), read only memory (ROM), flash memory, disk drives, optical storage, tape storage, removable storage and/or any other types of storage. In some embodiments, memory units140,150may be separate from the applications processor180and/or image processor190. In other embodiments, these memory units may be integrated into applications processor180and/or image processor190. Position sensor130may include any type of device suitable for determining a location associated with at least one component of system100. In some embodiments, position sensor130may include a GPS receiver. Such receivers can determine a user position and velocity by processing signals broadcasted by global positioning system satellites. Position information from position sensor130may be made available to applications processor180and/or image processor190. In some embodiments, system100may include components such as a speed sensor (e.g., a tachometer, a speedometer) for measuring a speed of vehicle200and/or an accelerometer (either single axis or multiaxis) for measuring acceleration of vehicle200. User interface170may include any device suitable for providing information to or for receiving inputs from one or more users of system100. In some embodiments, user interface170may include user input devices, including, for example, a touchscreen, microphone, keyboard, pointer devices, track wheels, cameras, knobs, buttons, etc. With such input devices, a user may be able to provide information inputs or commands to system100by typing instructions or information, providing voice commands, selecting menu options on a screen using buttons, pointers, or eye-tracking capabilities, or through any other suitable techniques for communicating information to system100. 
User interface170may be equipped with one or more processing devices configured to provide and receive information to or from a user and process that information for use by, for example, applications processor180. In some embodiments, such processing devices may execute instructions for recognizing and tracking eye movements, receiving and interpreting voice commands, recognizing and interpreting touches and/or gestures made on a touchscreen, responding to keyboard entries or menu selections, etc. In some embodiments, user interface170may include a display, speaker, tactile device, and/or any other devices for providing output information to a user. Map database160may include any type of database for storing map data useful to system100. In some embodiments, map database160may include data relating to the position, in a reference coordinate system, of various items, including roads, water features, geographic features, businesses, points of interest, restaurants, gas stations, etc. Map database160may store not only the locations of such items, but also descriptors relating to those items, including, for example, names associated with any of the stored features. In some embodiments, map database160may be physically located with other components of system100. Alternatively or additionally, map database160or a portion thereof may be located remotely with respect to other components of system100(e.g., processing unit110). In such embodiments, information from map database160may be downloaded over a wired or wireless data connection to a network (e.g., over a cellular network and/or the Internet, etc.). In some cases, map database160may store a sparse data model including polynomial representations of certain road features (e.g., lane markings) or target trajectories for the host vehicle. Systems and methods of generating such a map are discussed below with references toFIGS.8-19. Image capture devices122,124, and126may each include any type of device suitable for capturing at least one image from an environment. Moreover, any number of image capture devices may be used to acquire images for input to the image processor. Some embodiments may include only a single image capture device, while other embodiments may include two, three, or even four or more image capture devices. Image capture devices122,124, and126will be further described with reference toFIGS.2B-2E, below. System100, or various components thereof, may be incorporated into various different platforms. In some embodiments, system100may be included on a vehicle200, as shown inFIG.2A. For example, vehicle200may be equipped with a processing unit110and any of the other components of system100, as described above relative toFIG.1. While in some embodiments vehicle200may be equipped with only a single image capture device (e.g., camera), in other embodiments, such as those discussed in connection withFIGS.2B-2E, multiple image capture devices may be used. For example, either of image capture devices122and124of vehicle200, as shown inFIG.2A, may be part of an ADAS (Advanced Driver Assistance Systems) imaging set. The image capture devices included on vehicle200as part of the image acquisition unit120may be positioned at any suitable location. In some embodiments, as shown inFIGS.2A-2E and3A-3C, image capture device122may be located in the vicinity of the rearview mirror. This position may provide a line of sight similar to that of the driver of vehicle200, which may aid in determining what is and is not visible to the driver. 
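As a non-limiting sketch of the polynomial representations of road features or target trajectories that may be stored in the sparse data model mentioned above, the following code evaluates a hypothetical cubic lane-marking polynomial; the degree, coordinate frame, and coefficient values are illustrative assumptions.

```python
def evaluate_lane_polynomial(coeffs, x_positions):
    """Evaluate a polynomial road-feature representation at given positions.

    A minimal sketch of storing lane markings or target trajectories as polynomials
    in a sparse map; the values below are not taken from the disclosure.
    """
    def poly(x):
        y = 0.0
        for c in coeffs:          # coefficients ordered from highest degree to constant
            y = y * x + c
        return y
    return [(x, poly(x)) for x in x_positions]

# Example: a gently curving lane marking sampled every 10 m ahead of the vehicle.
lane_points = evaluate_lane_polynomial(
    coeffs=[2.0e-5, -1.5e-3, 0.02, 1.8],   # hypothetical cubic, units in meters
    x_positions=range(0, 60, 10))
```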
Image capture device122may be positioned at any location near the rearview mirror, but placing image capture device122on the driver side of the mirror may further aid in obtaining images representative of the driver's field of view and/or line of sight. Other locations for the image capture devices of image acquisition unit120may also be used. For example, image capture device124may be located on or in a bumper of vehicle200. Such a location may be especially suitable for image capture devices having a wide field of view. The line of sight of bumper-located image capture devices can be different from that of the driver and, therefore, the bumper image capture device and driver may not always see the same objects. The image capture devices (e.g., image capture devices122,124, and126) may also be located in other locations. For example, the image capture devices may be located on or in one or both of the side mirrors of vehicle200, on the roof of vehicle200, on the hood of vehicle200, on the trunk of vehicle200, on the sides of vehicle200, mounted on, positioned behind, or positioned in front of any of the windows of vehicle200, and mounted in or near light fixtures on the front and/or back of vehicle200, etc. In addition to image capture devices, vehicle200may include various other components of system100. For example, processing unit110may be included on vehicle200either integrated with or separate from an engine control unit (ECU) of the vehicle. Vehicle200may also be equipped with a position sensor130, such as a GPS receiver, and may also include a map database160and memory units140and150. As discussed earlier, wireless transceiver172may transmit and/or receive data over one or more networks (e.g., cellular networks, the Internet, etc.). For example, wireless transceiver172may upload data collected by system100to one or more servers, and download data from the one or more servers. Via wireless transceiver172, system100may receive, for example, periodic or on demand updates to data stored in map database160, memory140, and/or memory150. Similarly, wireless transceiver172may upload any data (e.g., images captured by image acquisition unit120, data received by position sensor130or other sensors, vehicle control systems, etc.) from system100and/or any data processed by processing unit110to the one or more servers. System100may upload data to a server (e.g., to the cloud) based on a privacy level setting. For example, system100may implement privacy level settings to regulate or limit the types of data (including metadata) sent to the server that may uniquely identify a vehicle and/or driver/owner of a vehicle. Such settings may be set by a user via, for example, wireless transceiver172, be initialized by factory default settings, or by data received by wireless transceiver172. In some embodiments, system100may upload data according to a “high” privacy level, and under such a setting, system100may transmit data (e.g., location information related to a route, captured images, etc.) without any details about the specific vehicle and/or driver/owner. For example, when uploading data according to a “high” privacy setting, system100may not include a vehicle identification number (VIN) or a name of a driver or owner of the vehicle, and may instead transmit data, such as captured images and/or limited location information related to a route. Other privacy levels are contemplated.
For example, system100may transmit data to a server according to an “intermediate” privacy level and include additional information not included under a “high” privacy level, such as a make and/or model of a vehicle and/or a vehicle type (e.g., a passenger vehicle, sport utility vehicle, truck, etc.). In some embodiments, system100may upload data according to a “low” privacy level. Under a “low” privacy level setting, system100may upload data and include information sufficient to uniquely identify a specific vehicle, owner/driver, and/or a portion or entirety of a route traveled by the vehicle. Such “low” privacy level data may include one or more of, for example, a VIN, a driver/owner name, an origination point of a vehicle prior to departure, an intended destination of the vehicle, a make and/or model of the vehicle, a type of the vehicle, etc. FIG.2Ais a diagrammatic side view representation of an exemplary vehicle imaging system consistent with the disclosed embodiments.FIG.2Bis a diagrammatic top view illustration of the embodiment shown inFIG.2A. As illustrated inFIG.2B, the disclosed embodiments may include a vehicle200including in its body a system100with a first image capture device122positioned in the vicinity of the rearview mirror and/or near the driver of vehicle200, a second image capture device124positioned on or in a bumper region (e.g., one of bumper regions210) of vehicle200, and a processing unit110. As illustrated inFIG.2C, image capture devices122and124may both be positioned in the vicinity of the rearview mirror and/or near the driver of vehicle200. Additionally, while two image capture devices122and124are shown inFIGS.2B and2C, it should be understood that other embodiments may include more than two image capture devices. For example, in the embodiments shown inFIGS.2D and2E, first, second, and third image capture devices122,124, and126are included in the system100of vehicle200. As illustrated inFIG.2D, image capture device122may be positioned in the vicinity of the rearview mirror and/or near the driver of vehicle200, and image capture devices124and126may be positioned on or in a bumper region (e.g., one of bumper regions210) of vehicle200. And as shown inFIG.2E, image capture devices122,124, and126may be positioned in the vicinity of the rearview mirror and/or near the driver seat of vehicle200. The disclosed embodiments are not limited to any particular number and configuration of the image capture devices, and the image capture devices may be positioned in any appropriate location within and/or on vehicle200. It is to be understood that the disclosed embodiments are not limited to vehicles and could be applied in other contexts. It is also to be understood that disclosed embodiments are not limited to a particular type of vehicle200and may be applicable to all types of vehicles including automobiles, trucks, trailers, and other types of vehicles. The first image capture device122may include any suitable type of image capture device. Image capture device122may include an optical axis. In one instance, the image capture device122may include an Aptina M9V024 WVGA sensor with a global shutter. In other embodiments, image capture device122may provide a resolution of 1280×960 pixels and may include a rolling shutter. Image capture device122may include various optical elements. In some embodiments, one or more lenses may be included, for example, to provide a desired focal length and field of view for the image capture device.
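As a non-limiting sketch of the “high,” “intermediate,” and “low” privacy levels described above, the following code filters an upload so that only fields permitted at a given level are sent; the field names and the exact split between levels are illustrative assumptions.

```python
def filter_upload_by_privacy(record: dict, privacy_level: str) -> dict:
    """Drop identifying fields from an upload according to a privacy-level setting.

    A minimal sketch of the high/intermediate/low behavior described above; the
    field names and level boundaries are assumptions, not part of the disclosure.
    """
    allowed = {
        "high": {"route_points", "captured_images"},
        "intermediate": {"route_points", "captured_images", "vehicle_make",
                         "vehicle_model", "vehicle_type"},
        "low": {"route_points", "captured_images", "vehicle_make", "vehicle_model",
                "vehicle_type", "vin", "driver_name", "origin", "destination"},
    }[privacy_level]
    return {key: value for key, value in record.items() if key in allowed}

upload = filter_upload_by_privacy(
    {"vin": "1HGCM82633A004352", "driver_name": "J. Doe", "vehicle_make": "ExampleCar",
     "vehicle_model": "EV-1", "vehicle_type": "passenger", "origin": "depot",
     "destination": "airport", "route_points": [(0, 0), (1, 2)], "captured_images": []},
    privacy_level="high")
```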
In some embodiments, image capture device122may be associated with a 6 mm lens or a 12 mm lens. In some embodiments, image capture device122may be configured to capture images having a desired field-of-view (FOV)202, as illustrated inFIG.2D. For example, image capture device122may be configured to have a regular FOV, such as within a range of 40 degrees to 56 degrees, including a 46 degree FOV, 50 degree FOV, 52 degree FOV, or greater. Alternatively, image capture device122may be configured to have a narrow FOV in the range of 23 to 40 degrees, such as a 28 degree FOV or 36 degree FOV. In addition, image capture device122may be configured to have a wide FOV in the range of 100 to 180 degrees. In some embodiments, image capture device122may include a wide angle bumper camera or one with up to a 180 degree FOV. In some embodiments, image capture device122may be a 7.2 M pixel image capture device with an aspect ratio of about 2:1 (e.g., HxV=3800×1900 pixels) with about 100 degree horizontal FOV. Such an image capture device may be used in place of a three image capture device configuration. Due to significant lens distortion, the vertical FOV of such an image capture device may be significantly less than 50 degrees in implementations in which the image capture device uses a radially symmetric lens. For example, such a lens may not be radially symmetric which would allow for a vertical FOV greater than 50 degrees with 100 degree horizontal FOV. The first image capture device122may acquire a plurality of first images relative to a scene associated with the vehicle200. Each of the plurality of first images may be acquired as a series of image scan lines, which may be captured using a rolling shutter. Each scan line may include a plurality of pixels. The first image capture device122may have a scan rate associated with acquisition of each of the first series of image scan lines. The scan rate may refer to a rate at which an image sensor can acquire image data associated with each pixel included in a particular scan line. Image capture devices122,124, and126may contain any suitable type and number of image sensors, including CCD sensors or CMOS sensors, for example. In one embodiment, a CMOS image sensor may be employed along with a rolling shutter, such that each pixel in a row is read one at a time, and scanning of the rows proceeds on a row-by-row basis until an entire image frame has been captured. In some embodiments, the rows may be captured sequentially from top to bottom relative to the frame. In some embodiments, one or more of the image capture devices (e.g., image capture devices122,124, and126) disclosed herein may constitute a high resolution imager and may have a resolution greater than 5 M pixel, 7 M pixel, 10 M pixel, or greater. The use of a rolling shutter may result in pixels in different rows being exposed and captured at different times, which may cause skew and other image artifacts in the captured image frame. On the other hand, when the image capture device122is configured to operate with a global or synchronous shutter, all of the pixels may be exposed for the same amount of time and during a common exposure period. As a result, the image data in a frame collected from a system employing a global shutter represents a snapshot of the entire FOV (such as FOV202) at a particular time. In contrast, in a rolling shutter application, each row in a frame is exposed and data is capture at different times. 
Thus, moving objects may appear distorted in an image capture device having a rolling shutter. This phenomenon will be described in greater detail below. The second image capture device124and the third image capturing device126may be any type of image capture device. Like the first image capture device122, each of image capture devices124and126may include an optical axis. In one embodiment, each of image capture devices124and126may include an Aptina M9V024 WVGA sensor with a global shutter. Alternatively, each of image capture devices124and126may include a rolling shutter. Like image capture device122, image capture devices124and126may be configured to include various lenses and optical elements. In some embodiments, lenses associated with image capture devices124and126may provide FOVs (such as FOVs204and206) that are the same as, or narrower than, a FOV (such as FOV202) associated with image capture device122. For example, image capture devices124and126may have FOVs of 40 degrees, 30 degrees, 26 degrees, 23 degrees, 20 degrees, or less. Image capture devices124and126may acquire a plurality of second and third images relative to a scene associated with the vehicle200. Each of the plurality of second and third images may be acquired as a second and third series of image scan lines, which may be captured using a rolling shutter. Each scan line or row may have a plurality of pixels. Image capture devices124and126may have second and third scan rates associated with acquisition of each of image scan lines included in the second and third series. Each image capture device122,124, and126may be positioned at any suitable position and orientation relative to vehicle200. The relative positioning of the image capture devices122,124, and126may be selected to aid in fusing together the information acquired from the image capture devices. For example, in some embodiments, a FOV (such as FOV204) associated with image capture device124may overlap partially or fully with a FOV (such as FOV202) associated with image capture device122and a FOV (such as FOV206) associated with image capture device126. Image capture devices122,124, and126may be located on vehicle200at any suitable relative heights. In one instance, there may be a height difference between the image capture devices122,124, and126, which may provide sufficient parallax information to enable stereo analysis. For example, as shown inFIG.2A, the two image capture devices122and124are at different heights. There may also be a lateral displacement difference between image capture devices122,124, and126, giving additional parallax information for stereo analysis by processing unit110, for example. The difference in the lateral displacement may be denoted by dX, as shown inFIGS.2C and2D. In some embodiments, fore or aft displacement (e.g., range displacement) may exist between image capture devices122,124, and126. For example, image capture device122may be located 0.5 to 2 meters or more behind image capture device124and/or image capture device126. This type of displacement may enable one of the image capture devices to cover potential blind spots of the other image capture device(s). Image capture devices122may have any suitable resolution capability (e.g., number of pixels associated with the image sensor), and the resolution of the image sensor(s) associated with the image capture device122may be higher, lower, or the same as the resolution of the image sensor(s) associated with image capture devices124and126. 
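As a non-limiting sketch of how the lateral displacement dX between image capture devices provides parallax information for stereo analysis, the following code applies the standard pinhole stereo relation Z = f * B / d; the formula and the numbers are an illustration and are not recited in this disclosure.

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Estimate distance to a point from the disparity between two cameras.

    The lateral displacement between the devices serves as the stereo baseline;
    the values used below are hypothetical.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth estimate")
    return focal_length_px * baseline_m / disparity_px

# Example: 1400 px focal length, 0.3 m lateral displacement, 21 px disparity -> 20 m.
distance_m = depth_from_disparity(1400.0, 0.3, 21.0)
```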
In some embodiments, the image sensor(s) associated with image capture device122and/or image capture devices124and126may have a resolution of 640×480, 1024×768, 1280×960, or any other suitable resolution. The frame rate (e.g., the rate at which an image capture device acquires a set of pixel data of one image frame before moving on to capture pixel data associated with the next image frame) may be controllable. The frame rate associated with image capture device122may be higher, lower, or the same as the frame rate associated with image capture devices124and126. The frame rate associated with image capture devices122,124, and126may depend on a variety of factors that may affect the timing of the frame rate. For example, one or more of image capture devices122,124, and126may include a selectable pixel delay period imposed before or after acquisition of image data associated with one or more pixels of an image sensor in image capture device122,124, and/or126. Generally, image data corresponding to each pixel may be acquired according to a clock rate for the device (e.g., one pixel per clock cycle). Additionally, in embodiments including a rolling shutter, one or more of image capture devices122,124, and126may include a selectable horizontal blanking period imposed before or after acquisition of image data associated with a row of pixels of an image sensor in image capture device122,124, and/or126. Further, one or more of image capture devices122,124, and/or126may include a selectable vertical blanking period imposed before or after acquisition of image data associated with an image frame of image capture device122,124, and126. These timing controls may enable synchronization of frame rates associated with image capture devices122,124, and126, even where the line scan rates of each are different. Additionally, as will be discussed in greater detail below, these selectable timing controls, among other factors (e.g., image sensor resolution, maximum line scan rates, etc.) may enable synchronization of image capture from an area where the FOV of image capture device122overlaps with one or more FOVs of image capture devices124and126, even where the field of view of image capture device122is different from the FOVs of image capture devices124and126. Frame rate timing in image capture device122,124, and126may depend on the resolution of the associated image sensors. For example, assuming similar line scan rates for both devices, if one device includes an image sensor having a resolution of 640×480 and another device includes an image sensor with a resolution of 1280×960, then more time will be required to acquire a frame of image data from the sensor having the higher resolution. Another factor that may affect the timing of image data acquisition in image capture devices122,124, and126is the maximum line scan rate. For example, acquisition of a row of image data from an image sensor included in image capture device122,124, and126will require some minimum amount of time. Assuming no pixel delay periods are added, this minimum amount of time for acquisition of a row of image data will be related to the maximum line scan rate for a particular device. Devices that offer higher maximum line scan rates have the potential to provide higher frame rates than devices with lower maximum line scan rates. In some embodiments, one or more of image capture devices124and126may have a maximum line scan rate that is higher than a maximum line scan rate associated with image capture device122. 
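As a non-limiting worked example of the relationship described above between sensor resolution, maximum line scan rate, and frame timing, the following sketch approximates frame acquisition time as the number of rows divided by the maximum line scan rate; the 45 kHz rate and the blanking term are illustrative assumptions.

```python
def frame_acquisition_time(rows: int, max_line_scan_rate_hz: float,
                           vertical_blanking_lines: int = 0) -> float:
    """Approximate the time to read one frame with a rolling shutter.

    Each row takes at least 1 / (maximum line scan rate) seconds, so a
    higher-resolution sensor needs longer per frame; values are illustrative.
    """
    return (rows + vertical_blanking_lines) / max_line_scan_rate_hz

# Example: at a 45 kHz line scan rate, a 960-row sensor needs roughly twice as long
# per frame as a 480-row sensor, matching the 640x480 vs. 1280x960 comparison above.
t_low_res = frame_acquisition_time(rows=480, max_line_scan_rate_hz=45_000)   # ~0.0107 s
t_high_res = frame_acquisition_time(rows=960, max_line_scan_rate_hz=45_000)  # ~0.0213 s
```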
In some embodiments, the maximum line scan rate of image capture device124and/or126may be 1.25, 1.5, 1.75, or 2 times or more than a maximum line scan rate of image capture device122. In another embodiment, image capture devices122,124, and126may have the same maximum line scan rate, but image capture device122may be operated at a scan rate less than or equal to its maximum scan rate. The system may be configured such that one or more of image capture devices124and126operate at a line scan rate that is equal to the line scan rate of image capture device122. In other instances, the system may be configured such that the line scan rate of image capture device124and/or image capture device126may be 1.25, 1.5, 1.75, or 2 times or more than the line scan rate of image capture device122. In some embodiments, image capture devices122,124, and126may be asymmetric. That is, they may include cameras having different fields of view (FOV) and focal lengths. The fields of view of image capture devices122,124, and126may include any desired area relative to an environment of vehicle200, for example. In some embodiments, one or more of image capture devices122,124, and126may be configured to acquire image data from an environment in front of vehicle200, behind vehicle200, to the sides of vehicle200, or combinations thereof. Further, the focal length associated with each image capture device122,124, and/or126may be selectable (e.g., by inclusion of appropriate lenses etc.) such that each device acquires images of objects at a desired distance range relative to vehicle200. For example, in some embodiments image capture devices122,124, and126may acquire images of close-up objects within a few meters from the vehicle. Image capture devices122,124, and126may also be configured to acquire images of objects at ranges more distant from the vehicle (e.g., 25 m, 50 m, 100 m, 150 m, or more). Further, the focal lengths of image capture devices122,124, and126may be selected such that one image capture device (e.g., image capture device122) can acquire images of objects relatively close to the vehicle (e.g., within 10 m or within 20 m) while the other image capture devices (e.g., image capture devices124and126) can acquire images of more distant objects (e.g., greater than 20 m, 50 m, 100 m, 150 m, etc.) from vehicle200. According to some embodiments, the FOV of one or more image capture devices122,124, and126may have a wide angle. For example, it may be advantageous to have a FOV of 140 degrees, especially for image capture devices122,124, and126that may be used to capture images of the area in the vicinity of vehicle200. For example, image capture device122may be used to capture images of the area to the right or left of vehicle200and, in such embodiments, it may be desirable for image capture device122to have a wide FOV (e.g., at least 140 degrees). The field of view associated with each of image capture devices122,124, and126may depend on the respective focal lengths. For example, as the focal length increases, the corresponding field of view decreases. Image capture devices122,124, and126may be configured to have any suitable fields of view. In one particular example, image capture device122may have a horizontal FOV of 46 degrees, image capture device124may have a horizontal FOV of 23 degrees, and image capture device126may have a horizontal FOV in between 23 and 46 degrees. 
In another instance, image capture device122may have a horizontal FOV of 52 degrees, image capture device124may have a horizontal FOV of 26 degrees, and image capture device126may have a horizontal FOV in between 26 and 52 degrees. In some embodiments, a ratio of the FOV of image capture device122to the FOVs of image capture device124and/or image capture device126may vary from 1.5 to 2.0. In other embodiments, this ratio may vary between 1.25 and 2.25. System100may be configured so that a field of view of image capture device122overlaps, at least partially or fully, with a field of view of image capture device124and/or image capture device126. In some embodiments, system100may be configured such that the fields of view of image capture devices124and126, for example, fall within (e.g., are narrower than) and share a common center with the field of view of image capture device122. In other embodiments, the image capture devices122,124, and126may capture adjacent FOVs or may have partial overlap in their FOVs. In some embodiments, the fields of view of image capture devices122,124, and126may be aligned such that a center of the narrower FOV image capture devices124and/or126may be located in a lower half of the field of view of the wider FOV device122. FIG.2Fis a diagrammatic representation of exemplary vehicle control systems, consistent with the disclosed embodiments. As indicated inFIG.2F, vehicle200may include throttling system220, braking system230, and steering system240. System100may provide inputs (e.g., control signals) to one or more of throttling system220, braking system230, and steering system240over one or more data links (e.g., any wired and/or wireless link or links for transmitting data). For example, based on analysis of images acquired by image capture devices122,124, and/or126, system100may provide control signals to one or more of throttling system220, braking system230, and steering system240to navigate vehicle200(e.g., by causing an acceleration, a turn, a lane shift, etc.). Further, system100may receive inputs from one or more of throttling system220, braking system230, and steering system240indicating operating conditions of vehicle200(e.g., speed, whether vehicle200is braking and/or turning, etc.). Further details are provided in connection withFIGS.4-7, below. As shown inFIG.3A, vehicle200may also include a user interface170for interacting with a driver or a passenger of vehicle200. For example, user interface170in a vehicle application may include a touch screen320, knobs330, buttons340, and a microphone350. A driver or passenger of vehicle200may also use handles (e.g., located on or near the steering column of vehicle200including, for example, turn signal handles), buttons (e.g., located on the steering wheel of vehicle200), and the like, to interact with system100. In some embodiments, microphone350may be positioned adjacent to a rearview mirror310. Similarly, in some embodiments, image capture device122may be located near rearview mirror310. In some embodiments, user interface170may also include one or more speakers360(e.g., speakers of a vehicle audio system). For example, system100may provide various notifications (e.g., alerts) via speakers360. FIGS.3B-3Dare illustrations of an exemplary camera mount370configured to be positioned behind a rearview mirror (e.g., rearview mirror310) and against a vehicle windshield, consistent with disclosed embodiments. As shown inFIG.3B, camera mount370may include image capture devices122,124, and126.
Image capture devices124and126may be positioned behind a glare shield380, which may be flush against the vehicle windshield and include a composition of film and/or anti-reflective materials. For example, glare shield380may be positioned such that the shield aligns against a vehicle windshield having a matching slope. In some embodiments, each of image capture devices122,124, and126may be positioned behind glare shield380, as depicted, for example, inFIG.3D. The disclosed embodiments are not limited to any particular configuration of image capture devices122,124, and126, camera mount370, and glare shield380.FIG.3Cis an illustration of camera mount370shown inFIG.3Bfrom a front perspective. As will be appreciated by a person skilled in the art having the benefit of this disclosure, numerous variations and/or modifications may be made to the foregoing disclosed embodiments. For example, not all components are essential for the operation of system100. Further, any component may be located in any appropriate part of system100and the components may be rearranged into a variety of configurations while providing the functionality of the disclosed embodiments. Therefore, the foregoing configurations are examples and, regardless of the configurations discussed above, system100can provide a wide range of functionality to analyze the surroundings of vehicle200and navigate vehicle200in response to the analysis. As discussed below in further detail and consistent with various disclosed embodiments, system100may provide a variety of features related to autonomous driving and/or driver assist technology. For example, system100may analyze image data, position data (e.g., GPS location information), map data, speed data, and/or data from sensors included in vehicle200. System100may collect the data for analysis from, for example, image acquisition unit120, position sensor130, and other sensors. Further, system100may analyze the collected data to determine whether or not vehicle200should take a certain action, and then automatically take the determined action without human intervention. For example, when vehicle200navigates without human intervention, system100may automatically control the braking, acceleration, and/or steering of vehicle200(e.g., by sending control signals to one or more of throttling system220, braking system230, and steering system240). Further, system100may analyze the collected data and issue warnings and/or alerts to vehicle occupants based on the analysis of the collected data. Additional details regarding the various embodiments that are provided by system100are provided below. Forward-Facing Multi-Imaging System As discussed above, system100may provide drive assist functionality that uses a multi-camera system. The multi-camera system may use one or more cameras facing in the forward direction of a vehicle. In other embodiments, the multi-camera system may include one or more cameras facing to the side of a vehicle or to the rear of the vehicle. In one embodiment, for example, system100may use a two-camera imaging system, where a first camera and a second camera (e.g., image capture devices122and124) may be positioned at the front and/or the sides of a vehicle (e.g., vehicle200). The first camera may have a field of view that is greater than, less than, or partially overlapping with, the field of view of the second camera. 
In addition, the first camera may be connected to a first image processor to perform monocular image analysis of images provided by the first camera, and the second camera may be connected to a second image processor to perform monocular image analysis of images provided by the second camera. The outputs (e.g., processed information) of the first and second image processors may be combined. In some embodiments, the second image processor may receive images from both the first camera and second camera to perform stereo analysis. In another embodiment, system100may use a three-camera imaging system where each of the cameras has a different field of view. Such a system may, therefore, make decisions based on information derived from objects located at varying distances both forward and to the sides of the vehicle. References to monocular image analysis may refer to instances where image analysis is performed based on images captured from a single point of view (e.g., from a single camera). Stereo image analysis may refer to instances where image analysis is performed based on two or more images captured with one or more variations of an image capture parameter. For example, captured images suitable for performing stereo image analysis may include images captured: from two or more different positions, from different fields of view, using different focal lengths, along with parallax information, etc. For example, in one embodiment, system100may implement a three camera configuration using image capture devices122,124, and126. In such a configuration, image capture device122may provide a narrow field of view (e.g., 34 degrees, or other values selected from a range of about 20 to 45 degrees, etc.), image capture device124may provide a wide field of view (e.g., 150 degrees or other values selected from a range of about 100 to about 180 degrees), and image capture device126may provide an intermediate field of view (e.g., 46 degrees or other values selected from a range of about 35 to about 60 degrees). In some embodiments, image capture device126may act as a main or primary camera. Image capture devices122,124, and126may be positioned behind rearview mirror310and positioned substantially side-by-side (e.g., 6 cm apart). Further, in some embodiments, as discussed above, one or more of image capture devices122,124, and126may be mounted behind glare shield380that is flush with the windshield of vehicle200. Such shielding may act to minimize the impact of any reflections from inside the car on image capture devices122,124, and126. In another embodiment, as discussed above in connection withFIGS.3B and3C, the wide field of view camera (e.g., image capture device124in the above example) may be mounted lower than the narrow and main field of view cameras (e.g., image devices122and126in the above example). This configuration may provide a free line of sight from the wide field of view camera. To reduce reflections, the cameras may be mounted close to the windshield of vehicle200, and may include polarizers on the cameras to damp reflected light. A three camera system may provide certain performance characteristics. For example, some embodiments may include an ability to validate the detection of objects by one camera based on detection results from another camera. 
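By way of illustration only, the following sketch shows one simple way the cross-camera validation mentioned above could be expressed: a detection from one camera is treated as corroborated when another camera reports an object of the same class at a consistent estimated position. The matching rule, field names, and tolerance are assumptions made for the example.

```python
# Illustrative sketch (assumed data layout and tolerance): a detection from one
# camera counts as validated when another camera reports an object of the same
# class at a consistent estimated (x, z) position.

def validated(detection, other_camera_detections, max_dist_m=1.5):
    for other in other_camera_detections:
        dx = other["pos"][0] - detection["pos"][0]
        dz = other["pos"][1] - detection["pos"][1]
        if other["cls"] == detection["cls"] and (dx * dx + dz * dz) ** 0.5 <= max_dist_m:
            return True
    return False

main_cam_detection = {"cls": "vehicle", "pos": (1.2, 34.0)}      # x, z in meters
narrow_cam_detections = [{"cls": "vehicle", "pos": (1.4, 33.2)}]
print(validated(main_cam_detection, narrow_cam_detections))      # True: corroborated
```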
In the three camera configuration discussed above, processing unit110may include, for example, three processing devices (e.g., three EyeQ series of processor chips, as discussed above), with each processing device dedicated to processing images captured by one or more of image capture devices122,124, and126. In a three camera system, a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing of the narrow FOV camera to, for example, detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects. Further, the first processing device may calculate a disparity of pixels between the images from the main camera and the narrow camera and create a 3D reconstruction of the environment of vehicle200. The first processing device may then combine the 3D reconstruction with 3D map data or with 3D information calculated based on information from another camera. The second processing device may receive images from main camera and perform vision processing to detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects. Additionally, the second processing device may calculate a camera displacement and, based on the displacement, calculate a disparity of pixels between successive images and create a 3D reconstruction of the scene (e.g., a structure from motion). The second processing device may send the structure from motion based 3D reconstruction to the first processing device to be combined with the stereo 3D images. The third processing device may receive images from the wide FOV camera and process the images to detect vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects. The third processing device may further execute additional processing instructions to analyze images to identify objects moving in the image, such as vehicles changing lanes, pedestrians, etc. In some embodiments, having streams of image-based information captured and processed independently may provide an opportunity for providing redundancy in the system. Such redundancy may include, for example, using a first image capture device and the images processed from that device to validate and/or supplement information obtained by capturing and processing image information from at least a second image capture device. In some embodiments, system100may use two image capture devices (e.g., image capture devices122and124) in providing navigation assistance for vehicle200and use a third image capture device (e.g., image capture device126) to provide redundancy and validate the analysis of data received from the other two image capture devices. For example, in such a configuration, image capture devices122and124may provide images for stereo analysis by system100for navigating vehicle200, while image capture device126may provide images for monocular analysis by system100to provide redundancy and validation of information obtained based on images captured from image capture device122and/or image capture device124. That is, image capture device126(and a corresponding processing device) may be considered to provide a redundant sub-system for providing a check on the analysis derived from image capture devices122and124(e.g., to provide an automatic emergency braking (AEB) system). 
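By way of illustration only, the pixel-disparity step described above can be related to depth using the standard rectified-stereo relation depth = focal length (in pixels) x baseline / disparity. The focal length and baseline below are assumed example values, not parameters of the disclosed cameras.

```python
# Illustrative sketch (assumed focal length and baseline): depth from pixel
# disparity for a calibrated, rectified camera pair,
# depth = focal_length_px * baseline_m / disparity_px.

def depth_from_disparity_m(disparity_px, focal_length_px, baseline_m):
    if disparity_px <= 0.0:
        return float("inf")          # no measurable parallax
    return focal_length_px * baseline_m / disparity_px

# With cameras roughly 6 cm apart and an assumed focal length of 1400 pixels,
# a 4-pixel disparity corresponds to about 21 m of depth.
print(depth_from_disparity_m(4.0, 1400.0, 0.06))
```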
Furthermore, in some embodiments, redundancy and validation of received data may be supplemented based on information received from one or more sensors (e.g., radar, lidar, acoustic sensors, information received from one or more transceivers outside of a vehicle, etc.). One of skill in the art will recognize that the above camera configurations, camera placements, number of cameras, camera locations, etc., are examples only. These components and others described relative to the overall system may be assembled and used in a variety of different configurations without departing from the scope of the disclosed embodiments. Further details regarding usage of a multi-camera system to provide driver assist and/or autonomous vehicle functionality follow below. FIG.4is an exemplary functional block diagram of memory140and/or150, which may be stored/programmed with instructions for performing one or more operations consistent with the disclosed embodiments. Although the following refers to memory140, one of skill in the art will recognize that instructions may be stored in memory140and/or150. As shown inFIG.4, memory140may store a monocular image analysis module402, a stereo image analysis module404, a velocity and acceleration module406, and a navigational response module408. The disclosed embodiments are not limited to any particular configuration of memory140. Further, application processor180and/or image processor190may execute the instructions stored in any of modules402,404,406, and408included in memory140. One of skill in the art will understand that references in the following discussions to processing unit110may refer to application processor180and image processor190individually or collectively. Accordingly, steps of any of the following processes may be performed by one or more processing devices. In one embodiment, monocular image analysis module402may store instructions (such as computer vision software) which, when executed by processing unit110, performs monocular image analysis of a set of images acquired by one of image capture devices122,124, and126. In some embodiments, processing unit110may combine information from a set of images with additional sensory information (e.g., information from radar, lidar, etc.) to perform the monocular image analysis. As described in connection withFIGS.5A-5Dbelow, monocular image analysis module402may include instructions for detecting a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and any other feature associated with an environment of a vehicle. Based on the analysis, system100(e.g., via processing unit110) may cause one or more navigational responses in vehicle200, such as a turn, a lane shift, a change in acceleration, and the like, as discussed below in connection with navigational response module408. In one embodiment, stereo image analysis module404may store instructions (such as computer vision software) which, when executed by processing unit110, performs stereo image analysis of first and second sets of images acquired by a combination of image capture devices selected from any of image capture devices122,124, and126. In some embodiments, processing unit110may combine information from the first and second sets of images with additional sensory information (e.g., information from radar) to perform the stereo image analysis.
For example, stereo image analysis module404may include instructions for performing stereo image analysis based on a first set of images acquired by image capture device124and a second set of images acquired by image capture device126. As described in connection withFIG.6below, stereo image analysis module404may include instructions for detecting a set of features within the first and second sets of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and the like. Based on the analysis, processing unit110may cause one or more navigational responses in vehicle200, such as a turn, a lane shift, a change in acceleration, and the like, as discussed below in connection with navigational response module408. Furthermore, in some embodiments, stereo image analysis module404may implement techniques associated with a trained system (such as a neural network or a deep neural network) or an untrained system, such as a system that may be configured to use computer vision algorithms to detect and/or label objects in an environment from which sensory information was captured and processed. In one embodiment, stereo image analysis module404and/or other image processing modules may be configured to use a combination of a trained and untrained system. In one embodiment, velocity and acceleration module406may store software configured to analyze data received from one or more computing and electromechanical devices in vehicle200that are configured to cause a change in velocity and/or acceleration of vehicle200. For example, processing unit110may execute instructions associated with velocity and acceleration module406to calculate a target speed for vehicle200based on data derived from execution of monocular image analysis module402and/or stereo image analysis module404. Such data may include, for example, a target position, velocity, and/or acceleration, the position and/or speed of vehicle200relative to a nearby vehicle, pedestrian, or road object, position information for vehicle200relative to lane markings of the road, and the like. In addition, processing unit110may calculate a target speed for vehicle200based on sensory input (e.g., information from radar) and input from other systems of vehicle200, such as throttling system220, braking system230, and/or steering system240of vehicle200. Based on the calculated target speed, processing unit110may transmit electronic signals to throttling system220, braking system230, and/or steering system240of vehicle200to trigger a change in velocity and/or acceleration by, for example, physically depressing the brake or easing up off the accelerator of vehicle200. In one embodiment, navigational response module408may store software executable by processing unit110to determine a desired navigational response based on data derived from execution of monocular image analysis module402and/or stereo image analysis module404. Such data may include position and speed information associated with nearby vehicles, pedestrians, and road objects, target position information for vehicle200, and the like. Additionally, in some embodiments, the navigational response may be based (partially or fully) on map data, a predetermined position of vehicle200, and/or a relative velocity or a relative acceleration between vehicle200and one or more objects detected from execution of monocular image analysis module402and/or stereo image analysis module404. 
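By way of illustration only, and referring back to velocity and acceleration module406described above, the following sketch shows one simple time-gap policy for selecting a target speed from a set speed, a lead vehicle speed, and a measured gap. The policy and its constants are assumptions made for the example, not the disclosed algorithm.

```python
# Illustrative sketch (assumed policy and constants): choosing a target speed
# from the ego set speed, the lead vehicle's speed, and the measured gap.

def target_speed_mps(set_speed_mps, lead_speed_mps, gap_m,
                     time_gap_s=1.8, k_gap=0.25):
    if gap_m is None:                     # no relevant lead vehicle detected
        return set_speed_mps
    desired_gap_m = max(lead_speed_mps * time_gap_s, 2.0)
    # Track the lead vehicle's speed, corrected toward the desired gap,
    # without exceeding the set speed or commanding a negative speed.
    corrected = lead_speed_mps + k_gap * (gap_m - desired_gap_m)
    return max(0.0, min(set_speed_mps, corrected))

print(target_speed_mps(set_speed_mps=30.0, lead_speed_mps=22.0, gap_m=25.0))
```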
Navigational response module408may also determine a desired navigational response based on sensory input (e.g., information from radar) and inputs from other systems of vehicle200, such as throttling system220, braking system230, and steering system240of vehicle200. Based on the desired navigational response, processing unit110may transmit electronic signals to throttling system220, braking system230, and steering system240of vehicle200to trigger a desired navigational response by, for example, turning the steering wheel of vehicle200to achieve a rotation of a predetermined angle. In some embodiments, processing unit110may use the output of navigational response module408(e.g., the desired navigational response) as an input to execution of velocity and acceleration module406for calculating a change in speed of vehicle200. Furthermore, any of the modules (e.g., modules402,404, and406) disclosed herein may implement techniques associated with a trained system (such as a neural network or a deep neural network) or an untrained system. FIG.5Ais a flowchart showing an exemplary process500A for causing one or more navigational responses based on monocular image analysis, consistent with disclosed embodiments. At step510, processing unit110may receive a plurality of images via data interface128between processing unit110and image acquisition unit120. For instance, a camera included in image acquisition unit120(such as image capture device122having field of view202) may capture a plurality of images of an area forward of vehicle200(or to the sides or rear of a vehicle, for example) and transmit them over a data connection (e.g., digital, wired, USB, wireless, Bluetooth, etc.) to processing unit110. Processing unit110may execute monocular image analysis module402to analyze the plurality of images at step520, as described in further detail in connection withFIGS.5B-5Dbelow. By performing the analysis, processing unit110may detect a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, and the like. Processing unit110may also execute monocular image analysis module402to detect various road hazards at step520, such as, for example, parts of a truck tire, fallen road signs, loose cargo, small animals, and the like. Road hazards may vary in structure, shape, size, and color, which may make detection of such hazards more challenging. In some embodiments, processing unit110may execute monocular image analysis module402to perform multi-frame analysis on the plurality of images to detect road hazards. For example, processing unit110may estimate camera motion between consecutive image frames and calculate the disparities in pixels between the frames to construct a 3D-map of the road. Processing unit110may then use the 3D-map to detect the road surface, as well as hazards existing above the road surface. At step530, processing unit110may execute navigational response module408to cause one or more navigational responses in vehicle200based on the analysis performed at step520and the techniques as described above in connection withFIG.4. Navigational responses may include, for example, a turn, a lane shift, a change in acceleration, and the like. In some embodiments, processing unit110may use data derived from execution of velocity and acceleration module406to cause the one or more navigational responses. Additionally, multiple navigational responses may occur simultaneously, in sequence, or any combination thereof. 
For instance, processing unit110may cause vehicle200to shift one lane over and then accelerate by, for example, sequentially transmitting control signals to steering system240and throttling system220of vehicle200. Alternatively, processing unit110may cause vehicle200to brake while at the same time shifting lanes by, for example, simultaneously transmitting control signals to braking system230and steering system240of vehicle200. FIG.5Bis a flowchart showing an exemplary process500B for detecting one or more vehicles and/or pedestrians in a set of images, consistent with disclosed embodiments. Processing unit110may execute monocular image analysis module402to implement process500B. At step540, processing unit110may determine a set of candidate objects representing possible vehicles and/or pedestrians. For example, processing unit110may scan one or more images, compare the images to one or more predetermined patterns, and identify within each image possible locations that may contain objects of interest (e.g., vehicles, pedestrians, or portions thereof). The predetermined patterns may be designed in such a way to achieve a high rate of “false hits” and a low rate of “misses.” For example, processing unit110may use a low threshold of similarity to predetermined patterns for identifying candidate objects as possible vehicles or pedestrians. Doing so may allow processing unit110to reduce the probability of missing (e.g., not identifying) a candidate object representing a vehicle or pedestrian. At step542, processing unit110may filter the set of candidate objects to exclude certain candidates (e.g., irrelevant or less relevant objects) based on classification criteria. Such criteria may be derived from various properties associated with object types stored in a database (e.g., a database stored in memory140). Properties may include object shape, dimensions, texture, position (e.g., relative to vehicle200), and the like. Thus, processing unit110may use one or more sets of criteria to reject false candidates from the set of candidate objects. At step544, processing unit110may analyze multiple frames of images to determine whether objects in the set of candidate objects represent vehicles and/or pedestrians. For example, processing unit110may track a detected candidate object across consecutive frames and accumulate frame-by-frame data associated with the detected object (e.g., size, position relative to vehicle200, etc.). Additionally, processing unit110may estimate parameters for the detected object and compare the object's frame-by-frame position data to a predicted position. At step546, processing unit110may construct a set of measurements for the detected objects. Such measurements may include, for example, position, velocity, and acceleration values (relative to vehicle200) associated with the detected objects. In some embodiments, processing unit110may construct the measurements based on estimation techniques using a series of time-based observations such as Kalman filters or linear quadratic estimation (LQE), and/or based on available modeling data for different object types (e.g., cars, trucks, pedestrians, bicycles, road signs, etc.). The Kalman filters may be based on a measurement of an object's scale, where the scale measurement is proportional to a time to collision (e.g., the amount of time for vehicle200to reach the object). 
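By way of illustration only, the scale-based relationship noted above can be expressed as follows: because apparent image scale is inversely proportional to distance, a constant-closing-speed estimate of time to collision is the frame interval divided by the relative growth in scale. Names and values are assumed for the example.

```python
# Illustrative sketch (assumed names, constant closing speed): time to
# collision from the change in an object's apparent scale between two frames.

def time_to_collision_s(scale_prev, scale_curr, dt_s):
    """Image scale is inversely proportional to distance, so with a constant
    closing speed TTC = dt / (scale_curr / scale_prev - 1)."""
    growth = scale_curr / scale_prev - 1.0
    if growth <= 0.0:
        return float("inf")          # not closing on the object
    return dt_s / growth

# A bounding box that widens from 50 to 52 pixels over 100 ms gives a TTC of
# about 2.5 seconds.
print(time_to_collision_s(50.0, 52.0, 0.1))
```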
Thus, by performing steps540-546, processing unit110may identify vehicles and pedestrians appearing within the set of captured images and derive information (e.g., position, speed, size) associated with the vehicles and pedestrians. Based on the identification and the derived information, processing unit110may cause one or more navigational responses in vehicle200, as described in connection withFIG.5A, above. At step548, processing unit110may perform an optical flow analysis of one or more images to reduce the probabilities of detecting a “false hit” and missing a candidate object that represents a vehicle or pedestrian. The optical flow analysis may refer to, for example, analyzing motion patterns relative to vehicle200in the one or more images associated with other vehicles and pedestrians, and that are distinct from road surface motion. Processing unit110may calculate the motion of candidate objects by observing the different positions of the objects across multiple image frames, which are captured at different times. Processing unit110may use the position and time values as inputs into mathematical models for calculating the motion of the candidate objects. Thus, optical flow analysis may provide another method of detecting vehicles and pedestrians that are nearby vehicle200. Processing unit110may perform optical flow analysis in combination with steps540-546to provide redundancy for detecting vehicles and pedestrians and increase the reliability of system100. FIG.5Cis a flowchart showing an exemplary process500C for detecting road marks and/or lane geometry information in a set of images, consistent with disclosed embodiments. Processing unit110may execute monocular image analysis module402to implement process500C. At step550, processing unit110may detect a set of objects by scanning one or more images. To detect segments of lane markings, lane geometry information, and other pertinent road marks, processing unit110may filter the set of objects to exclude those determined to be irrelevant (e.g., minor potholes, small rocks, etc.). At step552, processing unit110may group together the segments detected in step550belonging to the same road mark or lane mark. Based on the grouping, processing unit110may develop a model to represent the detected segments, such as a mathematical model. At step554, processing unit110may construct a set of measurements associated with the detected segments. In some embodiments, processing unit110may create a projection of the detected segments from the image plane onto the real-world plane. The projection may be characterized using a 3rd-degree polynomial having coefficients corresponding to physical properties such as the position, slope, curvature, and curvature derivative of the detected road. In generating the projection, processing unit110may take into account changes in the road surface, as well as pitch and roll rates associated with vehicle200. In addition, processing unit110may model the road elevation by analyzing position and motion cues present on the road surface. Further, processing unit110may estimate the pitch and roll rates associated with vehicle200by tracking a set of feature points in the one or more images. At step556, processing unit110may perform multi-frame analysis by, for example, tracking the detected segments across consecutive image frames and accumulating frame-by-frame data associated with detected segments. 
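By way of illustration only, the step554projection described above can be sketched as fitting a 3rd-degree polynomial to lane-mark points on the road plane; the polynomial coefficients then relate to lateral position, slope, curvature, and curvature derivative. The sample points below are assumed values for the example.

```python
import numpy as np

# Illustrative sketch (assumed sample points): lane-mark points projected onto
# the road plane, represented by a 3rd-degree polynomial x(z).

z = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])     # longitudinal, meters
x = np.array([1.80, 1.82, 1.86, 1.93, 2.03, 2.16])    # lateral offset, meters

# x(z) = c3*z**3 + c2*z**2 + c1*z + c0
c3, c2, c1, c0 = np.polyfit(z, x, deg=3)

# Under the usual small-slope approximation, c0 relates to lateral position at
# z = 0, c1 to slope, 2*c2 to curvature, and 6*c3 to the curvature derivative.
print(f"offset = {c0:.2f} m, slope = {c1:.4f}, curvature = {2 * c2:.5f} 1/m")
```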
As processing unit110performs multi-frame analysis, the set of measurements constructed at step554may become more reliable and associated with an increasingly higher confidence level. Thus, by performing steps550,552,554, and556, processing unit110may identify road marks appearing within the set of captured images and derive lane geometry information. Based on the identification and the derived information, processing unit110may cause one or more navigational responses in vehicle200, as described in connection withFIG.5A, above. At step558, processing unit110may consider additional sources of information to further develop a safety model for vehicle200in the context of its surroundings. Processing unit110may use the safety model to define a context in which system100may execute autonomous control of vehicle200in a safe manner. To develop the safety model, in some embodiments, processing unit110may consider the position and motion of other vehicles, the detected road edges and barriers, and/or general road shape descriptions extracted from map data (such as data from map database160). By considering additional sources of information, processing unit110may provide redundancy for detecting road marks and lane geometry and increase the reliability of system100. FIG.5Dis a flowchart showing an exemplary process500D for detecting traffic lights in a set of images, consistent with disclosed embodiments. Processing unit110may execute monocular image analysis module402to implement process500D. At step560, processing unit110may scan the set of images and identify objects appearing at locations in the images likely to contain traffic lights. For example, processing unit110may filter the identified objects to construct a set of candidate objects, excluding those objects unlikely to correspond to traffic lights. The filtering may be done based on various properties associated with traffic lights, such as shape, dimensions, texture, position (e.g., relative to vehicle200), and the like. Such properties may be based on multiple examples of traffic lights and traffic control signals and stored in a database. In some embodiments, processing unit110may perform multi-frame analysis on the set of candidate objects reflecting possible traffic lights. For example, processing unit110may track the candidate objects across consecutive image frames, estimate the real-world position of the candidate objects, and filter out those objects that are moving (which are unlikely to be traffic lights). In some embodiments, processing unit110may perform color analysis on the candidate objects and identify the relative position of the detected colors appearing inside possible traffic lights. At step562, processing unit110may analyze the geometry of a junction. The analysis may be based on any combination of: (i) the number of lanes detected on either side of vehicle200, (ii) markings (such as arrow marks) detected on the road, and (iii) descriptions of the junction extracted from map data (such as data from map database160). Processing unit110may conduct the analysis using information derived from execution of monocular analysis module402. In addition, Processing unit110may determine a correspondence between the traffic lights detected at step560and the lanes appearing near vehicle200. As vehicle200approaches the junction, at step564, processing unit110may update the confidence level associated with the analyzed junction geometry and the detected traffic lights. 
For instance, the number of traffic lights estimated to appear at the junction as compared with the number actually appearing at the junction may impact the confidence level. Thus, based on the confidence level, processing unit110may delegate control to the driver of vehicle200in order to improve safety conditions. By performing steps560,562, and564, processing unit110may identify traffic lights appearing within the set of captured images and analyze junction geometry information. Based on the identification and the analysis, processing unit110may cause one or more navigational responses in vehicle200, as described in connection withFIG.5A, above. FIG.5Eis a flowchart showing an exemplary process500E for causing one or more navigational responses in vehicle200based on a vehicle path, consistent with the disclosed embodiments. At step570, processing unit110may construct an initial vehicle path associated with vehicle200. The vehicle path may be represented using a set of points expressed in coordinates (x, z), and the distance di between two points in the set of points may fall in the range of 1 to 5 meters. In one embodiment, processing unit110may construct the initial vehicle path using two polynomials, such as left and right road polynomials. Processing unit110may calculate the geometric midpoint between the two polynomials and offset each point included in the resultant vehicle path by a predetermined offset (e.g., a smart lane offset), if any (an offset of zero may correspond to travel in the middle of a lane). The offset may be in a direction perpendicular to a segment between any two points in the vehicle path. In another embodiment, processing unit110may use one polynomial and an estimated lane width to offset each point of the vehicle path by half the estimated lane width plus a predetermined offset (e.g., a smart lane offset). At step572, processing unit110may update the vehicle path constructed at step570. Processing unit110may reconstruct the vehicle path constructed at step570using a higher resolution, such that the distance dk between two points in the set of points representing the vehicle path is less than the distance di described above. For example, the distance dk may fall in the range of 0.1 to 0.3 meters. Processing unit110may reconstruct the vehicle path using a parabolic spline algorithm, which may yield a cumulative distance vector S corresponding to the total length of the vehicle path (i.e., based on the set of points representing the vehicle path). At step574, processing unit110may determine a look-ahead point (expressed in coordinates as (x1, z1)) based on the updated vehicle path constructed at step572. Processing unit110may extract the look-ahead point from the cumulative distance vector S, and the look-ahead point may be associated with a look-ahead distance and look-ahead time. The look-ahead distance, which may have a lower bound ranging from 10 to 20 meters, may be calculated as the product of the speed of vehicle200and the look-ahead time. For example, as the speed of vehicle200decreases, the look-ahead distance may also decrease (e.g., until it reaches the lower bound). The look-ahead time, which may range from 0.5 to 1.5 seconds, may be inversely proportional to the gain of one or more control loops associated with causing a navigational response in vehicle200, such as the heading error tracking control loop.
For example, the gain of the heading error tracking control loop may depend on the bandwidth of a yaw rate loop, a steering actuator loop, car lateral dynamics, and the like. Thus, the higher the gain of the heading error tracking control loop, the lower the look-ahead time. At step576, processing unit110may determine a heading error and yaw rate command based on the look-ahead point determined at step574. Processing unit110may determine the heading error by calculating the arctangent of the look-ahead point, e.g., arctan (x1/z1). Processing unit110may determine the yaw rate command as the product of the heading error and a high-level control gain. The high-level control gain may be equal to: (2/look-ahead time), if the look-ahead distance is not at the lower bound. Otherwise, the high-level control gain may be equal to: (2* speed of vehicle200/look-ahead distance). FIG.5Fis a flowchart showing an exemplary process500F for determining whether a leading vehicle is changing lanes, consistent with the disclosed embodiments. At step580, processing unit110may determine navigation information associated with a leading vehicle (e.g., a vehicle traveling ahead of vehicle200). For example, processing unit110may determine the position, velocity (e.g., direction and speed), and/or acceleration of the leading vehicle, using the techniques described in connection withFIGS.5A and5B, above. Processing unit110may also determine one or more road polynomials, a look-ahead point (associated with vehicle200), and/or a snail trail (e.g., a set of points describing a path taken by the leading vehicle), using the techniques described in connection withFIG.5E, above. At step582, processing unit110may analyze the navigation information determined at step580. In one embodiment, processing unit110may calculate the distance between a snail trail and a road polynomial (e.g., along the trail). If the variance of this distance along the trail exceeds a predetermined threshold (for example, 0.1 to 0.2 meters on a straight road, 0.3 to 0.4 meters on a moderately curvy road, and 0.5 to 0.6 meters on a road with sharp curves), processing unit110may determine that the leading vehicle is likely changing lanes. In the case where multiple vehicles are detected traveling ahead of vehicle200, processing unit110may compare the snail trails associated with each vehicle. Based on the comparison, processing unit110may determine that a vehicle whose snail trail does not match with the snail trails of the other vehicles is likely changing lanes. Processing unit110may additionally compare the curvature of the snail trail (associated with the leading vehicle) with the expected curvature of the road segment in which the leading vehicle is traveling. The expected curvature may be extracted from map data (e.g., data from map database160), from road polynomials, from other vehicles' snail trails, from prior knowledge about the road, and the like. If the difference in curvature of the snail trail and the expected curvature of the road segment exceeds a predetermined threshold, processing unit110may determine that the leading vehicle is likely changing lanes. In another embodiment, processing unit110may compare the leading vehicle's instantaneous position with the look-ahead point (associated with vehicle200) over a specific period of time (e.g., 0.5 to 1.5 seconds). 
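By way of illustration only, and referring back to steps574and576of process500E described above, the following sketch combines the look-ahead distance, heading error, and high-level control gain into a yaw rate command. Function and variable names are assumptions made for the example.

```python
import math

# Illustrative sketch (assumed names): look-ahead distance, heading error, and
# yaw rate command from steps 574-576, using the gains described above.

def yaw_rate_command(speed_mps, look_ahead_time_s, xl, zl, min_look_ahead_m=10.0):
    # Look-ahead distance is speed times look-ahead time, with a lower bound.
    look_ahead_dist_m = max(speed_mps * look_ahead_time_s, min_look_ahead_m)

    # Heading error from the look-ahead point coordinates: arctan(xl / zl).
    heading_error_rad = math.atan2(xl, zl)

    # High-level control gain: 2 / look-ahead time when the look-ahead distance
    # is not clamped at its lower bound, otherwise 2 * speed / distance.
    if speed_mps * look_ahead_time_s > min_look_ahead_m:
        gain = 2.0 / look_ahead_time_s
    else:
        gain = 2.0 * speed_mps / look_ahead_dist_m

    return gain * heading_error_rad      # rad/s; sign indicates turn direction

print(yaw_rate_command(speed_mps=20.0, look_ahead_time_s=1.0, xl=0.5, zl=20.0))
```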
If the distance between the leading vehicle's instantaneous position and the look-ahead point varies during the specific period of time, and the cumulative sum of variation exceeds a predetermined threshold (for example, 0.3 to 0.4 meters on a straight road, 0.7 to 0.8 meters on a moderately curvy road, and 1.3 to 1.7 meters on a road with sharp curves), processing unit110may determine that the leading vehicle is likely changing lanes. In another embodiment, processing unit110may analyze the geometry of the snail trail by comparing the lateral distance traveled along the trail with the expected curvature of the snail trail. The expected radius of curvature may be determined according to the calculation: (δz² + δx²)/2/(δx), where δx represents the lateral distance traveled and δz represents the longitudinal distance traveled. If the difference between the lateral distance traveled and the expected curvature exceeds a predetermined threshold (e.g., 500 to 700 meters), processing unit110may determine that the leading vehicle is likely changing lanes. In another embodiment, processing unit110may analyze the position of the leading vehicle. If the position of the leading vehicle obscures a road polynomial (e.g., the leading vehicle is overlaid on top of the road polynomial), then processing unit110may determine that the leading vehicle is likely changing lanes. In the case where the position of the leading vehicle is such that another vehicle is detected ahead of the leading vehicle and the snail trails of the two vehicles are not parallel, processing unit110may determine that the (closer) leading vehicle is likely changing lanes. At step584, processing unit110may determine whether or not the leading vehicle is changing lanes based on the analysis performed at step582. For example, processing unit110may make the determination based on a weighted average of the individual analyses performed at step582. Under such a scheme, for example, a decision by processing unit110that the leading vehicle is likely changing lanes based on a particular type of analysis may be assigned a value of “1” (and “0” to represent a determination that the leading vehicle is not likely changing lanes). Different analyses performed at step582may be assigned different weights, and the disclosed embodiments are not limited to any particular combination of analyses and weights. FIG.6is a flowchart showing an exemplary process600for causing one or more navigational responses based on stereo image analysis, consistent with disclosed embodiments. At step610, processing unit110may receive a first and second plurality of images via data interface128. For example, cameras included in image acquisition unit120(such as image capture devices122and124having fields of view202and204) may capture a first and second plurality of images of an area forward of vehicle200and transmit them over a digital connection (e.g., USB, wireless, Bluetooth, etc.) to processing unit110. In some embodiments, processing unit110may receive the first and second plurality of images via two or more data interfaces. The disclosed embodiments are not limited to any particular data interface configurations or protocols.
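Before continuing with process600, and by way of illustration only, two of the lane-change cues of process500F described above are sketched below: the expected radius of curvature computed from the lateral and longitudinal distances traveled, and a weighted average over the individual analyses of step582. The thresholds and weights are assumed example values.

```python
# Illustrative sketch (assumed thresholds and weights): two of the lane-change
# cues, plus a weighted average over the individual analyses of step 582.

def expected_radius_m(dx_lateral_m, dz_longitudinal_m):
    """(dz**2 + dx**2) / (2 * dx): arc radius implied by the snail trail."""
    if abs(dx_lateral_m) < 1e-6:
        return float("inf")                       # essentially straight
    return (dz_longitudinal_m ** 2 + dx_lateral_m ** 2) / (2.0 * abs(dx_lateral_m))

def leading_vehicle_changing_lanes(votes, weights):
    """votes: 1/0 outcomes of the individual analyses; weights: their weights."""
    return sum(v * w for v, w in zip(votes, weights)) / sum(weights) > 0.5

# Example: 0.4 m of lateral travel over 40 m of longitudinal travel implies a
# roughly 2000 m radius; two of three weighted cues indicate a lane change.
print(expected_radius_m(0.4, 40.0))
print(leading_vehicle_changing_lanes([1, 1, 0], [0.4, 0.4, 0.2]))
```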
At step620, processing unit110may execute stereo image analysis module404to perform stereo image analysis of the first and second plurality of images to create a 3D map of the road in front of the vehicle and detect features within the images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, road hazards, and the like. Stereo image analysis may be performed in a manner similar to the steps described in connection withFIGS.5A-5D, above. For example, processing unit110may execute stereo image analysis module404to detect candidate objects (e.g., vehicles, pedestrians, road marks, traffic lights, road hazards, etc.) within the first and second plurality of images, filter out a subset of the candidate objects based on various criteria, and perform multi-frame analysis, construct measurements, and determine a confidence level for the remaining candidate objects. In performing the steps above, processing unit110may consider information from both the first and second plurality of images, rather than information from one set of images alone. For example, processing unit110may analyze the differences in pixel-level data (or other data subsets from among the two streams of captured images) for a candidate object appearing in both the first and second plurality of images. As another example, processing unit110may estimate a position and/or velocity of a candidate object (e.g., relative to vehicle200) by observing that the object appears in one of the plurality of images but not the other or relative to other differences that may exist relative to objects appearing in the two image streams. For example, position, velocity, and/or acceleration relative to vehicle200may be determined based on trajectories, positions, movement characteristics, etc. of features associated with an object appearing in one or both of the image streams. At step630, processing unit110may execute navigational response module408to cause one or more navigational responses in vehicle200based on the analysis performed at step620and the techniques as described above in connection withFIG.4. Navigational responses may include, for example, a turn, a lane shift, a change in acceleration, a change in velocity, braking, and the like. In some embodiments, processing unit110may use data derived from execution of velocity and acceleration module406to cause the one or more navigational responses. Additionally, multiple navigational responses may occur simultaneously, in sequence, or any combination thereof. FIG.7is a flowchart showing an exemplary process700for causing one or more navigational responses based on an analysis of three sets of images, consistent with disclosed embodiments. At step710, processing unit110may receive a first, second, and third plurality of images via data interface128. For instance, cameras included in image acquisition unit120(such as image capture devices122,124, and126having fields of view202,204, and206) may capture a first, second, and third plurality of images of an area forward and/or to the side of vehicle200and transmit them over a digital connection (e.g., USB, wireless, Bluetooth, etc.) to processing unit110. In some embodiments, processing unit110may receive the first, second, and third plurality of images via three or more data interfaces. For example, each of image capture devices122,124,126may have an associated data interface for communicating data to processing unit110.
The disclosed embodiments are not limited to any particular data interface configurations or protocols. At step720, processing unit110may analyze the first, second, and third plurality of images to detect features within the images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, road hazards, and the like. The analysis may be performed in a manner similar to the steps described in connection withFIGS.5A-5D and6, above. For instance, processing unit110may perform monocular image analysis (e.g., via execution of monocular image analysis module402and based on the steps described in connection withFIGS.5A-5D, above) on each of the first, second, and third plurality of images. Alternatively, processing unit110may perform stereo image analysis (e.g., via execution of stereo image analysis module404and based on the steps described in connection withFIG.6, above) on the first and second plurality of images, the second and third plurality of images, and/or the first and third plurality of images. The processed information corresponding to the analysis of the first, second, and/or third plurality of images may be combined. In some embodiments, processing unit110may perform a combination of monocular and stereo image analyses. For example, processing unit110may perform monocular image analysis (e.g., via execution of monocular image analysis module402) on the first plurality of images and stereo image analysis (e.g., via execution of stereo image analysis module404) on the second and third plurality of images. The configuration of image capture devices122,124, and126—including their respective locations and fields of view202,204, and206—may influence the types of analyses conducted on the first, second, and third plurality of images. The disclosed embodiments are not limited to a particular configuration of image capture devices122,124, and126, or the types of analyses conducted on the first, second, and third plurality of images. In some embodiments, processing unit110may perform testing on system100based on the images acquired and analyzed at steps710and720. Such testing may provide an indicator of the overall performance of system100for certain configurations of image capture devices122,124, and126. For example, processing unit110may determine the proportion of “false hits” (e.g., cases where system100incorrectly determined the presence of a vehicle or pedestrian) and “misses.” At step730, processing unit110may cause one or more navigational responses in vehicle200based on information derived from two of the first, second, and third plurality of images. Selection of two of the first, second, and third plurality of images may depend on various factors, such as, for example, the number, types, and sizes of objects detected in each of the plurality of images. Processing unit110may also make the selection based on image quality and resolution, the effective field of view reflected in the images, the number of captured frames, the extent to which one or more objects of interest actually appear in the frames (e.g., the percentage of frames in which an object appears, the proportion of the object that appears in each such frame, etc.), and the like. In some embodiments, processing unit110may select information derived from two of the first, second, and third plurality of images by determining the extent to which information derived from one image source is consistent with information derived from other image sources. 
For example, processing unit110may combine the processed information derived from each of image capture devices122,124, and126(whether by monocular analysis, stereo analysis, or any combination of the two) and determine visual indicators (e.g., lane markings, a detected vehicle and its location and/or path, a detected traffic light, etc.) that are consistent across the images captured from each of image capture devices122,124, and126. Processing unit110may also exclude information that is inconsistent across the captured images (e.g., a vehicle changing lanes, a lane model indicating a vehicle that is too close to vehicle200, etc.). Thus, processing unit110may select information derived from two of the first, second, and third plurality of images based on the determinations of consistent and inconsistent information. Navigational responses may include, for example, a turn, a lane shift, a change in acceleration, and the like. Processing unit110may cause the one or more navigational responses based on the analysis performed at step720and the techniques as described above in connection withFIG.4. Processing unit110may also use data derived from execution of velocity and acceleration module406to cause the one or more navigational responses. In some embodiments, processing unit110may cause the one or more navigational responses based on a relative position, relative velocity, and/or relative acceleration between vehicle200and an object detected within any of the first, second, and third plurality of images. Multiple navigational responses may occur simultaneously, in sequence, or any combination thereof. Sparse Road Model for Autonomous Vehicle Navigation In some embodiments, the disclosed systems and methods may use a sparse map for autonomous vehicle navigation. In particular, the sparse map may be for autonomous vehicle navigation along a road segment. For example, the sparse map may provide sufficient information for navigating an autonomous vehicle without storing and/or updating a large quantity of data. As discussed below in further detail, an autonomous vehicle may use the sparse map to navigate one or more roads based on one or more stored trajectories. Sparse Map for Autonomous Vehicle Navigation In some embodiments, the disclosed systems and methods may generate a sparse map for autonomous vehicle navigation. For example, the sparse map may provide sufficient information for navigation without requiring excessive data storage or data transfer rates. As discussed below in further detail, a vehicle (which may be an autonomous vehicle) may use the sparse map to navigate one or more roads. For example, in some embodiments, the sparse map may include data related to a road and potentially landmarks along the road that may be sufficient for vehicle navigation, but which also exhibit small data footprints. For example, the sparse data maps described in detail below may require significantly less storage space and data transfer bandwidth as compared with digital maps including detailed map information, such as image data collected along a road. For example, rather than storing detailed representations of a road segment, the sparse data map may store three-dimensional polynomial representations of preferred vehicle paths along a road. These paths may require very little data storage space. Further, in the described sparse data maps, landmarks may be identified and included in the sparse map road model to aid in navigation. 
These landmarks may be located at any spacing suitable for enabling vehicle navigation, but in some cases, such landmarks need not be identified and included in the model at high densities and short spacings. Rather, in some cases, navigation may be possible based on landmarks that are spaced apart by at least 50 meters, at least 100 meters, at least 500 meters, at least 1 kilometer, or at least 2 kilometers. As will be discussed in more detail in other sections, the sparse map may be generated based on data collected or measured by vehicles equipped with various sensors and devices, such as image capture devices, Global Positioning System sensors, motion sensors, etc., as the vehicles travel along roadways. In some cases, the sparse map may be generated based on data collected during multiple drives of one or more vehicles along a particular roadway. Generating a sparse map using multiple drives of one or more vehicles may be referred to as “crowdsourcing” a sparse map. Consistent with disclosed embodiments, an autonomous vehicle system may use a sparse map for navigation. For example, the disclosed systems and methods may distribute a sparse map for generating a road navigation model for an autonomous vehicle and may navigate an autonomous vehicle along a road segment using a sparse map and/or a generated road navigation model. Sparse maps consistent with the present disclosure may include one or more three-dimensional contours that may represent predetermined trajectories that autonomous vehicles may traverse as they move along associated road segments. Sparse maps consistent with the present disclosure may also include data representing one or more road features. Such road features may include recognized landmarks, road signature profiles, and any other road-related features useful in navigating a vehicle. Sparse maps consistent with the present disclosure may enable autonomous navigation of a vehicle based on relatively small amounts of data included in the sparse map. For example, rather than including detailed representations of a road, such as road edges, road curvature, images associated with road segments, or data detailing other physical features associated with a road segment, the disclosed embodiments of the sparse map may require relatively little storage space (and relatively little bandwidth when portions of the sparse map are transferred to a vehicle) but may still adequately provide for autonomous vehicle navigation. The small data footprint of the disclosed sparse maps, discussed in further detail below, may be achieved in some embodiments by storing representations of road-related elements that require small amounts of data but still enable autonomous navigation. For example, rather than storing detailed representations of various aspects of a road, the disclosed sparse maps may store polynomial representations of one or more trajectories that a vehicle may follow along the road. Thus, rather than storing (or having to transfer) details regarding the physical nature of the road to enable navigation along the road, using the disclosed sparse maps, a vehicle may be navigated along a particular road segment without, in some cases, having to interpret physical aspects of the road, but rather, by aligning its path of travel with a trajectory (e.g., a polynomial spline) along the particular road segment. 
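By way of illustration only, the small-footprint trajectory representation discussed above can be sketched as fitting one low-degree polynomial per coordinate to a target path over a road segment, so that a handful of coefficients stand in for a dense list of points. The segment length, polynomial degree, and sample values are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (assumed segment length, degree, and shape): a target
# trajectory stored as one cubic polynomial per coordinate over a 100 m
# segment instead of a dense list of points.

s = np.linspace(0.0, 100.0, 200)                       # arc length, meters
xyz = np.stack([0.02 * s, np.sin(s / 40.0), np.zeros_like(s)], axis=1)

# 3 coordinates x 4 coefficients = 12 numbers describe the whole segment,
# versus 200 x 3 raw point samples.
coeffs = [np.polyfit(s, xyz[:, k], deg=3) for k in range(3)]

def trajectory_point(arc_len_m):
    """Evaluate the stored trajectory at a given arc length."""
    return np.array([np.polyval(c, arc_len_m) for c in coeffs])

print(trajectory_point(50.0))          # interpolated target position at 50 m
```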
In this way, the vehicle may be navigated based mainly upon the stored trajectory (e.g., a polynomial spline) that may require much less storage space than an approach involving storage of roadway images, road parameters, road layout, etc. In addition to the stored polynomial representations of trajectories along a road segment, the disclosed sparse maps may also include small data objects that may represent a road feature. In some embodiments, the small data objects may include digital signatures, which are derived from a digital image (or a digital signal) that was obtained by a sensor (e.g., a camera or other sensor, such as a suspension sensor) onboard a vehicle traveling along the road segment. The digital signature may have a reduced size relative to the signal that was acquired by the sensor. In some embodiments, the digital signature may be created to be compatible with a classifier function that is configured to detect and to identify the road feature from the signal that is acquired by the sensor, for example, during a subsequent drive. In some embodiments, a digital signature may be created such that the digital signature has a footprint that is as small as possible, while retaining the ability to correlate or match the road feature with the stored signature based on an image (or a digital signal generated by a sensor, if the stored signature is not based on an image and/or includes other data) of the road feature that is captured by a camera onboard a vehicle traveling along the same road segment at a subsequent time. In some embodiments, a size of the data objects may be further associated with a uniqueness of the road feature. For example, for a road feature that is detectable by a camera onboard a vehicle, and where the camera system onboard the vehicle is coupled to a classifier that is capable of distinguishing the image data corresponding to that road feature as being associated with a particular type of road feature, for example, a road sign, and where such a road sign is locally unique in that area (e.g., there is no identical road sign or road sign of the same type nearby), it may be sufficient to store data indicating the type of the road feature and its location. As will be discussed in further detail below, road features (e.g., landmarks along a road segment) may be stored as small data objects that may represent a road feature in relatively few bytes, while at the same time providing sufficient information for recognizing and using such a feature for navigation. In one example, a road sign may be identified as a recognized landmark on which navigation of a vehicle may be based. A representation of the road sign may be stored in the sparse map to include, e.g., a few bytes of data indicating a type of landmark (e.g., a stop sign) and a few bytes of data indicating a location of the landmark (e.g., coordinates). Navigating based on such data-light representations of the landmarks (e.g., using representations sufficient for locating, recognizing, and navigating based upon the landmarks) may provide a desired level of navigational functionality associated with sparse maps without significantly increasing the data overhead associated with the sparse maps. This lean representation of landmarks (and other road features) may take advantage of the sensors and processors included onboard such vehicles that are configured to detect, identify, and/or classify certain road features. 
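By way of illustration only, the following sketch shows one way such a reduced-size signature could be derived from an image patch of a detected road feature and later compared against a signature computed from a newly captured image. The grid size, the one-bit-per-cell encoding, and the Hamming-distance threshold are assumptions made for this example and are not prescribed by the disclosure.

```python
import numpy as np

def condensed_signature(patch: np.ndarray, grid: int = 8) -> bytes:
    """Reduce a grayscale image patch of a detected feature to a small, fixed-size signature.

    The patch (H x W, H and W >= grid) is averaged over a coarse grid and each cell is
    compared against the overall mean, yielding one bit per cell (64 bits for an 8x8 grid).
    """
    h, w = patch.shape
    cells = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cells[i, j] = patch[i * h // grid:(i + 1) * h // grid,
                                j * w // grid:(j + 1) * w // grid].mean()
    bits = (cells.flatten() > cells.mean()).astype(np.uint8)
    return np.packbits(bits).tobytes()  # 8 bytes for an 8x8 grid

def matches_stored_signature(sig_a: bytes, sig_b: bytes, max_hamming: int = 10) -> bool:
    """Compare a stored signature with one computed from a newly captured image."""
    a = np.unpackbits(np.frombuffer(sig_a, dtype=np.uint8))
    b = np.unpackbits(np.frombuffer(sig_b, dtype=np.uint8))
    return int(np.sum(a != b)) <= max_hamming
```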
When, for example, a sign or even a particular type of a sign is locally unique (e.g., when there is no other sign or no other sign of the same type) in a given area, the sparse map may use data indicating a type of a landmark (a sign or a specific type of sign), and during navigation (e.g., autonomous navigation) when a camera onboard an autonomous vehicle captures an image of the area including a sign (or of a specific type of sign), the processor may process the image, detect the sign (if indeed present in the image), classify the image as a sign (or as a specific type of sign), and correlate the location of the image with the location of the sign as stored in the sparse map. The sparse map may include any suitable representation of objects identified along a road segment. In some cases, the objects may be referred to as semantic objects or non-semantic objects. Semantic objects may include, for example, objects associated with a predetermined type classification. This type classification may be useful in reducing the amount of data required to describe the semantic object recognized in an environment, which can be beneficial both in the harvesting phase (e.g., to reduce costs associated with bandwidth use for transferring drive information from a plurality of harvesting vehicles to a server) and during the navigation phase (e.g., reduction of map data can speed transfer of map tiles from a server to a navigating vehicle and can also reduce costs associated with bandwidth use for such transfers). Semantic object classification types may be assigned to any type of objects or features that are expected to be encountered along a roadway. Semantic objects may further be divided into two or more logical groups. For example, in some cases, one group of semantic object types may be associated with predetermined dimensions. Such semantic objects may include certain speed limit signs, yield signs, merge signs, stop signs, traffic lights, directional arrows on a roadway, manhole covers, or any other type of object that may be associated with a standardized size. One benefit offered by such semantic objects is that very little data may be needed to represent/fully define the objects. For example, if a standardized size of a speed limit sign is known, then a harvesting vehicle may need only identify (through analysis of a captured image) the presence of a speed limit sign (a recognized type) along with an indication of a position of the detected speed limit sign (e.g., a 2D position in the captured image (or, alternatively, a 3D position in real world coordinates) of a center of the sign or a certain corner of the sign) to provide sufficient information for map generation on the server side. Where 2D image positions are transmitted to the server, a position associated with the captured image where the sign was detected may also be transmitted so the server can determine a real-world position of the sign (e.g., through structure in motion techniques using multiple captured images from one or more harvesting vehicles). Even with this limited information (requiring just a few bytes to define each detected object), the server may construct the map including a fully represented speed limit sign based on the type classification (representative of a speed limit sign) received from one or more harvesting vehicles along with the position information for the detected sign.
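As a rough sketch of the kind of compact record a harvesting vehicle might transmit for each detected semantic object, the following hypothetical schema carries only a type code, an image position, the capture location, and, where needed, a size. The field names and layout are illustrative assumptions, not the actual message format used.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SemanticDetection:
    """One detected semantic object reported by a harvesting vehicle (hypothetical schema)."""
    type_code: int                      # e.g., an enum value meaning "speed limit sign"
    image_xy: Tuple[float, float]       # 2D position of the object's center in the captured image
    capture_gps: Tuple[float, float]    # where the image was captured (latitude, longitude)
    size_2d: Optional[Tuple[float, float]] = None  # only for objects without standardized dimensions
```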
Semantic objects may also include other recognized object or feature types that are not associated with certain standardized characteristics. Such objects or features may include potholes, tar seams, light poles, non-standardized signs, curbs, trees, tree branches, or any other type of recognized object type with one or more variable characteristics (e.g., variable dimensions). In such cases, in addition to transmitting to a server an indication of the detected object or feature type (e.g., pothole, pole, etc.) and position information for the detected object or feature, a harvesting vehicle may also transmit an indication of a size of the object or feature. The size may be expressed in 2D image dimensions (e.g., with a bounding box or one or more dimension values) or real-world dimensions (determined through structure in motion calculations, based on LIDAR or RADAR system outputs, based on trained neural network outputs, etc.). Non-semantic objects or features may include any detectable objects or features that fall outside of a recognized category or type, but that still may provide valuable information in map generation. In some cases, such non-semantic features may include a detected corner of a building or a corner of a detected window of a building, a unique stone or object near a roadway, a concrete splatter in a roadway shoulder, or any other detectable object or feature. Upon detecting such an object or feature, one or more harvesting vehicles may transmit to a map generation server a location of one or more points (2D image points or 3D real world points) associated with the detected object/feature. Additionally, a compressed or simplified image segment (e.g., an image hash) may be generated for a region of the captured image including the detected object or feature. This image hash may be calculated based on a predetermined image processing algorithm and may form an effective signature for the detected non-semantic object or feature. Such a signature may be useful for navigation relative to a sparse map including the non-semantic feature or object, as a vehicle traversing the roadway may apply an algorithm similar to the algorithm used to generate the image hash in order to confirm/verify the presence in a captured image of the mapped non-semantic feature or object. Using this technique, non-semantic features may add to the richness of the sparse maps (e.g., to enhance their usefulness in navigation) without adding significant data overhead. As noted, target trajectories may be stored in the sparse map. These target trajectories (e.g., 3D splines) may represent the preferred or recommended paths for each available lane of a roadway, each valid pathway through a junction, for merges and exits, etc. In addition to target trajectories, other road features may also be detected, harvested, and incorporated in the sparse maps in the form of representative splines. Such features may include, for example, road edges, lane markings, curbs, guardrails, or any other objects or features that extend along a roadway or road segment. Generating a Sparse Map In some embodiments, a sparse map may include at least one line representation of a road surface feature extending along a road segment and a plurality of landmarks associated with the road segment. In certain aspects, the sparse map may be generated via "crowdsourcing," for example, through image analysis of a plurality of images acquired as one or more vehicles traverse the road segment.
FIG.8shows a sparse map800that one or more vehicles, e.g., vehicle200(which may be an autonomous vehicle), may access for providing autonomous vehicle navigation. Sparse map800may be stored in a memory, such as memory140or150. Such memory devices may include any types of non-transitory storage devices or computer-readable media. For example, in some embodiments, memory140or150may include hard drives, compact discs, flash memory, magnetic based memory devices, optical based memory devices, etc. In some embodiments, sparse map800may be stored in a database (e.g., map database160) that may be stored in memory140or150, or other types of storage devices. In some embodiments, sparse map800may be stored on a storage device or a non-transitory computer-readable medium provided onboard vehicle200(e.g., a storage device included in a navigation system onboard vehicle200). A processor (e.g., processing unit110) provided on vehicle200may access sparse map800stored in the storage device or computer-readable medium provided onboard vehicle200in order to generate navigational instructions for guiding the autonomous vehicle200as the vehicle traverses a road segment. Sparse map800need not be stored locally with respect to a vehicle, however. In some embodiments, sparse map800may be stored on a storage device or computer-readable medium provided on a remote server that communicates with vehicle200or a device associated with vehicle200. A processor (e.g., processing unit110) provided on vehicle200may receive data included in sparse map800from the remote server and may execute the data for guiding the autonomous driving of vehicle200. In such embodiments, the remote server may store all of sparse map800or only a portion thereof. Accordingly, the storage device or computer-readable medium provided onboard vehicle200and/or onboard one or more additional vehicles may store the remaining portion(s) of sparse map800. Furthermore, in such embodiments, sparse map800may be made accessible to a plurality of vehicles traversing various road segments (e.g., tens, hundreds, thousands, or millions of vehicles, etc.). It should be noted also that sparse map800may include multiple sub-maps. For example, in some embodiments, sparse map800may include hundreds, thousands, millions, or more, of sub-maps (e.g., map tiles) that may be used in navigating a vehicle. Such sub-maps may be referred to as local maps or map tiles, and a vehicle traveling along a roadway may access any number of local maps relevant to a location in which the vehicle is traveling. The local map sections of sparse map800may be stored with a Global Navigation Satellite System (GNSS) key as an index to the database of sparse map800. Thus, while computation of steering angles for navigating a host vehicle in the present system may be performed without reliance upon a GNSS position of the host vehicle, road features, or landmarks, such GNSS information may be used for retrieval of relevant local maps. In general, sparse map800may be generated based on data (e.g., drive information) collected from one or more vehicles as they travel along roadways. For example, using sensors aboard the one or more vehicles (e.g., cameras, speedometers, GPS, accelerometers, etc.), the trajectories that the one or more vehicles travel along a roadway may be recorded, and the polynomial representation of a preferred trajectory for vehicles making subsequent trips along the roadway may be determined based on the collected trajectories travelled by the one or more vehicles. 
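As an aside on the GNSS-keyed retrieval of local maps mentioned above, the sketch below shows one way map tiles might be looked up by quantizing a GNSS position into a tile index. The tiling granularity and key scheme are assumptions for illustration only.

```python
def tile_key(lat: float, lon: float, tile_deg: float = 0.01) -> tuple:
    """Quantize a GNSS position into a tile index used to look up local maps.

    A 0.01-degree tile is roughly 1 km on a side at mid-latitudes; the exact
    tiling scheme here is an assumption, not the one used by the disclosure.
    """
    return (int(lat // tile_deg), int(lon // tile_deg))

def local_maps_for(lat: float, lon: float, tiles: dict, radius: int = 1):
    """Return the tile the vehicle is in plus its neighbors, if present in the tile database."""
    ti, tj = tile_key(lat, lon)
    return [tiles[(i, j)]
            for i in range(ti - radius, ti + radius + 1)
            for j in range(tj - radius, tj + radius + 1)
            if (i, j) in tiles]
```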
Similarly, data collected by the one or more vehicles may aid in identifying potential landmarks along a particular roadway. Data collected from traversing vehicles may also be used to identify road profile information, such as road width profiles, road roughness profiles, traffic line spacing profiles, road conditions, etc. Using the collected information, sparse map800may be generated and distributed (e.g., for local storage or via on-the-fly data transmission) for use in navigating one or more autonomous vehicles. However, in some embodiments, map generation may not end upon initial generation of the map. As will be discussed in greater detail below, sparse map800may be continuously or periodically updated based on data collected from vehicles as those vehicles continue to traverse roadways included in sparse map800. Data recorded in sparse map800may include position information based on Global Positioning System (GPS) data. For example, location information may be included in sparse map800for various map elements, including, for example, landmark locations, road profile locations, etc. Locations for map elements included in sparse map800may be obtained using GPS data collected from vehicles traversing a roadway. For example, a vehicle passing an identified landmark may determine a location of the identified landmark using GPS position information associated with the vehicle and a determination of a location of the identified landmark relative to the vehicle (e.g., based on image analysis of data collected from one or more cameras on board the vehicle). Such location determinations of an identified landmark (or any other feature included in sparse map800) may be repeated as additional vehicles pass the location of the identified landmark. Some or all of the additional location determinations may be used to refine the location information stored in sparse map800relative to the identified landmark. For example, in some embodiments, multiple position measurements relative to a particular feature stored in sparse map800may be averaged together. Any other mathematical operations, however, may also be used to refine a stored location of a map element based on a plurality of determined locations for the map element. In a particular example, harvesting vehicles may traverse a particular road segment. Each harvesting vehicle captures images of their respective environments. The images may be collected at any suitable frame capture rate (e.g., 9 Hz, etc.). Image analysis processor(s) aboard each harvesting vehicle analyze the captured images to detect the presence of semantic and/or non-semantic features/objects. At a high level, the harvesting vehicles transmit to a mapping-server indications of detections of the semantic and/or non-semantic objects/features along with positions associated with those objects/features. In more detail, type indicators, dimension indicators, etc. may be transmitted together with the position information. The position information may include any suitable information for enabling the mapping server to aggregate the detected objects/features into a sparse map useful in navigation. In some cases, the position information may include one or more 2D image positions (e.g., X-Y pixel locations) in a captured image where the semantic or non-semantic features/objects were detected. Such image positions may correspond to a center of the feature/object, a corner, etc. 
In this scenario, to aid the mapping server in reconstructing the drive information and aligning the drive information from multiple harvesting vehicles, each harvesting vehicle may also provide the server with a location (e.g., a GPS location) where each image was captured. In other cases, the harvesting vehicle may provide to the server one or more 3D real world points associated with the detected objects/features. Such 3D points may be relative to a predetermined origin (such as an origin of a drive segment) and may be determined through any suitable technique. In some cases, a structure in motion technique may be used to determine the 3D real world position of a detected object/feature. For example, a certain object such as a particular speed limit sign may be detected in two or more captured images. Using information such as the known ego motion (speed, trajectory, GPS position, etc.) of the harvesting vehicle between the captured images, along with observed changes of the speed limit sign in the captured images (change in X-Y pixel location, change in size, etc.), the real-world position of one or more points associated with the speed limit sign may be determined and passed along to the mapping server. Such an approach is optional, as it requires more computation on the part of the harvesting vehicle systems. The sparse map of the disclosed embodiments may enable autonomous navigation of a vehicle using relatively small amounts of stored data. In some embodiments, sparse map800may have a data density (e.g., including data representing the target trajectories, landmarks, and any other stored road features) of less than 2 MB per kilometer of roads, less than 1 MB per kilometer of roads, less than 500 kB per kilometer of roads, or less than 100 kB per kilometer of roads. In some embodiments, the data density of sparse map800may be less than 10 kB per kilometer of roads or even less than 2 kB per kilometer of roads (e.g., 1.6 kB per kilometer), or no more than 10 kB per kilometer of roads, or no more than 20 kB per kilometer of roads. In some embodiments, most, if not all, of the roadways of the United States may be navigated autonomously using a sparse map having a total of 4 GB or less of data. These data density values may represent an average over an entire sparse map800, over a local map within sparse map800, and/or over a particular road segment within sparse map800. As noted, sparse map800may include representations of a plurality of target trajectories810for guiding autonomous driving or navigation along a road segment. Such target trajectories may be stored as three-dimensional splines. The target trajectories stored in sparse map800may be determined based on two or more reconstructed trajectories of prior traversals of vehicles along a particular road segment, for example. A road segment may be associated with a single target trajectory or multiple target trajectories. For example, on a two lane road, a first target trajectory may be stored to represent an intended path of travel along the road in a first direction, and a second target trajectory may be stored to represent an intended path of travel along the road in another direction (e.g., opposite to the first direction). Additional target trajectories may be stored with respect to a particular road segment. For example, on a multi-lane road one or more target trajectories may be stored representing intended paths of travel for vehicles in one or more lanes associated with the multi-lane road. 
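As an illustration of how a target trajectory stored as a three-dimensional spline might be consumed, the sketch below fits a cubic spline through a handful of placeholder knot points parameterized by arc length and queries a preferred position and travel direction a short distance ahead. The knot values are invented for the example and do not come from the disclosure.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder knot points for one target trajectory, parameterized by arc length s (meters).
s_knots = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
xyz_knots = np.array([[0.0, 0.0, 0.0],
                      [49.9, 2.1, 0.3],
                      [99.5, 8.4, 0.7],
                      [148.6, 18.9, 1.0],
                      [196.9, 33.2, 1.2]])
target = CubicSpline(s_knots, xyz_knots, axis=0)

# Query the preferred position (and, via the derivative, heading) a short distance ahead.
position_ahead = target(72.5)       # x, y, z at s = 72.5 m
tangent_ahead = target(72.5, 1)     # first derivative -> direction of travel
```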
In some embodiments, each lane of a multi-lane road may be associated with its own target trajectory. In other embodiments, there may be fewer target trajectories stored than lanes present on a multi-lane road. In such cases, a vehicle navigating the multi-lane road may use any of the stored target trajectories to guide its navigation by taking into account an amount of lane offset from a lane for which a target trajectory is stored (e.g., if a vehicle is traveling in the left-most lane of a three lane highway, and a target trajectory is stored only for the middle lane of the highway, the vehicle may navigate using the target trajectory of the middle lane by accounting for the amount of lane offset between the middle lane and the left-most lane when generating navigational instructions). In some embodiments, the target trajectory may represent an ideal path that a vehicle should take as the vehicle travels. The target trajectory may be located, for example, at an approximate center of a lane of travel. In other cases, the target trajectory may be located elsewhere relative to a road segment. For example, a target trajectory may approximately coincide with a center of a road, an edge of a road, or an edge of a lane, etc. In such cases, navigation based on the target trajectory may include a determined amount of offset to be maintained relative to the location of the target trajectory. Moreover, in some embodiments, the determined amount of offset to be maintained relative to the location of the target trajectory may differ based on a type of vehicle (e.g., a passenger vehicle including two axles may have a different offset from a truck including more than two axles along at least a portion of the target trajectory). Sparse map800may also include data relating to a plurality of predetermined landmarks820associated with particular road segments, local maps, etc. As discussed in greater detail below, these landmarks may be used in navigation of the autonomous vehicle. For example, in some embodiments, the landmarks may be used to determine a current position of the vehicle relative to a stored target trajectory. With this position information, the autonomous vehicle may be able to adjust a heading direction to match a direction of the target trajectory at the determined location. The plurality of landmarks820may be identified and stored in sparse map800at any suitable spacing. In some embodiments, landmarks may be stored at relatively high densities (e.g., every few meters or more). In some embodiments, however, significantly larger landmark spacing values may be employed. For example, in sparse map800, identified (or recognized) landmarks may be spaced apart by 10 meters, 20 meters, 50 meters, 100 meters, 1 kilometer, or 2 kilometers. In some cases, the identified landmarks may be located at distances of even more than 2 kilometers apart. Between landmarks, and therefore between determinations of vehicle position relative to a target trajectory, the vehicle may navigate based on dead reckoning in which the vehicle uses sensors to determine its ego motion and estimate its position relative to the target trajectory. Because errors may accumulate during navigation by dead reckoning, over time the position determinations relative to the target trajectory may become increasingly less accurate. The vehicle may use landmarks occurring in sparse map800(and their known locations) to remove the dead reckoning-induced errors in position determination.
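Returning briefly to the lane-offset idea described at the start of this passage, a minimal sketch is given below: given a stored target trajectory for one lane (as an array of points) and a known lateral offset to the vehicle's actual lane, the trajectory can be shifted along its local normals before navigational instructions are generated. The function assumes a planar (x, y) representation with distinct consecutive points; it is illustrative only.

```python
import numpy as np

def offset_trajectory(xy: np.ndarray, lateral_offset_m: float) -> np.ndarray:
    """Shift a target trajectory sideways by a constant lane offset.

    xy is an (N, 2) array of points along the stored target trajectory (e.g., the
    middle lane); a positive offset shifts toward the left of the travel direction.
    """
    d = np.gradient(xy, axis=0)                       # local direction of travel
    d /= np.linalg.norm(d, axis=1, keepdims=True)     # unit tangents
    normals = np.stack([-d[:, 1], d[:, 0]], axis=1)   # left-hand normals
    return xy + lateral_offset_m * normals
```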
In this way, the identified landmarks included in sparse map800may serve as navigational anchors from which an accurate position of the vehicle relative to a target trajectory may be determined. Because a certain amount of error may be acceptable in position location, an identified landmark need not always be available to an autonomous vehicle. Rather, suitable navigation may be possible even based on landmark spacings, as noted above, of 10 meters, 20 meters, 50 meters, 100 meters, 500 meters, 1 kilometer, 2 kilometers, or more. In some embodiments, a density of 1 identified landmark every 1 km of road may be sufficient to maintain a longitudinal position determination accuracy within 1 m. Thus, not every potential landmark appearing along a road segment need be stored in sparse map800. Moreover, in some embodiments, lane markings may be used for localization of the vehicle during landmark spacings. By using lane markings during landmark spacings, the accumulation of errors during navigation by dead reckoning may be minimized. In addition to target trajectories and identified landmarks, sparse map800may include information relating to various other road features. For example,FIG.9Aillustrates a representation of curves along a particular road segment that may be stored in sparse map800. In some embodiments, a single lane of a road may be modeled by a three-dimensional polynomial description of left and right sides of the road. Such polynomials representing left and right sides of a single lane are shown inFIG.9A. Regardless of how many lanes a road may have, the road may be represented using polynomials in a way similar to that illustrated inFIG.9A. For example, left and right sides of a multi-lane road may be represented by polynomials similar to those shown inFIG.9A, and intermediate lane markings included on a multi-lane road (e.g., dashed markings representing lane boundaries, solid yellow lines representing boundaries between lanes traveling in different directions, etc.) may also be represented using polynomials such as those shown inFIG.9A. As shown inFIG.9A, a lane900may be represented using polynomials (e.g., a first order, second order, third order, or any suitable order polynomials). For illustration, lane900is shown as a two-dimensional lane and the polynomials are shown as two-dimensional polynomials. As depicted inFIG.9A, lane900includes a left side910and a right side920. In some embodiments, more than one polynomial may be used to represent a location of each side of the road or lane boundary. For example, each of left side910and right side920may be represented by a plurality of polynomials of any suitable length. In some cases, the polynomials may have a length of about 100 m, although other lengths greater than or less than 100 m may also be used. Additionally, the polynomials can overlap with one another in order to facilitate seamless transitions in navigating based on subsequently encountered polynomials as a host vehicle travels along a roadway. For example, each of left side910and right side920may be represented by a plurality of third order polynomials separated into segments of about 100 meters in length (an example of the first predetermined range), and overlapping each other by about 50 meters. The polynomials representing the left side910and the right side920may or may not have the same order. For example, in some embodiments, some polynomials may be second order polynomials, some may be third order polynomials, and some may be fourth order polynomials. 
In the example shown inFIG.9A, left side910of lane900is represented by two groups of third order polynomials. The first group includes polynomial segments911,912, and913. The second group includes polynomial segments914,915, and916. The two groups, while substantially parallel to each other, follow the locations of their respective sides of the road. Polynomial segments911,912,913,914,915, and916have a length of about 100 meters and overlap adjacent segments in the series by about 50 meters. As noted previously, however, polynomials of different lengths and different overlap amounts may also be used. For example, the polynomials may have lengths of 500 m, 1 km, or more, and the overlap amount may vary from 0 to 50 m, 50 m to 100 m, or greater than 100 m. Additionally, whileFIG.9Ais shown as representing polynomials extending in 2D space (e.g., on the surface of the paper), it is to be understood that these polynomials may represent curves extending in three dimensions (e.g., including a height component) to represent elevation changes in a road segment in addition to X-Y curvature. In the example shown inFIG.9A, right side920of lane900is further represented by a first group having polynomial segments921,922, and923and a second group having polynomial segments924,925, and926. Returning to the target trajectories of sparse map800,FIG.9Bshows a three-dimensional polynomial representing a target trajectory for a vehicle traveling along a particular road segment. The target trajectory represents not only the X-Y path that a host vehicle should travel along a particular road segment, but also the elevation change that the host vehicle will experience when traveling along the road segment. Thus, each target trajectory in sparse map800may be represented by one or more three-dimensional polynomials, like the three-dimensional polynomial950shown inFIG.9B. Sparse map800may include a plurality of trajectories (e.g., millions or billions or more to represent trajectories of vehicles along various road segments along roadways throughout the world). In some embodiments, each target trajectory may correspond to a spline connecting three-dimensional polynomial segments. Regarding the data footprint of polynomial curves stored in sparse map800, in some embodiments, each third degree polynomial may be represented by four parameters, each requiring four bytes of data. Suitable representations may be obtained with third degree polynomials requiring about 192 bytes of data for every 100 m. This may translate to approximately 200 kB per hour in data usage/transfer requirements for a host vehicle traveling approximately 100 km/hr. Sparse map800may describe the lanes network using a combination of geometry descriptors and meta-data. The geometry may be described by polynomials or splines as described above. The meta-data may describe the number of lanes, special characteristics (such as a car pool lane), and possibly other sparse labels. The total footprint of such indicators may be negligible. Accordingly, a sparse map according to embodiments of the present disclosure may include at least one line representation of a road surface feature extending along the road segment, each line representation representing a path along the road segment substantially corresponding with the road surface feature. In some embodiments, as discussed above, the at least one line representation of the road surface feature may include a spline, a polynomial representation, or a curve.
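The data-rate figure quoted above can be checked with a few lines of arithmetic; the values are taken from the text, and the calculation is only a sanity check.

```python
# Rough data-rate check for the polynomial road model (figures from the text above).
bytes_per_100m = 192                                 # third-degree polynomial representation per 100 m
speed_kmh = 100                                      # host vehicle speed
bytes_per_hour = bytes_per_100m * 10 * speed_kmh     # 10 segments of 100 m per km
print(bytes_per_hour / 1024)                         # ~187.5 kB/h, i.e. roughly 200 kB per hour
```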
Furthermore, in some embodiments, the road surface feature may include at least one of a road edge or a lane marking. Moreover, as discussed below with respect to “crowdsourcing,” the road surface feature may be identified through image analysis of a plurality of images acquired as one or more vehicles traverse the road segment. As previously noted, sparse map800may include a plurality of predetermined landmarks associated with a road segment. Rather than storing actual images of the landmarks and relying, for example, on image recognition analysis based on captured images and stored images, each landmark in sparse map800may be represented and recognized using less data than a stored, actual image would require. Data representing landmarks may still include sufficient information for describing or identifying the landmarks along a road. Storing data describing characteristics of landmarks, rather than the actual images of landmarks, may reduce the size of sparse map800. FIG.10illustrates examples of types of landmarks that may be represented in sparse map800. The landmarks may include any visible and identifiable objects along a road segment. The landmarks may be selected such that they are fixed and do not change often with respect to their locations and/or content. The landmarks included in sparse map800may be useful in determining a location of vehicle200with respect to a target trajectory as the vehicle traverses a particular road segment. Examples of landmarks may include traffic signs, directional signs, general signs (e.g., rectangular signs), roadside fixtures (e.g., lampposts, reflectors, etc.), and any other suitable category. In some embodiments, lane marks on the road, may also be included as landmarks in sparse map800. Examples of landmarks shown inFIG.10include traffic signs, directional signs, roadside fixtures, and general signs. Traffic signs may include, for example, speed limit signs (e.g., speed limit sign1000), yield signs (e.g., yield sign1005), route number signs (e.g., route number sign1010), traffic light signs (e.g., traffic light sign1015), stop signs (e.g., stop sign1020). Directional signs may include a sign that includes one or more arrows indicating one or more directions to different places. For example, directional signs may include a highway sign1025having arrows for directing vehicles to different roads or places, an exit sign1030having an arrow directing vehicles off a road, etc. Accordingly, at least one of the plurality of landmarks may include a road sign. General signs may be unrelated to traffic. For example, general signs may include billboards used for advertisement, or a welcome board adjacent a border between two countries, states, counties, cities, or towns.FIG.10shows a general sign1040(“Joe's Restaurant”). Although general sign1040may have a rectangular shape, as shown inFIG.10, general sign1040may have other shapes, such as square, circle, triangle, etc. Landmarks may also include roadside fixtures. Roadside fixtures may be objects that are not signs, and may not be related to traffic or directions. For example, roadside fixtures may include lampposts (e.g., lamppost1035), power line posts, traffic light posts, etc. Landmarks may also include beacons that may be specifically designed for usage in an autonomous vehicle navigation system. For example, such beacons may include stand-alone structures placed at predetermined intervals to aid in navigating a host vehicle. 
Such beacons may also include visual/graphical information added to existing road signs (e.g., icons, emblems, bar codes, etc.) that may be identified or recognized by a vehicle traveling along a road segment. Such beacons may also include electronic components. In such embodiments, electronic beacons (e.g., RFID tags, etc.) may be used to transmit non-visual information to a host vehicle. Such information may include, for example, landmark identification and/or landmark location information that a host vehicle may use in determining its position along a target trajectory. In some embodiments, the landmarks included in sparse map800may be represented by a data object of a predetermined size. The data representing a landmark may include any suitable parameters for identifying a particular landmark. For example, in some embodiments, landmarks stored in sparse map800may include parameters such as a physical size of the landmark (e.g., to support estimation of distance to the landmark based on a known size/scale), a distance to a previous landmark, lateral offset, height, a type code (e.g., a landmark type—what type of directional sign, traffic sign, etc.), a GPS coordinate (e.g., to support global localization), and any other suitable parameters. Each parameter may be associated with a data size. For example, a landmark size may be stored using 8 bytes of data. A distance to a previous landmark, a lateral offset, and height may be specified using 12 bytes of data. A type code associated with a landmark such as a directional sign or a traffic sign may require about 2 bytes of data. For general signs, an image signature enabling identification of the general sign may be stored using 50 bytes of data storage. The landmark GPS position may be associated with 16 bytes of data storage. These data sizes for each parameter are examples only, and other data sizes may also be used. Representing landmarks in sparse map800in this manner may offer a lean solution for efficiently representing landmarks in the database. In some embodiments, objects may be referred to as standard semantic objects or non-standard semantic objects. A standard semantic object may include any class of object for which there's a standardized set of characteristics (e.g., speed limit signs, warning signs, directional signs, traffic lights, etc. having known dimensions or other characteristics). A non-standard semantic object may include any object that is not associated with a standardized set of characteristics (e.g., general advertising signs, signs identifying business establishments, potholes, trees, etc. that may have variable dimensions). Each non-standard semantic object may be represented with 38 bytes of data (e.g., 8 bytes for size; 12 bytes for distance to previous landmark, lateral offset, and height; 2 bytes for a type code; and 16 bytes for position coordinates). Standard semantic objects may be represented using even less data, as size information may not be needed by the mapping server to fully represent the object in the sparse map. Sparse map800may use a tag system to represent landmark types. In some cases, each traffic sign or directional sign may be associated with its own tag, which may be stored in the database as part of the landmark identification. For example, the database may include on the order of 1000 different tags to represent various traffic signs and on the order of about 10000 different tags to represent directional signs. 
Of course, any suitable number of tags may be used, and additional tags may be created as needed. General purpose signs may be represented in some embodiments using less than about 100 bytes (e.g., about 86 bytes including 8 bytes for size; 12 bytes for distance to previous landmark, lateral offset, and height; 50 bytes for an image signature; and 16 bytes for GPS coordinates). Thus, for semantic road signs not requiring an image signature, the data density impact to sparse map800, even at relatively high landmark densities of about 1 per 50 m, may be on the order of about 760 bytes per kilometer (e.g., 20 landmarks per km×38 bytes per landmark=760 bytes). Even for general purpose signs including an image signature component, the data density impact is about 1.72 kB per km (e.g., 20 landmarks per km×86 bytes per landmark=1,720 bytes). For semantic road signs, this equates to about 76 kB per hour of data usage for a vehicle traveling 100 km/hr. For general purpose signs, this equates to about 170 kB per hour for a vehicle traveling 100 km/hr. It should be noted that in some environments (e.g., urban environments) there may be a much higher density of detected objects available for inclusion in the sparse map (perhaps more than one per meter). In some embodiments, a generally rectangular object, such as a rectangular sign, may be represented in sparse map800by no more than 100 bytes of data. The representation of the generally rectangular object (e.g., general sign1040) in sparse map800may include a condensed image signature or image hash (e.g., condensed image signature1045) associated with the generally rectangular object. This condensed image signature/image hash may be determined using any suitable image hashing algorithm and may be used, for example, to aid in identification of a general purpose sign, for example, as a recognized landmark. Such a condensed image signature (e.g., image information derived from actual image data representing an object) may avoid a need for storage of an actual image of an object or a need for comparative image analysis performed on actual images in order to recognize landmarks. Referring toFIG.10, sparse map800may include or store a condensed image signature1045associated with a general sign1040, rather than an actual image of general sign1040. For example, after an image capture device (e.g., image capture device122,124, or126) captures an image of general sign1040, a processor (e.g., image processor190or any other processor that can process images either aboard or remotely located relative to a host vehicle) may perform an image analysis to extract/create condensed image signature1045that includes a unique signature or pattern associated with general sign1040. In one embodiment, condensed image signature1045may include a shape, color pattern, a brightness pattern, or any other feature that may be extracted from the image of general sign1040for describing general sign1040. For example, inFIG.10, the circles, triangles, and stars shown in condensed image signature1045may represent areas of different colors. The pattern represented by the circles, triangles, and stars may be stored in sparse map800, e.g., within the 50 bytes designated to include an image signature. Notably, the circles, triangles, and stars are not necessarily meant to indicate that such shapes are stored as part of the image signature. 
Rather, these shapes are meant to conceptually represent recognizable areas having discernible color differences, textual areas, graphical shapes, or other variations in characteristics that may be associated with a general purpose sign. Such condensed image signatures can be used to identify a landmark in the form of a general sign. For example, the condensed image signature can be used to perform a same-not-same analysis based on a comparison of a stored condensed image signature with image data captured, for example, using a camera onboard an autonomous vehicle. Accordingly, the plurality of landmarks may be identified through image analysis of the plurality of images acquired as one or more vehicles traverse the road segment. As explained below with respect to "crowdsourcing," in some embodiments, the image analysis to identify the plurality of landmarks may include accepting potential landmarks when a ratio of images in which the landmark does appear to images in which the landmark does not appear exceeds a threshold. Furthermore, in some embodiments, the image analysis to identify the plurality of landmarks may include rejecting potential landmarks when a ratio of images in which the landmark does not appear to images in which the landmark does appear exceeds a threshold. Returning to the target trajectories a host vehicle may use to navigate a particular road segment,FIG.11Ashows polynomial representations of trajectories captured during a process of building or maintaining sparse map800. A polynomial representation of a target trajectory included in sparse map800may be determined based on two or more reconstructed trajectories of prior traversals of vehicles along the same road segment. In some embodiments, the polynomial representation of the target trajectory included in sparse map800may be an aggregation of two or more reconstructed trajectories of prior traversals of vehicles along the same road segment. In some embodiments, the polynomial representation of the target trajectory included in sparse map800may be an average of the two or more reconstructed trajectories of prior traversals of vehicles along the same road segment. Other mathematical operations may also be used to construct a target trajectory along a road path based on reconstructed trajectories collected from vehicles traversing along a road segment. As shown inFIG.11A, a road segment1100may be travelled by a number of vehicles200at different times. Each vehicle200may collect data relating to a path that the vehicle took along the road segment. The path traveled by a particular vehicle may be determined based on camera data, accelerometer information, speed sensor information, and/or GPS information, among other potential sources. Such data may be used to reconstruct trajectories of vehicles traveling along the road segment, and based on these reconstructed trajectories, a target trajectory (or multiple target trajectories) may be determined for the particular road segment. Such target trajectories may represent a preferred path of a host vehicle (e.g., guided by an autonomous navigation system) as the vehicle travels along the road segment.
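Returning briefly to the landmark acceptance test mentioned above, one simple way the appearance-ratio check could be expressed is sketched below; the specific thresholds are illustrative assumptions rather than values from the disclosure.

```python
def classify_candidate(appearances: int, misses: int,
                       accept_ratio: float = 5.0, reject_ratio: float = 5.0) -> str:
    """Accept or reject a potential landmark from crowdsourced observations.

    A candidate is accepted when images containing it sufficiently outnumber images
    that should have contained it but did not, and rejected in the opposite case.
    """
    if appearances and (misses == 0 or appearances / misses >= accept_ratio):
        return "accept"
    if misses and (appearances == 0 or misses / appearances >= reject_ratio):
        return "reject"
    return "undecided"
```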
In the example shown inFIG.11A, a first reconstructed trajectory1101may be determined based on data received from a first vehicle traversing road segment1100at a first time period (e.g., day 1), a second reconstructed trajectory1102may be obtained from a second vehicle traversing road segment1100at a second time period (e.g., day 2), and a third reconstructed trajectory1103may be obtained from a third vehicle traversing road segment1100at a third time period (e.g., day 3). Each trajectory1101,1102, and1103may be represented by a polynomial, such as a three-dimensional polynomial. It should be noted that in some embodiments, any of the reconstructed trajectories may be assembled onboard the vehicles traversing road segment1100. Additionally, or alternatively, such reconstructed trajectories may be determined on a server side based on information received from vehicles traversing road segment1100. For example, in some embodiments, vehicles200may transmit data to one or more servers relating to their motion along road segment1100(e.g., steering angle, heading, time, position, speed, sensed road geometry, and/or sensed landmarks, among other things). The server may reconstruct trajectories for vehicles200based on the received data. The server may also generate a target trajectory for guiding navigation of autonomous vehicles that will travel along the same road segment1100at a later time based on the first, second, and third trajectories1101,1102, and1103. While a target trajectory may be associated with a single prior traversal of a road segment, in some embodiments, each target trajectory included in sparse map800may be determined based on two or more reconstructed trajectories of vehicles traversing the same road segment. InFIG.11A, the target trajectory is represented by1110. In some embodiments, the target trajectory1110may be generated based on an average of the first, second, and third trajectories1101,1102, and1103. In some embodiments, the target trajectory1110included in sparse map800may be an aggregation (e.g., a weighted combination) of two or more reconstructed trajectories. At the mapping server, the server may receive actual trajectories for a particular road segment from multiple harvesting vehicles traversing the road segment. To generate a target trajectory for each valid path along the road segment (e.g., each lane, each drive direction, each path through a junction, etc.), the received actual trajectories may be aligned. The alignment process may include using detected objects/features identified along the road segment along with harvested positions of those detected objects/features to correlate the actual, harvested trajectories with one another. Once aligned, an average or "best fit" target trajectory for each available lane, etc. may be determined based on the aggregated, correlated/aligned actual trajectories. FIGS.11B and11Cfurther illustrate the concept of target trajectories associated with road segments present within a geographic region1111. As shown inFIG.11B, a first road segment1120within geographic region1111may include a multilane road, which includes two lanes1122designated for vehicle travel in a first direction and two additional lanes1124designated for vehicle travel in a second direction opposite to the first direction. Lanes1122and lanes1124may be separated by a double yellow line1123. Geographic region1111may also include a branching road segment1130that intersects with road segment1120.
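A minimal sketch of the averaging step described above is given below, assuming the reconstructed trajectories have already been aligned to a common reference frame: each drive is resampled at evenly spaced fractions of its arc length and the resampled points are averaged. A weighted combination, as the text notes, could equally be used.

```python
import numpy as np

def average_target_trajectory(trajectories, samples: int = 200) -> np.ndarray:
    """Aggregate aligned reconstructed trajectories into one target trajectory.

    Each trajectory is an (N_i, 2) array of distinct points already aligned to a
    common frame; every trajectory is resampled at `samples` evenly spaced
    arc-length fractions and the resampled points are averaged.
    """
    resampled = []
    for xy in trajectories:
        d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(xy, axis=0), axis=1))]
        s = np.linspace(0.0, d[-1], samples)
        resampled.append(np.column_stack([np.interp(s, d, xy[:, 0]),
                                          np.interp(s, d, xy[:, 1])]))
    return np.mean(resampled, axis=0)
```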
Road segment1130may include a two-lane road, each lane being designated for a different direction of travel. Geographic region1111may also include other road features, such as a stop line1132, a stop sign1134, a speed limit sign1136, and a hazard sign1138. As shown inFIG.11C, sparse map800may include a local map1140including a road model for assisting with autonomous navigation of vehicles within geographic region1111. For example, local map1140may include target trajectories for one or more lanes associated with road segments1120and/or1130within geographic region1111. For example, local map1140may include target trajectories1141and/or1142that an autonomous vehicle may access or rely upon when traversing lanes1122. Similarly, local map1140may include target trajectories1143and/or1144that an autonomous vehicle may access or rely upon when traversing lanes1124. Further, local map1140may include target trajectories1145and/or1146that an autonomous vehicle may access or rely upon when traversing road segment1130. Target trajectory1147represents a preferred path an autonomous vehicle should follow when transitioning from lanes1122(and specifically, relative to target trajectory1141associated with a right-most lane of lanes1122) to road segment1130(and specifically, relative to a target trajectory1145associated with a first side of road segment1130). Similarly, target trajectory1148represents a preferred path an autonomous vehicle should follow when transitioning from road segment1130(and specifically, relative to target trajectory1146) to a portion of lanes1124(and specifically, as shown, relative to a target trajectory1143associated with a left lane of lanes1124). Sparse map800may also include representations of other road-related features associated with geographic region1111. For example, sparse map800may also include representations of one or more landmarks identified in geographic region1111. Such landmarks may include a first landmark1150associated with stop line1132, a second landmark1152associated with stop sign1134, a third landmark1154associated with speed limit sign1136, and a fourth landmark1156associated with hazard sign1138. Such landmarks may be used, for example, to assist an autonomous vehicle in determining its current location relative to any of the shown target trajectories, such that the vehicle may adjust its heading to match a direction of the target trajectory at the determined location. In some embodiments, sparse map800may also include road signature profiles. Such road signature profiles may be associated with any discernible/measurable variation in at least one parameter associated with a road. For example, in some cases, such profiles may be associated with variations in road surface information such as variations in surface roughness of a particular road segment, variations in road width over a particular road segment, variations in distances between dashed lines painted along a particular road segment, variations in road curvature along a particular road segment, etc.FIG.11Dshows an example of a road signature profile1160. While profile1160may represent any of the parameters mentioned above, or others, in one example, profile1160may represent a measure of road surface roughness, as obtained, for example, by monitoring one or more sensors providing outputs indicative of an amount of suspension displacement as a vehicle travels a particular road segment.
Alternatively or concurrently, profile1160may represent variation in road width, as determined based on image data obtained via a camera onboard a vehicle traveling a particular road segment. Such profiles may be useful, for example, in determining a particular location of an autonomous vehicle relative to a particular target trajectory. That is, as it traverses a road segment, an autonomous vehicle may measure a profile associated with one or more parameters associated with the road segment. If the measured profile can be correlated/matched with a predetermined profile that plots the parameter variation with respect to position along the road segment, then the measured and predetermined profiles may be used (e.g., by overlaying corresponding sections of the measured and predetermined profiles) in order to determine a current position along the road segment and, therefore, a current position relative to a target trajectory for the road segment. In some embodiments, sparse map800may include different trajectories based on different characteristics associated with a user of autonomous vehicles, environmental conditions, and/or other parameters relating to driving. For example, in some embodiments, different trajectories may be generated based on different user preferences and/or profiles. Sparse map800including such different trajectories may be provided to different autonomous vehicles of different users. For example, some users may prefer to avoid toll roads, while others may prefer to take the shortest or fastest routes, regardless of whether there is a toll road on the route. The disclosed systems may generate different sparse maps with different trajectories based on such different user preferences or profiles. As another example, some users may prefer to travel in a fast moving lane, while others may prefer to maintain a position in the central lane at all times. Different trajectories may be generated and included in sparse map800based on different environmental conditions, such as day and night, snow, rain, fog, etc. Autonomous vehicles driving under different environmental conditions may be provided with sparse map800generated based on such different environmental conditions. In some embodiments, cameras provided on autonomous vehicles may detect the environmental conditions, and may provide such information back to a server that generates and provides sparse maps. For example, the server may generate or update an already generated sparse map800to include trajectories that may be more suitable or safer for autonomous driving under the detected environmental conditions. The update of sparse map800based on environmental conditions may be performed dynamically as the autonomous vehicles are traveling along roads. Other different parameters relating to driving may also be used as a basis for generating and providing different sparse maps to different autonomous vehicles. For example, when an autonomous vehicle is traveling at a high speed, turns may be tighter. Trajectories associated with specific lanes, rather than roads, may be included in sparse map800such that the autonomous vehicle may maintain within a specific lane as the vehicle follows a specific trajectory. When an image captured by a camera onboard the autonomous vehicle indicates that the vehicle has drifted outside of the lane (e.g., crossed the lane mark), an action may be triggered within the vehicle to bring the vehicle back to the designated lane according to the specific trajectory. 
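The profile-matching step described above can be illustrated with a simple sliding normalized-correlation search over the stored profile; sampling-interval alignment and outlier handling are omitted, and the function below is only a sketch.

```python
import numpy as np

def locate_along_profile(measured: np.ndarray, stored: np.ndarray) -> int:
    """Find where a short measured road-signature profile best matches the stored one.

    Both profiles are 1-D arrays sampled at the same spatial interval (e.g., suspension
    displacement vs. distance). The returned index is the offset into the stored
    profile, which maps to a longitudinal position along the road segment.
    """
    m = (measured - measured.mean()) / (measured.std() + 1e-9)
    best_offset, best_score = 0, -np.inf
    for offset in range(len(stored) - len(measured) + 1):
        window = stored[offset:offset + len(measured)]
        w = (window - window.mean()) / (window.std() + 1e-9)
        score = float(np.dot(m, w))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset
```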
Crowdsourcing a Sparse Map The disclosed sparse maps may be efficiently (and passively) generated through the power of crowdsourcing. For example, any private or commercial vehicle equipped with a camera (e.g., a simple, low resolution camera regularly included as OEM equipment on today's vehicles) and an appropriate image analysis processor can serve as a harvesting vehicle. No special equipment (e.g., high definition imaging and/or positioning systems) is required. As a result of the disclosed crowdsourcing technique, the generated sparse maps may be extremely accurate and may include extremely refined position information (enabling navigation error limits of 10 cm or less) without requiring any specialized imaging or sensing equipment as input to the map generation process. Crowdsourcing also enables much more rapid (and inexpensive) updates to the generated maps, as new drive information is continuously available to the mapping server system from any roads traversed by private or commercial vehicles minimally equipped to also serve as harvesting vehicles. There is no need for designated vehicles equipped with high-definition imaging and mapping sensors. Therefore, the expense associated with building such specialized vehicles can be avoided. Further, updates to the presently disclosed sparse maps may be made much more rapidly than systems that rely upon dedicated, specialized mapping vehicles (which by virtue of their expense and special equipment are typically limited to a fleet of specialized vehicles of far lower numbers than the number of private or commercial vehicles already available for performing the disclosed harvesting techniques). The disclosed sparse maps generated through crowdsourcing may be extremely accurate because they may be generated based on many inputs from multiple (10s, hundreds, millions, etc.) of harvesting vehicles that have collected drive information along a particular road segment. For example, every harvesting vehicle that drives along a particular road segment may record its actual trajectory and may determine position information relative to detected objects/features along the road segment. This information is passed along from multiple harvesting vehicles to a server. The actual trajectories are aggregated to generate a refined, target trajectory for each valid drive path along the road segment. Additionally, the position information collected from the multiple harvesting vehicles for each of the detected objects/features along the road segment (semantic or non-semantic) can also be aggregated. As a result, the mapped position of each detected object/feature may constitute an average of hundreds, thousands, or millions of individually determined positions for each detected object/feature. Such a technique may yield extremely accurate mapped positions for the detected objects/features. In some embodiments, the disclosed systems and methods may generate a sparse map for autonomous vehicle navigation. For example, disclosed systems and methods may use crowdsourced data for generation of a sparse map that one or more autonomous vehicles may use to navigate along a system of roads. As used herein, "crowdsourcing" means that data are received from various vehicles (e.g., autonomous vehicles) travelling on a road segment at different times, and such data are used to generate and/or update the road model, including sparse map tiles.
The model or any of its sparse map tiles may, in turn, be transmitted to the vehicles or other vehicles later travelling along the road segment for assisting autonomous vehicle navigation. The road model may include a plurality of target trajectories representing preferred trajectories that autonomous vehicles should follow as they traverse a road segment. The target trajectories may be the same as a reconstructed actual trajectory collected from a vehicle traversing a road segment, which may be transmitted from the vehicle to a server. In some embodiments, the target trajectories may be different from actual trajectories that one or more vehicles previously took when traversing a road segment. The target trajectories may be generated based on actual trajectories (e.g., through averaging or any other suitable operation). The vehicle trajectory data that a vehicle may upload to a server may correspond with the actual reconstructed trajectory for the vehicle or may correspond to a recommended trajectory, which may be based on or related to the actual reconstructed trajectory of the vehicle, but may differ from the actual reconstructed trajectory. For example, vehicles may modify their actual, reconstructed trajectories and submit (e.g., recommend) to the server the modified actual trajectories. The road model may use the recommended, modified trajectories as target trajectories for autonomous navigation of other vehicles. In addition to trajectory information, other information for potential use in building a sparse data map800may include information relating to potential landmark candidates. For example, through crowd sourcing of information, the disclosed systems and methods may identify potential landmarks in an environment and refine landmark positions. The landmarks may be used by a navigation system of autonomous vehicles to determine and/or adjust the position of the vehicle along the target trajectories. The reconstructed trajectories that a vehicle may generate as the vehicle travels along a road may be obtained by any suitable method. In some embodiments, the reconstructed trajectories may be developed by stitching together segments of motion for the vehicle, using, e.g., ego motion estimation (e.g., three dimensional translation and three dimensional rotation of the camera, and hence the body of the vehicle). The rotation and translation estimation may be determined based on analysis of images captured by one or more image capture devices along with information from other sensors or devices, such as inertial sensors and speed sensors. For example, the inertial sensors may include an accelerometer or other suitable sensors configured to measure changes in translation and/or rotation of the vehicle body. The vehicle may include a speed sensor that measures a speed of the vehicle. In some embodiments, the ego motion of the camera (and hence the vehicle body) may be estimated based on an optical flow analysis of the captured images. An optical flow analysis of a sequence of images identifies movement of pixels from the sequence of images, and based on the identified movement, determines motions of the vehicle. The ego motion may be integrated over time and along the road segment to reconstruct a trajectory associated with the road segment that the vehicle has followed. Data (e.g., reconstructed trajectories) collected by multiple vehicles in multiple drives along a road segment at different times may be used to construct the road model (e.g., including the target trajectories, etc.) 
included in sparse data map800. Data collected by multiple vehicles in multiple drives along a road segment at different times may also be averaged to increase an accuracy of the model. In some embodiments, data regarding the road geometry and/or landmarks may be received from multiple vehicles that travel through the common road segment at different times. Such data received from different vehicles may be combined to generate the road model and/or to update the road model. The geometry of a reconstructed trajectory (and also a target trajectory) along a road segment may be represented by a curve in three dimensional space, which may be a spline connecting three dimensional polynomials. The reconstructed trajectory curve may be determined from analysis of a video stream or a plurality of images captured by a camera installed on the vehicle. In some embodiments, a location is identified in each frame or image that is a few meters ahead of the current position of the vehicle. This location is where the vehicle is expected to travel to in a predetermined time period. This operation may be repeated frame by frame, and at the same time, the vehicle may compute the camera's ego motion (rotation and translation). At each frame or image, a short range model for the desired path is generated by the vehicle in a reference frame that is attached to the camera. The short range models may be stitched together to obtain a three dimensional model of the road in some coordinate frame, which may be an arbitrary or predetermined coordinate frame. The three dimensional model of the road may then be fitted by a spline, which may include or connect one or more polynomials of suitable orders. To conclude the short range road model at each frame, one or more detection modules may be used. For example, a bottom-up lane detection module may be used. The bottom-up lane detection module may be useful when lane marks are drawn on the road. This module may look for edges in the image and assembles them together to form the lane marks. A second module may be used together with the bottom-up lane detection module. The second module is an end-to-end deep neural network, which may be trained to predict the correct short range path from an input image. In both modules, the road model may be detected in the image coordinate frame and transformed to a three dimensional space that may be virtually attached to the camera. Although the reconstructed trajectory modeling method may introduce an accumulation of errors due to the integration of ego motion over a long period of time, which may include a noise component, such errors may be inconsequential as the generated model may provide sufficient accuracy for navigation over a local scale. In addition, it is possible to cancel the integrated error by using external sources of information, such as satellite images or geodetic measurements. For example, the disclosed systems and methods may use a GNSS receiver to cancel accumulated errors. However, the GNSS positioning signals may not be always available and accurate. The disclosed systems and methods may enable a steering application that depends weakly on the availability and accuracy of GNSS positioning. In such systems, the usage of the GNSS signals may be limited. For example, in some embodiments, the disclosed systems may use the GNSS signals for database indexing purposes only. 
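As a rough sketch of the fitting step described above, the following Python example (NumPy only) parameterizes stitched short-range road points by cumulative arc length and fits each coordinate with a third-order polynomial; a production system might instead use a connected (piecewise) spline as the text describes. All names and the synthetic points are illustrative assumptions.

```python
import numpy as np

def fit_road_curve(points_xyz, order=3):
    """Fit x(s), y(s), z(s) as polynomials of cumulative arc length s.
    A full implementation might use a piecewise spline; a single cubic per
    coordinate keeps the sketch short."""
    pts = np.asarray(points_xyz, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))          # arc-length parameter
    coeffs = [np.polyfit(s, pts[:, k], order) for k in range(3)]

    def evaluate(s_query):
        s_query = np.atleast_1d(s_query)
        return np.stack([np.polyval(c, s_query) for c in coeffs], axis=1)

    return evaluate, s[-1]

# Stitched short-range model points expressed in one common coordinate frame.
s_true = np.linspace(0, 80, 40)
points = np.stack([s_true, 0.002 * s_true**2, 0.01 * s_true], axis=1)
curve, total_len = fit_road_curve(points)
print(total_len, curve([0.0, 40.0, 80.0]).round(2))
```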
In some embodiments, the range scale (e.g., local scale) that may be relevant for an autonomous vehicle navigation steering application may be on the order of 50 meters, 100 meters, 200 meters, 300 meters, etc. Such distances may be used, as the geometrical road model is mainly used for two purposes: planning the trajectory ahead and localizing the vehicle on the road model. In some embodiments, the planning task may use the model over a typical range of 40 meters ahead (or any other suitable distance ahead, such as 20 meters, 30 meters, 50 meters), when the control algorithm steers the vehicle according to a target point located 1.3 seconds ahead (or any other time such as 1.5 seconds, 1.7 seconds, 2 seconds, etc.). The localization task uses the road model over a typical range of 60 meters behind the car (or any other suitable distances, such as 50 meters, 100 meters, 150 meters, etc.), according to a method called “tail alignment” described in more detail in another section. The disclosed systems and methods may generate a geometrical model that has sufficient accuracy over particular range, such as 100 meters, such that a planned trajectory will not deviate by more than, for example, 30 cm from the lane center. As explained above, a three dimensional road model may be constructed from detecting short range sections and stitching them together. The stitching may be enabled by computing a six degree ego motion model, using the videos and/or images captured by the camera, data from the inertial sensors that reflect the motions of the vehicle, and the host vehicle velocity signal. The accumulated error may be small enough over some local range scale, such as of the order of 100 meters. All this may be completed in a single drive over a particular road segment. In some embodiments, multiple drives may be used to average the resulted model, and to increase its accuracy further. The same car may travel the same route multiple times, or multiple cars may send their collected model data to a central server. In any case, a matching procedure may be performed to identify overlapping models and to enable averaging in order to generate target trajectories. The constructed model (e.g., including the target trajectories) may be used for steering once a convergence criterion is met. Subsequent drives may be used for further model improvements and in order to accommodate infrastructure changes. Sharing of driving experience (such as sensed data) between multiple cars becomes feasible if they are connected to a central server. Each vehicle client may store a partial copy of a universal road model, which may be relevant for its current position. A bidirectional update procedure between the vehicles and the server may be performed by the vehicles and the server. The small footprint concept discussed above enables the disclosed systems and methods to perform the bidirectional updates using a very small bandwidth. Information relating to potential landmarks may also be determined and forwarded to a central server. For example, the disclosed systems and methods may determine one or more physical properties of a potential landmark based on one or more images that include the landmark. 
The physical properties may include a physical size (e.g., height, width) of the landmark, a distance from a vehicle to a landmark, a distance from the landmark to a previous landmark, the lateral position of the landmark (e.g., the position of the landmark relative to the lane of travel), the GPS coordinates of the landmark, a type of landmark, identification of text on the landmark, etc. For example, a vehicle may analyze one or more images captured by a camera to detect a potential landmark, such as a speed limit sign. The vehicle may determine a distance from the vehicle to the landmark or a position associated with the landmark (e.g., any semantic or non-semantic object or feature along a road segment) based on the analysis of the one or more images. In some embodiments, the distance may be determined based on analysis of images of the landmark using a suitable image analysis method, such as a scaling method and/or an optical flow method. As previously noted, a position of the object/feature may include a 2D image position (e.g., an X-Y pixel position in one or more captured images) of one or more points associated with the object/feature or may include a 3D real-world position of one or more points (e.g., determined through structure in motion/optical flow techniques, LIDAR or RADAR information, etc.). In some embodiments, the disclosed systems and methods may be configured to determine a type or classification of a potential landmark. In case the vehicle determines that a certain potential landmark corresponds to a predetermined type or classification stored in a sparse map, it may be sufficient for the vehicle to communicate to the server an indication of the type or classification of the landmark, along with its location. The server may store such indications. At a later time, during navigation, a navigating vehicle may capture an image that includes a representation of the landmark, process the image (e.g., using a classifier), and compare the result with the stored type or classification of the mapped landmark in order to confirm detection of the mapped landmark and to use the mapped landmark in localizing the navigating vehicle relative to the sparse map. In some embodiments, multiple autonomous vehicles travelling on a road segment may communicate with a server. The vehicles (or clients) may generate curves describing their drives (e.g., through ego motion integration) in an arbitrary coordinate frame. The vehicles may detect landmarks and locate them in the same frame. The vehicles may upload the curves and the landmarks to the server. The server may collect data from vehicles over multiple drives, and generate a unified road model. For example, as discussed below with respect toFIG.19, the server may generate a sparse map having the unified road model using the uploaded curves and landmarks. The server may also distribute the model to clients (e.g., vehicles). For example, the server may distribute the sparse map to one or more vehicles. The server may continuously or periodically update the model when receiving new data from the vehicles. For example, the server may process the new data to evaluate whether the data includes information that should trigger an update to the model, or creation of new data on the server. The server may distribute the updated model or the updates to the vehicles for providing autonomous vehicle navigation. The server may use one or more criteria for determining whether new data received from the vehicles should trigger an update to the model or trigger creation of new data.
For example, when the new data indicates that a previously recognized landmark at a specific location no longer exists, or is replaced by another landmark, the server may determine that the new data should trigger an update to the model. As another example, when the new data indicates that a road segment has been closed, and when this has been corroborated by data received from other vehicles, the server may determine that the new data should trigger an update to the model. The server may distribute the updated model (or the updated portion of the model) to one or more vehicles that are traveling on the road segment, with which the updates to the model are associated. The server may also distribute the updated model to vehicles that are about to travel on the road segment, or vehicles whose planned trip includes the road segment, with which the updates to the model are associated. For example, while an autonomous vehicle is traveling along another road segment before reaching the road segment with which an update is associated, the server may distribute the updates or updated model to the autonomous vehicle before the vehicle reaches the road segment. In some embodiments, the remote server may collect trajectories and landmarks from multiple clients (e.g., vehicles that travel along a common road segment). The server may match curves using landmarks and create an average road model based on the trajectories collected from the multiple vehicles. The server may also compute a graph of roads and the most probable path at each node or junction of the road segment. For example, the remote server may align the trajectories to generate a crowdsourced sparse map from the collected trajectories. The server may average landmark properties received from multiple vehicles that travelled along the common road segment, such as the distances from one landmark to another (e.g., a previous one along the road segment) as measured by multiple vehicles, to determine an arc-length parameter and support localization along the path and speed calibration for each client vehicle. The server may average the physical dimensions of a landmark measured by multiple vehicles that travelled along the common road segment and recognized the same landmark. The averaged physical dimensions may be used to support distance estimation, such as the distance from the vehicle to the landmark. The server may average lateral positions of a landmark (e.g., the position from the lane in which vehicles are travelling to the landmark) measured by multiple vehicles that travelled along the common road segment and recognized the same landmark. The averaged lateral position may be used to support lane assignment. The server may average the GPS coordinates of the landmark measured by multiple vehicles that travelled along the same road segment and recognized the same landmark. The averaged GPS coordinates of the landmark may be used to support global localization or positioning of the landmark in the road model. In some embodiments, the server may identify model changes, such as construction, detours, new signs, removal of signs, etc., based on data received from the vehicles. The server may continuously or periodically or instantaneously update the model upon receiving new data from the vehicles. The server may distribute updates to the model or the updated model to vehicles for providing autonomous navigation. For example, as discussed further below, the server may use crowdsourced data to filter out “ghost” landmarks detected by vehicles.
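The property averaging described above can be pictured with a small Python sketch; the observation fields and values below are hypothetical and only illustrate how per-drive reports of the same landmark might be reduced to a single mapped entry.

```python
from statistics import mean

# Hypothetical per-drive observations of the same speed-limit sign reported
# by different vehicles: GPS position, physical size, and lateral offset.
observations = [
    {"lat": 32.10002, "lon": 34.85001, "height_m": 2.46, "lateral_m": 3.1},
    {"lat": 32.10004, "lon": 34.84998, "height_m": 2.52, "lateral_m": 2.9},
    {"lat": 32.09999, "lon": 34.85003, "height_m": 2.49, "lateral_m": 3.0},
]

def aggregate_landmark(obs):
    """Average each reported property across drives; the averaged values are
    what would be stored for the mapped landmark."""
    keys = obs[0].keys()
    return {k: mean(o[k] for o in obs) for k in keys}

print(aggregate_landmark(observations))
```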
In some embodiments, the server may analyze driver interventions during the autonomous driving. The server may analyze data received from the vehicle at the time and location where intervention occurs, and/or data received prior to the time the intervention occurred. The server may identify certain portions of the data that caused or are closely related to the intervention, for example, data indicating a temporary lane closure setup, data indicating a pedestrian in the road. The server may update the model based on the identified data. For example, the server may modify one or more trajectories stored in the model. FIG.12is a schematic illustration of a system that uses crowdsourcing to generate a sparse map (as well as distribute and navigate using a crowdsourced sparse map).FIG.12shows a road segment1200that includes one or more lanes. A plurality of vehicles1205,1210,1215,1220, and1225may travel on road segment1200at the same time or at different times (although shown as appearing on road segment1200at the same time inFIG.12). At least one of vehicles1205,1210,1215,1220, and1225may be an autonomous vehicle. For simplicity of the present example, all of the vehicles1205,1210,1215,1220, and1225are presumed to be autonomous vehicles. Each vehicle may be similar to vehicles disclosed in other embodiments (e.g., vehicle200), and may include components or devices included in or associated with vehicles disclosed in other embodiments. Each vehicle may be equipped with an image capture device or camera (e.g., image capture device122or camera122). Each vehicle may communicate with a remote server1230via one or more networks (e.g., over a cellular network and/or the Internet, etc.) through wireless communication paths1235, as indicated by the dashed lines. Each vehicle may transmit data to server1230and receive data from server1230. For example, server1230may collect data from multiple vehicles travelling on the road segment1200at different times, and may process the collected data to generate an autonomous vehicle road navigation model, or an update to the model. Server1230may transmit the autonomous vehicle road navigation model or the update to the model to the vehicles that transmitted data to server1230. Server1230may transmit the autonomous vehicle road navigation model or the update to the model to other vehicles that travel on road segment1200at later times. As vehicles1205,1210,1215,1220, and1225travel on road segment1200, navigation information collected (e.g., detected, sensed, or measured) by vehicles1205,1210,1215,1220, and1225may be transmitted to server1230. In some embodiments, the navigation information may be associated with the common road segment1200. The navigation information may include a trajectory associated with each of the vehicles1205,1210,1215,1220, and1225as each vehicle travels over road segment1200. In some embodiments, the trajectory may be reconstructed based on data sensed by various sensors and devices provided on vehicle1205. For example, the trajectory may be reconstructed based on at least one of accelerometer data, speed data, landmarks data, road geometry or profile data, vehicle positioning data, and ego motion data. In some embodiments, the trajectory may be reconstructed based on data from inertial sensors, such as accelerometer, and the velocity of vehicle1205sensed by a speed sensor. 
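As a simplified, planar illustration of reconstructing a trajectory from inertial and speed data, the following Python sketch integrates speed and yaw-rate samples into a path; the disclosed approach operates on full three dimensional translation and rotation (camera ego motion), but the integration principle is the same. The sample values and names are assumptions.

```python
import math

def reconstruct_trajectory(samples, dt=0.1):
    """Integrate (speed_m_s, yaw_rate_rad_s) samples into a 2D trajectory.
    Heading is updated from the yaw rate, then the position is advanced
    along the current heading by speed * dt."""
    x, y, heading = 0.0, 0.0, 0.0
    trajectory = [(x, y)]
    for speed, yaw_rate in samples:
        heading += yaw_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        trajectory.append((x, y))
    return trajectory

# 10 seconds at 20 m/s with a gentle constant left turn.
samples = [(20.0, 0.02)] * 100
path = reconstruct_trajectory(samples)
print(len(path), tuple(round(v, 1) for v in path[-1]))
```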
In addition, in some embodiments, the trajectory may be determined (e.g., by a processor onboard each of vehicles1205,1210,1215,1220, and1225) based on sensed ego motion of the camera, which may indicate three dimensional translation and/or three dimensional rotations (or rotational motions). The ego motion of the camera (and hence the vehicle body) may be determined from analysis of one or more images captured by the camera. In some embodiments, the trajectory of vehicle1205may be determined by a processor provided aboard vehicle1205and transmitted to server1230. In other embodiments, server1230may receive data sensed by the various sensors and devices provided in vehicle1205, and determine the trajectory based on the data received from vehicle1205. In some embodiments, the navigation information transmitted from vehicles1205,1210,1215,1220, and1225to server1230may include data regarding the road surface, the road geometry, or the road profile. The geometry of road segment1200may include lane structure and/or landmarks. The lane structure may include the total number of lanes of road segment1200, the type of lanes (e.g., one-way lane, two-way lane, driving lane, passing lane, etc.), markings on lanes, width of lanes, etc. In some embodiments, the navigation information may include a lane assignment, e.g., which lane of a plurality of lanes a vehicle is traveling in. For example, the lane assignment may be associated with a numerical value “3” indicating that the vehicle is traveling on the third lane from the left or right. As another example, the lane assignment may be associated with a text value “center lane” indicating the vehicle is traveling on the center lane. Server1230may store the navigation information on a non-transitory computer-readable medium, such as a hard drive, a compact disc, a tape, a memory, etc. Server1230may generate (e.g., through a processor included in server1230) at least a portion of an autonomous vehicle road navigation model for the common road segment1200based on the navigation information received from the plurality of vehicles1205,1210,1215,1220, and1225and may store the model as a portion of a sparse map. Server1230may determine a trajectory associated with each lane based on crowdsourced data (e.g., navigation information) received from multiple vehicles (e.g.,1205,1210,1215,1220, and1225) that travel on a lane of road segment at different times. Server1230may generate the autonomous vehicle road navigation model or a portion of the model (e.g., an updated portion) based on a plurality of trajectories determined based on the crowd sourced navigation data. Server1230may transmit the model or the updated portion of the model to one or more of autonomous vehicles1205,1210,1215,1220, and1225traveling on road segment1200or any other autonomous vehicles that travel on road segment at a later time for updating an existing autonomous vehicle road navigation model provided in a navigation system of the vehicles. The autonomous vehicle road navigation model may be used by the autonomous vehicles in autonomously navigating along the common road segment1200. As explained above, the autonomous vehicle road navigation model may be included in a sparse map (e.g., sparse map800depicted inFIG.8). Sparse map800may include sparse recording of data related to road geometry and/or landmarks along a road, which may provide sufficient information for guiding autonomous navigation of an autonomous vehicle, yet does not require excessive data storage. 
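One possible, purely illustrative way to picture the navigation information a vehicle might upload, including the lane assignment described above, is a small data structure such as the following Python sketch; the field names are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class NavigationInfo:
    """Illustrative upload payload from a harvesting vehicle."""
    road_segment_id: int
    lane_assignment: str                           # e.g. "3" or "center lane"
    trajectory: List[Tuple[float, float, float]]   # reconstructed 3D points
    landmarks: List[dict] = field(default_factory=list)

payload = NavigationInfo(
    road_segment_id=1200,
    lane_assignment="center lane",
    trajectory=[(0.0, 0.0, 0.0), (10.0, 0.1, 0.0)],
    landmarks=[{"type": "speed_limit_sign", "position": (12.0, 3.1, 0.0)}],
)
print(payload.lane_assignment, len(payload.trajectory))
```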
In some embodiments, the autonomous vehicle road navigation model may be stored separately from sparse map800, and may use map data from sparse map800when the model is executed for navigation. In some embodiments, the autonomous vehicle road navigation model may use map data included in sparse map800for determining target trajectories along road segment1200for guiding autonomous navigation of autonomous vehicles1205,1210,1215,1220, and1225or other vehicles that later travel along road segment1200. For example, when the autonomous vehicle road navigation model is executed by a processor included in a navigation system of vehicle1205, the model may cause the processor to compare the trajectories determined based on the navigation information received from vehicle1205with predetermined trajectories included in sparse map800to validate and/or correct the current traveling course of vehicle1205. In the autonomous vehicle road navigation model, the geometry of a road feature or target trajectory may be encoded by a curve in a three-dimensional space. In one embodiment, the curve may be a three dimensional spline including one or more connecting three dimensional polynomials. As one of skill in the art would understand, a spline may be a numerical function that is piece-wise defined by a series of polynomials for fitting data. A spline for fitting the three dimensional geometry data of the road may include a linear spline (first order), a quadratic spline (second order), a cubic spline (third order), or any other splines (other orders), or a combination thereof. The spline may include one or more three dimensional polynomials of different orders connecting (e.g., fitting) data points of the three dimensional geometry data of the road. In some embodiments, the autonomous vehicle road navigation model may include a three dimensional spline corresponding to a target trajectory along a common road segment (e.g., road segment1200) or a lane of the road segment1200. As explained above, the autonomous vehicle road navigation model included in the sparse map may include other information, such as identification of at least one landmark along road segment1200. The landmark may be visible within a field of view of a camera (e.g., camera122) installed on each of vehicles1205,1210,1215,1220, and1225. In some embodiments, camera122may capture an image of a landmark. A processor (e.g., processor180,190, or processing unit110) provided on vehicle1205may process the image of the landmark to extract identification information for the landmark. The landmark identification information, rather than an actual image of the landmark, may be stored in sparse map800. The landmark identification information may require much less storage space than an actual image. Other sensors or systems (e.g., GPS system) may also provide certain identification information of the landmark (e.g., position of landmark). The landmark may include at least one of a traffic sign, an arrow marking, a lane marking, a dashed lane marking, a traffic light, a stop line, a directional sign (e.g., a highway exit sign with an arrow indicating a direction, a highway sign with arrows pointing to different directions or places), a landmark beacon, or a lamppost. 
A landmark beacon refers to a device (e.g., an RFID device) installed along a road segment that transmits or reflects a signal to a receiver installed on a vehicle, such that when the vehicle passes by the device, the signal received by the vehicle and the location of the device (e.g., determined from the GPS location of the device) may be used as a landmark to be included in the autonomous vehicle road navigation model and/or the sparse map800. The identification of at least one landmark may include a position of the at least one landmark. The position of the landmark may be determined based on position measurements performed using sensor systems (e.g., Global Positioning Systems, inertial based positioning systems, landmark beacon, etc.) associated with the plurality of vehicles1205,1210,1215,1220, and1225. In some embodiments, the position of the landmark may be determined by averaging the position measurements detected, collected, or received by sensor systems on different vehicles1205,1210,1215,1220, and1225through multiple drives. For example, vehicles1205,1210,1215,1220, and1225may transmit position measurements data to server1230, which may average the position measurements and use the averaged position measurement as the position of the landmark. The position of the landmark may be continuously refined by measurements received from vehicles in subsequent drives. The identification of the landmark may include a size of the landmark. The processor provided on a vehicle (e.g.,1205) may estimate the physical size of the landmark based on the analysis of the images. Server1230may receive multiple estimates of the physical size of the same landmark from different vehicles over different drives. Server1230may average the different estimates to arrive at a physical size for the landmark, and store that landmark size in the road model. The physical size estimate may be used to further determine or estimate a distance from the vehicle to the landmark. The distance to the landmark may be estimated based on the current speed of the vehicle and a scale of expansion based on the position of the landmark appearing in the images relative to the focus of expansion of the camera. For example, the distance to the landmark may be estimated by Z=V*dt*R/D, where V is the speed of the vehicle, R is the distance in the image from the landmark at time t1 to the focus of expansion, D is the change in that distance for the landmark in the image from t1 to t2, and dt represents the time interval (t2−t1). Equivalently, the distance to the landmark may be estimated by Z=V*dt*R/D, where V is the speed of the vehicle, R is the distance in the image between the landmark and the focus of expansion, dt is a time interval, and D is the image displacement of the landmark along the epipolar line. Other equations equivalent to the above equation, such as Z=V*ω/Δω, may be used for estimating the distance to the landmark. Here, V is the vehicle speed, ω is an image length (such as the object width), and Δω is the change of that image length in a unit of time. When the physical size of the landmark is known, the distance to the landmark may also be determined based on the following equation: Z=f*W/ω, where f is the focal length, W is the size of the landmark (e.g., height or width), and ω is the size of the landmark in the image, in pixels. From the above equation, a change in distance Z may be calculated using ΔZ=f*W*Δω/ω²+f*ΔW/ω, where ΔW decays to zero by averaging, and where Δω is the number of pixels representing the bounding box accuracy in the image.
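The two distance relations quoted above can be written directly as code. The following Python sketch implements Z=V*dt*R/D and Z=f*W/ω with illustrative numbers; the parameter names are assumptions chosen to mirror the symbols in the text.

```python
def distance_from_expansion(speed_m_s, dt_s, r_pixels, d_pixels):
    """Z = V * dt * R / D: R is the image distance from the landmark to the
    focus of expansion at t1, D the change of that distance between t1 and t2."""
    return speed_m_s * dt_s * r_pixels / d_pixels

def distance_from_known_size(focal_px, landmark_size_m, image_size_px):
    """Z = f * W / w for a landmark of known physical size W that spans
    w pixels in the image."""
    return focal_px * landmark_size_m / image_size_px

# A sign 0.6 m wide spanning 24 px with a 960 px focal length -> 24 m away.
print(distance_from_known_size(960.0, 0.6, 24.0))
# 25 m/s, 0.1 s between frames, R = 120 px, D = 12.5 px -> 24 m away.
print(distance_from_expansion(25.0, 0.1, 120.0, 12.5))
```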
A value estimating the physical size of the landmark may be calculated by averaging multiple observations at the server side. The resulting error in distance estimation may be very small. There are two sources of error that may occur when using the formula above, namely ΔW and Δω. Their contribution to the distance error is given by ΔZ=f*W*Δω/ω²+f*ΔW/ω. However, ΔW decays to zero by averaging; hence ΔZ is determined by Δω (e.g., the inaccuracy of the bounding box in the image). For landmarks of unknown dimensions, the distance to the landmark may be estimated by tracking feature points on the landmark between successive frames. For example, certain features appearing on a speed limit sign may be tracked between two or more image frames. Based on these tracked features, a distance distribution per feature point may be generated. The distance estimate may be extracted from the distance distribution. For example, the most frequent distance appearing in the distance distribution may be used as the distance estimate. As another example, the average of the distance distribution may be used as the distance estimate. FIG.13illustrates an example autonomous vehicle road navigation model represented by a plurality of three dimensional splines1301,1302, and1303. The curves1301,1302, and1303shown inFIG.13are for illustration purposes only. Each spline may include one or more three dimensional polynomials connecting a plurality of data points1310. Each polynomial may be a first order polynomial, a second order polynomial, a third order polynomial, or a combination of any suitable polynomials having different orders. Each data point1310may be associated with the navigation information received from vehicles1205,1210,1215,1220, and1225. In some embodiments, each data point1310may be associated with data related to landmarks (e.g., size, location, and identification information of landmarks) and/or road signature profiles (e.g., road geometry, road roughness profile, road curvature profile, road width profile). In some embodiments, some data points1310may be associated with data related to landmarks, and others may be associated with data related to road signature profiles. FIG.14illustrates raw location data1410(e.g., GPS data) received from five separate drives. One drive may be separate from another drive if it was traversed by separate vehicles at the same time, by the same vehicle at separate times, or by separate vehicles at separate times. To account for errors in the location data1410and for differing locations of vehicles within the same lane (e.g., one vehicle may drive closer to the left of a lane than another), server1230may generate a map skeleton1420using one or more statistical techniques to determine whether variations in the raw location data1410represent actual divergences or statistical errors. Each path within skeleton1420may be linked back to the raw data1410that formed the path. For example, the path between A and B within skeleton1420is linked to raw data1410from drives2,3,4, and5but not from drive1. Skeleton1420may not be detailed enough to be used to navigate a vehicle (e.g., because it combines drives from multiple lanes on the same road unlike the splines described above) but may provide useful topological information and may be used to define intersections. FIG.15illustrates an example by which additional detail may be generated for a sparse map within a segment of a map skeleton (e.g., segment A to B within skeleton1420). As depicted inFIG.15, the data (e.g.
ego-motion data, road markings data, and the like) may be shown as a function of position S (or S1or S2) along the drive. Server1230may identify landmarks for the sparse map by identifying unique matches between landmarks1501,1503, and1505of drive1510and landmarks1507and1509of drive1520. Such a matching algorithm may result in identification of landmarks1511,1513, and1515. One skilled in the art would recognize, however, that other matching algorithms may be used. For example, probability optimization may be used in lieu of or in combination with unique matching. Server1230may longitudinally align the drives to align the matched landmarks. For example, server1230may select one drive (e.g., drive1520) as a reference drive and then shift and/or elastically stretch the other drive(s) (e.g., drive1510) for alignment. FIG.16shows an example of aligned landmark data for use in a sparse map. In the example ofFIG.16, landmark1610comprises a road sign. The example ofFIG.16further depicts data from a plurality of drives1601,1603,1605,1607,1609,1611, and1613. In the example ofFIG.16, the data from drive1613includes a “ghost” landmark, and the server1230may identify it as such because none of drives1601,1603,1605,1607,1609, and1611include an identification of a landmark in the vicinity of the identified landmark in drive1613. Accordingly, server1230may accept potential landmarks when a ratio of images in which the landmark does appear to images in which the landmark does not appear exceeds a threshold and/or may reject potential landmarks when a ratio of images in which the landmark does not appear to images in which the landmark does appear exceeds a threshold. FIG.17depicts a system1700for generating drive data, which may be used to crowdsource a sparse map. As depicted inFIG.17, system1700may include a camera1701and a locating device1703(e.g., a GPS locator). Camera1701and locating device1703may be mounted on a vehicle (e.g., one of vehicles1205,1210,1215,1220, and1225). Camera1701may produce a plurality of data of multiple types, e.g., ego motion data, traffic sign data, road data, or the like. The camera data and location data may be segmented into drive segments1705. For example, drive segments1705may each have camera data and location data from less than 1 km of driving. In some embodiments, system1700may remove redundancies in drive segments1705. For example, if a landmark appears in multiple images from camera1701, system1700may strip the redundant data such that the drive segments1705only contain one copy of the location of and any metadata relating to the landmark. By way of further example, if a lane marking appears in multiple images from camera1701, system1700may strip the redundant data such that the drive segments1705only contain one copy of the location of and any metadata relating to the lane marking. System1700also includes a server (e.g., server1230). Server1230may receive drive segments1705from the vehicle and recombine the drive segments1705into a single drive1707. Such an arrangement may allow for reduced bandwidth requirements when transferring data between the vehicle and the server while also allowing the server to store data relating to an entire drive. FIG.18depicts system1700ofFIG.17further configured for crowdsourcing a sparse map. As inFIG.17, system1700includes vehicle1810, which captures drive data using, for example, a camera (which produces, e.g., ego motion data, traffic sign data, road data, or the like) and a locating device (e.g., a GPS locator).
As inFIG.17, vehicle1810segments the collected data into drive segments (depicted as “DS11,” “DS21,” “DSN1” inFIG.18). Server1230then receives the drive segments and reconstructs a drive (depicted as “Drive1” inFIG.18) from the received segments. As further depicted inFIG.18, system1700also receives data from additional vehicles. For example, vehicle1820also captures drive data using, for example, a camera (which produces, e.g., ego motion data, traffic sign data, road data, or the like) and a locating device (e.g., a GPS locator). Similar to vehicle1810, vehicle1820segments the collected data into drive segments (depicted as “DS12,” “DS22,” “DSN2” inFIG.18). Server1230then receives the drive segments and reconstructs a drive (depicted as “Drive2” inFIG.18) from the received segments. Any number of additional vehicles may be used. For example,FIG.18also includes “CAR N” that captures drive data, segments it into drive segments (depicted as “DS1N,” “DS2N,” “DSN N” inFIG.18), and sends it to server1230for reconstruction into a drive (depicted as “Drive N” inFIG.18). As depicted inFIG.18, server1230may construct a sparse map (depicted as “MAP”) using the reconstructed drives (e.g., “Drive1,” “Drive2,” and “Drive N”) collected from a plurality of vehicles (e.g., “CAR 1” (also labeled vehicle1810), “CAR 2” (also labeled vehicle1820), and “CAR N”). FIG.19is a flowchart showing an example process1900for generating a sparse map for autonomous vehicle navigation along a road segment. Process1900may be performed by one or more processing devices included in server1230. Process1900may include receiving a plurality of images acquired as one or more vehicles traverse the road segment (step1905). Server1230may receive images from cameras included within one or more of vehicles1205,1210,1215,1220, and1225. For example, camera122may capture one or more images of the environment surrounding vehicle1205as vehicle1205travels along road segment1200. In some embodiments, server1230may also receive stripped down image data that has had redundancies removed by a processor on vehicle1205, as discussed above with respect toFIG.17. Process1900may further include identifying, based on the plurality of images, at least one line representation of a road surface feature extending along the road segment (step1910). Each line representation may represent a path along the road segment substantially corresponding with the road surface feature. For example, server1230may analyze the environmental images received from camera122to identify a road edge or a lane marking and determine a trajectory of travel along road segment1200associated with the road edge or lane marking. In some embodiments, the trajectory (or line representation) may include a spline, a polynomial representation, or a curve. Server1230may determine the trajectory of travel of vehicle1205based on camera ego motions (e.g., three dimensional translation and/or three dimensional rotational motions) received at step1905. Process1900may also include identifying, based on the plurality of images, a plurality of landmarks associated with the road segment (step1910). For example, server1230may analyze the environmental images received from camera122to identify one or more landmarks, such as road sign along road segment1200. Server1230may identify the landmarks using analysis of the plurality of images acquired as one or more vehicles traverse the road segment. 
To enable crowdsourcing, the analysis may include rules regarding accepting and rejecting possible landmarks associated with the road segment. For example, the analysis may include accepting potential landmarks when a ratio of images in which the landmark does appear to images in which the landmark does not appear exceeds a threshold and/or rejecting potential landmarks when a ratio of images in which the landmark does not appear to images in which the landmark does appear exceeds a threshold. Process1900may include other operations or steps performed by server1230. For example, the navigation information may include a target trajectory for vehicles to travel along a road segment, and process1900may include clustering, by server1230, vehicle trajectories related to multiple vehicles travelling on the road segment and determining the target trajectory based on the clustered vehicle trajectories, as discussed in further detail below. Clustering vehicle trajectories may include clustering, by server1230, the multiple trajectories related to the vehicles travelling on the road segment into a plurality of clusters based on at least one of the absolute heading of vehicles or lane assignment of the vehicles. Generating the target trajectory may include averaging, by server1230, the clustered trajectories. By way of further example, process1900may include aligning data received in step1905. Other processes or steps performed by server1230, as described above, may also be included in process1900. The disclosed systems and methods may include other features. For example, the disclosed systems may use local coordinates, rather than global coordinates. For autonomous driving, some systems may present data in world coordinates. For example, longitude and latitude coordinates on the earth surface may be used. In order to use the map for steering, the host vehicle may determine its position and orientation relative to the map. It seems natural to use a GPS device on board, in order to position the vehicle on the map and in order to find the rotation transformation between the body reference frame and the world reference frame (e.g., North, East and Down). Once the body reference frame is aligned with the map reference frame, then the desired route may be expressed in the body reference frame and the steering commands may be computed or generated. The disclosed systems and methods may enable autonomous vehicle navigation (e.g., steering control) with low footprint models, which may be collected by the autonomous vehicles themselves without the aid of expensive surveying equipment. To support the autonomous navigation (e.g., steering applications), the road model may include a sparse map having the geometry of the road, its lane structure, and landmarks that may be used to determine the location or position of vehicles along a trajectory included in the model. As discussed above, generation of the sparse map may be performed by a remote server that communicates with vehicles travelling on the road and that receives data from the vehicles. The data may include sensed data, trajectories reconstructed based on the sensed data, and/or recommended trajectories that may represent modified reconstructed trajectories. As discussed below, the server may transmit the model back to the vehicles or other vehicles that later travel on the road to aid in autonomous navigation. FIG.20illustrates a block diagram of server1230. 
Server1230may include a communication unit2005, which may include both hardware components (e.g., communication control circuits, switches, and antenna), and software components (e.g., communication protocols, computer codes). For example, communication unit2005may include at least one network interface. Server1230may communicate with vehicles1205,1210,1215,1220, and1225through communication unit2005. For example, server1230may receive, through communication unit2005, navigation information transmitted from vehicles1205,1210,1215,1220, and1225. Server1230may distribute, through communication unit2005, the autonomous vehicle road navigation model to one or more autonomous vehicles. Server1230may include at least one non-transitory storage medium2010, such as a hard drive, a compact disc, a tape, etc. Storage device1410may be configured to store data, such as navigation information received from vehicles1205,1210,1215,1220, and1225and/or the autonomous vehicle road navigation model that server1230generates based on the navigation information. Storage device2010may be configured to store any other information, such as a sparse map (e.g., sparse map800discussed above with respect toFIG.8). In addition to or in place of storage device2010, server1230may include a memory2015. Memory2015may be similar to or different from memory140or150. Memory2015may be a non-transitory memory, such as a flash memory, a random access memory, etc. Memory2015may be configured to store data, such as computer codes or instructions executable by a processor (e.g., processor2020), map data (e.g., data of sparse map800), the autonomous vehicle road navigation model, and/or navigation information received from vehicles1205,1210,1215,1220, and1225. Server1230may include at least one processing device2020configured to execute computer codes or instructions stored in memory2015to perform various functions. For example, processing device2020may analyze the navigation information received from vehicles1205,1210,1215,1220, and1225, and generate the autonomous vehicle road navigation model based on the analysis. Processing device2020may control communication unit1405to distribute the autonomous vehicle road navigation model to one or more autonomous vehicles (e.g., one or more of vehicles1205,1210,1215,1220, and1225or any vehicle that travels on road segment1200at a later time). Processing device2020may be similar to or different from processor180,190, or processing unit110. FIG.21illustrates a block diagram of memory2015, which may store computer code or instructions for performing one or more operations for generating a road navigation model for use in autonomous vehicle navigation. As shown inFIG.21, memory2015may store one or more modules for performing the operations for processing vehicle navigation information. For example, memory2015may include a model generating module2105and a model distributing module2110. Processor2020may execute the instructions stored in any of modules2105and2110included in memory2015. Model generating module2105may store instructions which, when executed by processor2020, may generate at least a portion of an autonomous vehicle road navigation model for a common road segment (e.g., road segment1200) based on navigation information received from vehicles1205,1210,1215,1220, and1225. For example, in generating the autonomous vehicle road navigation model, processor2020may cluster vehicle trajectories along the common road segment1200into different clusters. 
Processor2020may determine a target trajectory along the common road segment1200based on the clustered vehicle trajectories for each of the different clusters. Such an operation may include finding a mean or average trajectory of the clustered vehicle trajectories (e.g., by averaging data representing the clustered vehicle trajectories) in each cluster. In some embodiments, the target trajectory may be associated with a single lane of the common road segment1200. The road model and/or sparse map may store trajectories associated with a road segment. These trajectories may be referred to as target trajectories, which are provided to autonomous vehicles for autonomous navigation. The target trajectories may be received from multiple vehicles, or may be generated based on actual trajectories or recommended trajectories (actual trajectories with some modifications) received from multiple vehicles. The target trajectories included in the road model or sparse map may be continuously updated (e.g., averaged) with new trajectories received from other vehicles. Vehicles travelling on a road segment may collect data by various sensors. The data may include landmarks, road signature profile, vehicle motion (e.g., accelerometer data, speed data), vehicle position (e.g., GPS data), and may either reconstruct the actual trajectories themselves, or transmit the data to a server, which will reconstruct the actual trajectories for the vehicles. In some embodiments, the vehicles may transmit data relating to a trajectory (e.g., a curve in an arbitrary reference frame), landmarks data, and lane assignment along traveling path to server1230. Various vehicles travelling along the same road segment at multiple drives may have different trajectories. Server1230may identify routes or trajectories associated with each lane from the trajectories received from vehicles through a clustering process. FIG.22illustrates a process of clustering vehicle trajectories associated with vehicles1205,1210,1215,1220, and1225for determining a target trajectory for the common road segment (e.g., road segment1200). The target trajectory or a plurality of target trajectories determined from the clustering process may be included in the autonomous vehicle road navigation model or sparse map800. In some embodiments, vehicles1205,1210,1215,1220, and1225traveling along road segment1200may transmit a plurality of trajectories2200to server1230. In some embodiments, server1230may generate trajectories based on landmark, road geometry, and vehicle motion information received from vehicles1205,1210,1215,1220, and1225. To generate the autonomous vehicle road navigation model, server1230may cluster vehicle trajectories1600into a plurality of clusters2205,2210,2215,2220,2225, and2230, as shown inFIG.22. Clustering may be performed using various criteria. In some embodiments, all drives in a cluster may be similar with respect to the absolute heading along the road segment1200. The absolute heading may be obtained from GPS signals received by vehicles1205,1210,1215,1220, and1225. In some embodiments, the absolute heading may be obtained using dead reckoning. Dead reckoning, as one of skill in the art would understand, may be used to determine the current position and hence heading of vehicles1205,1210,1215,1220, and1225by using previously determined position, estimated speed, etc. Trajectories clustered by absolute heading may be useful for identifying routes along the roadways. 
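A minimal Python sketch of heading-based clustering follows; it bins drives by the average absolute heading computed from their endpoints. A real implementation would be more robust (e.g., per-point headings, lane assignment as a second criterion), and all names, bin sizes, and sample drives here are assumptions.

```python
import math
from collections import defaultdict

def cluster_by_heading(drives, bin_deg=30.0):
    """Group drives whose overall absolute heading falls in the same bin.
    Each drive is a list of (x, y) points in a shared reference frame."""
    clusters = defaultdict(list)
    for drive in drives:
        (x0, y0), (x1, y1) = drive[0], drive[-1]
        heading = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360.0
        clusters[int(heading // bin_deg)].append(drive)
    return clusters

eastbound = [[(0, 0), (50, 1)], [(0, 0.5), (50, 0.9)]]
westbound = [[(50, 3), (0, 3.4)]]
clusters = cluster_by_heading(eastbound + westbound)
print({k: len(v) for k, v in clusters.items()})  # two clusters: {0: 2, 5: 1}
```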
In some embodiments, all the drives in a cluster may be similar with respect to the lane assignment (e.g., in the same lane before and after a junction) along the drive on road segment1200. Trajectories clustered by lane assignment may be useful for identifying lanes along the roadways. In some embodiments, both criteria (e.g., absolute heading and lane assignment) may be used for clustering. In each cluster2205,2210,2215,2220,2225, and2230, trajectories may be averaged to obtain a target trajectory associated with the specific cluster. For example, the trajectories from multiple drives associated with the same lane cluster may be averaged. The averaged trajectory may be a target trajectory associated with a specific lane. To average a cluster of trajectories, server1230may select a reference frame of an arbitrary trajectory C0. For all other trajectories (C1, . . . , Cn), server1230may find a rigid transformation that maps Ci to C0, where i=1, 2, . . . , n, and where n is a positive integer number corresponding to the total number of trajectories included in the cluster. Server1230may compute a mean curve or trajectory in the C0 reference frame. In some embodiments, the landmarks may define an arc length matching between different drives, which may be used for alignment of trajectories with lanes. In some embodiments, lane marks before and after a junction may be used for alignment of trajectories with lanes. To assemble lanes from the trajectories, server1230may select a reference frame of an arbitrary lane. Server1230may map partially overlapping lanes to the selected reference frame. Server1230may continue mapping until all lanes are in the same reference frame. Lanes that are next to each other may be aligned as if they were the same lane, and later they may be shifted laterally. Landmarks recognized along the road segment may be mapped to the common reference frame, first at the lane level, then at the junction level. For example, the same landmarks may be recognized multiple times by multiple vehicles in multiple drives. The data regarding the same landmarks received in different drives may be slightly different. Such data may be averaged and mapped to the same reference frame, such as the C0 reference frame. Additionally or alternatively, the variance of the data of the same landmark received in multiple drives may be calculated. In some embodiments, each lane of road segment1200may be associated with a target trajectory and certain landmarks. The target trajectory or a plurality of such target trajectories may be included in the autonomous vehicle road navigation model, which may be used later by other autonomous vehicles travelling along the same road segment1200. Landmarks identified by vehicles1205,1210,1215,1220, and1225while the vehicles travel along road segment1200may be recorded in association with the target trajectory. The data of the target trajectories and landmarks may be continuously or periodically updated with new data received from other vehicles in subsequent drives. For localization of an autonomous vehicle, the disclosed systems and methods may use an Extended Kalman Filter. The location of the vehicle may be determined based on three dimensional position data and/or three dimensional orientation data, and on prediction of a future location ahead of the vehicle's current location by integration of ego motion. The localization of the vehicle may be corrected or adjusted by image observations of landmarks.
For example, when a vehicle detects a landmark within an image captured by the camera, the landmark may be compared to a known landmark stored within the road model or sparse map800. The known landmark may have a known location (e.g., GPS data) along a target trajectory stored in the road model and/or sparse map800. Based on the current speed and images of the landmark, the distance from the vehicle to the landmark may be estimated. The location of the vehicle along a target trajectory may be adjusted based on the distance to the landmark and the landmark's known location (stored in the road model or sparse map800). The landmark's position/location data (e.g., mean values from multiple drives) stored in the road model and/or sparse map800may be presumed to be accurate. In some embodiments, the disclosed system may form a closed loop subsystem, in which estimation of the vehicle's six degrees of freedom location (e.g., three dimensional position data plus three dimensional orientation data) may be used for navigating (e.g., steering the wheel of) the autonomous vehicle to reach a desired point (e.g., a point 1.3 seconds ahead along the stored trajectory). In turn, data measured from the steering and actual navigation may be used to estimate the six degrees of freedom location. In some embodiments, poles along a road, such as lampposts and power or cable line poles, may be used as landmarks for localizing the vehicles. Other landmarks, such as traffic signs, traffic lights, arrows on the road, stop lines, as well as static features or signatures of an object along the road segment, may also be used as landmarks for localizing the vehicle. When poles are used for localization, the x observation of the poles (i.e., the viewing angle from the vehicle) may be used, rather than the y observation (i.e., the distance to the pole), since the bottoms of the poles may be occluded and sometimes they are not on the road plane. FIG.23illustrates a navigation system for a vehicle, which may be used for autonomous navigation using a crowdsourced sparse map. For illustration, the vehicle is referenced as vehicle1205. The vehicle shown inFIG.23may be any other vehicle disclosed herein, including, for example, vehicles1210,1215,1220, and1225, as well as vehicle200shown in other embodiments. As shown inFIG.12, vehicle1205may communicate with server1230. Vehicle1205may include an image capture device122(e.g., camera122). Vehicle1205may include a navigation system2300configured for providing navigation guidance for vehicle1205to travel on a road (e.g., road segment1200). Vehicle1205may also include other sensors, such as a speed sensor2320and an accelerometer2325. Speed sensor2320may be configured to detect the speed of vehicle1205. Accelerometer2325may be configured to detect an acceleration or deceleration of vehicle1205. Vehicle1205shown inFIG.23may be an autonomous vehicle, and the navigation system2300may be used for providing navigation guidance for autonomous driving. Alternatively, vehicle1205may also be a non-autonomous, human-controlled vehicle, and navigation system2300may still be used for providing navigation guidance. Navigation system2300may include a communication unit2305configured to communicate with server1230through communication path1235. Navigation system2300may also include a GPS unit2310configured to receive and process GPS signals.
Navigation system2300may further include at least one processor2315configured to process data, such as GPS signals, map data from sparse map800(which may be stored on a storage device provided onboard vehicle1205and/or received from server1230), road geometry sensed by a road profile sensor2330, images captured by camera122, and/or an autonomous vehicle road navigation model received from server1230. The road profile sensor2330may include different types of devices for measuring different types of road profile, such as road surface roughness, road width, road elevation, road curvature, etc. For example, the road profile sensor2330may include a device that measures the motion of a suspension of vehicle1205to derive the road roughness profile. In some embodiments, the road profile sensor2330may include radar sensors to measure the distance from vehicle1205to road sides (e.g., barrier on the road sides), thereby measuring the width of the road. In some embodiments, the road profile sensor2330may include a device configured for measuring the up and down elevation of the road. In some embodiments, the road profile sensor2330may include a device configured to measure the road curvature. For example, a camera (e.g., camera122or another camera) may be used to capture images of the road showing road curvatures. Vehicle1205may use such images to detect road curvatures. The at least one processor2315may be programmed to receive, from camera122, at least one environmental image associated with vehicle1205. The at least one processor2315may analyze the at least one environmental image to determine navigation information related to the vehicle1205. The navigation information may include a trajectory related to the travel of vehicle1205along road segment1200. The at least one processor2315may determine the trajectory based on motions of camera122(and hence the vehicle), such as three dimensional translation and three dimensional rotational motions. In some embodiments, the at least one processor2315may determine the translation and rotational motions of camera122based on analysis of a plurality of images acquired by camera122. In some embodiments, the navigation information may include lane assignment information (e.g., in which lane vehicle1205is travelling along road segment1200). The navigation information transmitted from vehicle1205to server1230may be used by server1230to generate and/or update an autonomous vehicle road navigation model, which may be transmitted back from server1230to vehicle1205for providing autonomous navigation guidance for vehicle1205. The at least one processor2315may also be programmed to transmit the navigation information from vehicle1205to server1230. In some embodiments, the navigation information may be transmitted to server1230along with road location information. The road location information may include at least one of the GPS signal received by the GPS unit2310, landmark information, road geometry, lane information, etc. The at least one processor2315may receive, from server1230, the autonomous vehicle road navigation model or a portion of the model. The autonomous vehicle road navigation model received from server1230may include at least one update based on the navigation information transmitted from vehicle1205to server1230. The portion of the model transmitted from server1230to vehicle1205may include an updated portion of the model.
The at least one processor2315may cause at least one navigational maneuver (e.g., steering such as making a turn, braking, accelerating, passing another vehicle, etc.) by vehicle1205based on the received autonomous vehicle road navigation model or the updated portion of the model. The at least one processor2315may be configured to communicate with various sensors and components included in vehicle1205, including communication unit2305, GPS unit2310, camera122, speed sensor2320, accelerometer2325, and road profile sensor2330. The at least one processor2315may collect information or data from various sensors and components, and transmit the information or data to server1230through communication unit2305. Alternatively or additionally, various sensors or components of vehicle1205may also communicate with server1230and transmit data or information collected by the sensors or components to server1230. In some embodiments, vehicles1205,1210,1215,1220, and1225may communicate with each other, and may share navigation information with each other, such that at least one of the vehicles1205,1210,1215,1220, and1225may generate the autonomous vehicle road navigation model using crowdsourcing, e.g., based on information shared by other vehicles. In some embodiments, vehicles1205,1210,1215,1220, and1225may share navigation information with each other and each vehicle may update its own autonomous vehicle road navigation model provided in the vehicle. In some embodiments, at least one of the vehicles1205,1210,1215,1220, and1225(e.g., vehicle1205) may function as a hub vehicle. The at least one processor2315of the hub vehicle (e.g., vehicle1205) may perform some or all of the functions performed by server1230. For example, the at least one processor2315of the hub vehicle may communicate with other vehicles and receive navigation information from other vehicles. The at least one processor2315of the hub vehicle may generate the autonomous vehicle road navigation model or an update to the model based on the shared information received from other vehicles. The at least one processor2315of the hub vehicle may transmit the autonomous vehicle road navigation model or the update to the model to other vehicles for providing autonomous navigation guidance.
Navigation Based on Sparse Maps
As previously discussed, the autonomous vehicle road navigation model including sparse map800may include a plurality of mapped lane marks and a plurality of mapped objects/features associated with a road segment. As discussed in greater detail below, these mapped lane marks, objects, and features may be used when the autonomous vehicle navigates. For example, in some embodiments, the mapped objects and features may be used to localize a host vehicle relative to the map (e.g., relative to a mapped target trajectory). The mapped lane marks may be used (e.g., as a check) to determine a lateral position and/or orientation relative to a planned or target trajectory. With this position information, the autonomous vehicle may be able to adjust a heading direction to match a direction of a target trajectory at the determined position. Vehicle200may be configured to detect lane marks in a given road segment. These lane marks may include any markings on a road for guiding vehicle traffic on a roadway. For example, the lane marks may be continuous or dashed lines demarking the edge of a lane of travel.
The lane marks may also include double lines, such as double continuous lines, double dashed lines, or a combination of continuous and dashed lines indicating, for example, whether passing is permitted in an adjacent lane. The lane marks may also include freeway entrance and exit markings indicating, for example, a deceleration lane for an exit ramp or dotted lines indicating that a lane is turn-only or that the lane is ending. The markings may further indicate a work zone, a temporary lane shift, a path of travel through an intersection, a median, a special purpose lane (e.g., a bike lane, HOV lane, etc.), or other miscellaneous markings (e.g., crosswalk, a speed hump, a railway crossing, a stop line, etc.). Vehicle200may use cameras, such as image capture devices122and124included in image acquisition unit120, to capture images of the surrounding lane marks. Vehicle200may analyze the images to detect point locations associated with the lane marks based on features identified within one or more of the captured images. These point locations may be uploaded to a server to represent the lane marks in sparse map800. Depending on the position and field of view of the camera, lane marks may be detected for both sides of the vehicle simultaneously from a single image. In other embodiments, different cameras may be used to capture images on multiple sides of the vehicle. Rather than uploading actual images of the lane marks, the marks may be stored in sparse map800as a spline or a series of points, thus reducing the size of sparse map800and/or the data that must be uploaded remotely by the vehicle. FIGS.24A-24Dillustrate exemplary point locations that may be detected by vehicle200to represent particular lane marks. Similar to the landmarks described above, vehicle200may use various image recognition algorithms or software to identify point locations within a captured image. For example, vehicle200may recognize a series of edge points, corner points or various other point locations associated with a particular lane mark.FIG.24Ashows a continuous lane mark2410that may be detected by vehicle200. Lane mark2410may represent the outside edge of a roadway, represented by a continuous white line. As shown inFIG.24A, vehicle200may be configured to detect a plurality of edge location points2411along the lane mark. Location points2411may be collected to represent the lane mark at any intervals sufficient to create a mapped lane mark in the sparse map. For example, the lane mark may be represented by one point per meter of the detected edge, one point per every five meters of the detected edge, or at other suitable spacings. In some embodiments, the spacing may be determined by other factors rather than at set intervals, such as, for example, based on points where vehicle200has a highest confidence ranking of the location of the detected points. AlthoughFIG.24Ashows edge location points on an interior edge of lane mark2410, points may be collected on the outside edge of the line or along both edges. Further, while a single line is shown inFIG.24A, similar edge points may be detected for a double continuous line. For example, points2411may be detected along an edge of one or both of the continuous lines. Vehicle200may also represent lane marks differently depending on the type or shape of lane mark.FIG.24Bshows an exemplary dashed lane mark2420that may be detected by vehicle200.
Rather than identifying edge points, as inFIG.24A, vehicle200may detect a series of corner points2421representing corners of the lane dashes to define the full boundary of the dash. WhileFIG.24Bshows each corner of a given dash marking being located, vehicle200may detect or upload a subset of the points shown in the figure. For example, vehicle200may detect the leading edge or leading corner of a given dash mark, or may detect the two corner points nearest the interior of the lane. Further, not every dash mark may be captured; for example, vehicle200may capture and/or record points representing a sample of dash marks (e.g., every other, every third, every fifth, etc.) or dash marks at a predefined spacing (e.g., every meter, every five meters, every 10 meters, etc.). Corner points may also be detected for similar lane marks, such as markings showing a lane is for an exit ramp, that a particular lane is ending, or other various lane marks that may have detectable corner points. Corner points may also be detected for lane marks consisting of double dashed lines or a combination of continuous and dashed lines. In some embodiments, the points uploaded to the server to generate the mapped lane marks may represent other points besides the detected edge points or corner points.FIG.24Cillustrates a series of points that may represent a centerline of a given lane mark. For example, continuous lane mark2410may be represented by centerline points2441along a centerline2440of the lane mark. In some embodiments, vehicle200may be configured to detect these center points using various image recognition techniques, such as convolutional neural networks (CNN), scale-invariant feature transform (SIFT), histogram of oriented gradients (HOG) features, or other techniques. Alternatively, vehicle200may detect other points, such as edge points2411shown inFIG.24A, and may calculate centerline points2441, for example, by detecting points along each edge and determining a midpoint between the edge points. Similarly, dashed lane mark2420may be represented by centerline points2451along a centerline2450of the lane mark. The centerline points may be located at the edge of a dash, as shown inFIG.24C, or at various other locations along the centerline. For example, each dash may be represented by a single point in the geometric center of the dash. The points may also be spaced at a predetermined interval along the centerline (e.g., every meter, 5 meters, 10 meters, etc.). The centerline points2451may be detected directly by vehicle200, or may be calculated based on other detected reference points, such as corner points2421, as shown inFIG.24B. A centerline may also be used to represent other lane mark types, such as a double line, using similar techniques as above. In some embodiments, vehicle200may identify points representing other features, such as a vertex between two intersecting lane marks.FIG.24Dshows exemplary points representing an intersection between two lane marks2460and2465. Vehicle200may calculate a vertex point2466representing an intersection between the two lane marks. For example, one of lane marks2460or2465may represent a train crossing area or other crossing area in the road segment. While lane marks2460and2465are shown as crossing each other perpendicularly, various other configurations may be detected. For example, the lane marks2460and2465may cross at other angles, or one or both of the lane marks may terminate at the vertex point2466.
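For illustration only, the following non-limiting sketch shows how centerline points might be calculated as midpoints between detected edge points, and how a vertex such as point2466might be computed as the intersection of two lane-mark lines. The one-to-one pairing of edge points and the function names are assumptions of this sketch.

    import numpy as np

    def centerline_points(left_edge_points, right_edge_points):
        # Midpoints between corresponding points detected along the two edges of a lane mark.
        return (np.asarray(left_edge_points, float) + np.asarray(right_edge_points, float)) / 2.0

    def vertex_point(p1, p2, q1, q2):
        # Intersection of the line through p1, p2 with the line through q1, q2.
        p1, p2, q1, q2 = (np.asarray(v, float) for v in (p1, p2, q1, q2))
        d1, d2 = p2 - p1, q2 - q1
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-9:
            return None                               # lines are (nearly) parallel
        t = ((q1[0] - p1[0]) * d2[1] - (q1[1] - p1[1]) * d2[0]) / denom
        return p1 + t * d1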
Similar techniques may also be applied for intersections between dashed or other lane mark types. In addition to vertex point2466, various other points2467may also be detected, providing further information about the orientation of lane marks2460and2465. Vehicle200may associate real-world coordinates with each detected point of the lane mark. For example, location identifiers may be generated, including coordinates for each point, to upload to a server for mapping the lane mark. The location identifiers may further include other identifying information about the points, including whether the point represents a corner point, an edge point, a center point, etc. Vehicle200may therefore be configured to determine a real-world position of each point based on analysis of the images. For example, vehicle200may detect other features in the image, such as the various landmarks described above, to locate the real-world position of the lane marks. This may involve determining the location of the lane marks in the image relative to the detected landmark or determining the position of the vehicle based on the detected landmark and then determining a distance from the vehicle (or target trajectory of the vehicle) to the lane mark. When a landmark is not available, the location of the lane mark points may be determined relative to a position of the vehicle determined based on dead reckoning. The real-world coordinates included in the location identifiers may be represented as absolute coordinates (e.g., latitude/longitude coordinates), or may be relative to other features, such as based on a longitudinal position along a target trajectory and a lateral distance from the target trajectory. The location identifiers may then be uploaded to a server for generation of the mapped lane marks in the navigation model (such as sparse map800). In some embodiments, the server may construct a spline representing the lane marks of a road segment. Alternatively, vehicle200may generate the spline and upload it to the server to be recorded in the navigational model. FIG.24Eshows an exemplary navigation model or sparse map for a corresponding road segment that includes mapped lane marks. The sparse map may include a target trajectory2475for a vehicle to follow along a road segment. As described above, target trajectory2475may represent an ideal path for a vehicle to take as it travels the corresponding road segment, or may be located elsewhere on the road (e.g., a centerline of the road, etc.). Target trajectory2475may be calculated using the various methods described above, for example, based on an aggregation (e.g., a weighted combination) of two or more reconstructed trajectories of vehicles traversing the same road segment. In some embodiments, the target trajectory may be generated equally for all vehicle types and for all road, vehicle, and/or environment conditions. In other embodiments, however, various other factors or variables may also be considered in generating the target trajectory. A different target trajectory may be generated for different types of vehicles (e.g., a private car, a light truck, and a full trailer). For example, a target trajectory with relatively tighter turning radii may be generated for a small private car than for a larger semi-trailer truck. In some embodiments, road, vehicle and environmental conditions may be considered as well.
For example, a different target trajectory may be generated for different road conditions (e.g., wet, snowy, icy, dry, etc.), vehicle conditions (e.g., tire condition or estimated tire condition, brake condition or estimated brake condition, amount of fuel remaining, etc.) or environmental factors (e.g., time of day, visibility, weather, etc.). The target trajectory may also depend on one or more aspects or features of a particular road segment (e.g., speed limit, frequency and size of turns, grade, etc.). In some embodiments, various user settings may also be used to determine the target trajectory, such as a set driving mode (e.g., desired driving aggressiveness, economy mode, etc.). The sparse map may also include mapped lane marks2470and2480representing lane marks along the road segment. The mapped lane marks may be represented by a plurality of location identifiers2471and2481. As described above, the location identifiers may include locations in real world coordinates of points associated with a detected lane mark. Similar to the target trajectory in the model, the lane marks may also include elevation data and may be represented as a curve in three-dimensional space. For example, the curve may be a spline connecting three dimensional polynomials of suitable order, and the curve may be calculated based on the location identifiers. The mapped lane marks may also include other information or metadata about the lane mark, such as an identifier of the type of lane mark (e.g., between two lanes with the same direction of travel, between two lanes of opposite direction of travel, edge of a roadway, etc.) and/or other characteristics of the lane mark (e.g., continuous, dashed, single line, double line, yellow, white, etc.). In some embodiments, the mapped lane marks may be continuously updated within the model, for example, using crowdsourcing techniques. The same vehicle may upload location identifiers during multiple occasions of travelling the same road segment, or data may be selected from a plurality of vehicles (such as1205,1210,1215,1220, and1225) travelling the road segment at different times. Sparse map800may then be updated or refined based on subsequent location identifiers received from the vehicles and stored in the system. As the mapped lane marks are updated and refined, the updated road navigation model and/or sparse map may be distributed to a plurality of autonomous vehicles. Generating the mapped lane marks in the sparse map may also include detecting and/or mitigating errors based on anomalies in the images or in the actual lane marks themselves.FIG.24Fshows an exemplary anomaly2495associated with detecting a lane mark2490. Anomaly2495may appear in the image captured by vehicle200, for example, from an object obstructing the camera's view of the lane mark, debris on the lens, etc. In some instances, the anomaly may be due to the lane mark itself, which may be damaged or worn away, or partially covered, for example, by dirt, debris, water, snow or other materials on the road. Anomaly2495may result in an erroneous point2491being detected by vehicle200. Sparse map800may provide the correct mapped lane mark and exclude the error. In some embodiments, vehicle200may detect erroneous point2491, for example, by detecting anomaly2495in the image, or by identifying the error based on detected lane mark points before and after the anomaly. Based on detecting the anomaly, the vehicle may omit point2491or may adjust it to be in line with other detected points.
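For illustration only, the following non-limiting sketch shows one way an erroneous point such as point2491might be adjusted to be in line with the detected lane mark points before and after the anomaly. The neighbour-midpoint rule and the one-meter threshold are assumptions of this sketch.

    import numpy as np

    def repair_anomalous_points(points, threshold_m=1.0):
        # points: (N, 2) array of detected lane-mark points in drive order.
        pts = np.asarray(points, float).copy()
        for i in range(1, len(pts) - 1):
            expected = (pts[i - 1] + pts[i + 1]) / 2.0
            if np.linalg.norm(pts[i] - expected) > threshold_m:
                pts[i] = expected                     # pull the point back in line with its neighbours
        return pts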
In other embodiments, the error may be corrected after the point has been uploaded, for example, by determining the point is outside of an expected threshold based on other points uploaded during the same trip, or based on an aggregation of data from previous trips along the same road segment. The mapped lane marks in the navigation model and/or sparse map may also be used for navigation by an autonomous vehicle traversing the corresponding roadway. For example, a vehicle navigating along a target trajectory may periodically use the mapped lane marks in the sparse map to align itself with the target trajectory. As mentioned above, between landmarks the vehicle may navigate based on dead reckoning in which the vehicle uses sensors to determine its ego motion and estimate its position relative to the target trajectory. Errors may accumulate over time, and the vehicle's position determinations relative to the target trajectory may become increasingly less accurate. Accordingly, the vehicle may use lane marks occurring in sparse map800(and their known locations) to reduce the dead reckoning-induced errors in position determination. In this way, the identified lane marks included in sparse map800may serve as navigational anchors from which an accurate position of the vehicle relative to a target trajectory may be determined. FIG.25Ashows an exemplary image2500of a vehicle's surrounding environment that may be used for navigation based on the mapped lane marks. Image2500may be captured, for example, by vehicle200through image capture devices122and124included in image acquisition unit120. Image2500may include an image of at least one lane mark2510, as shown inFIG.25A. Image2500may also include one or more landmarks2521, such as a road sign, used for navigation as described above. Some elements shown inFIG.25A, such as elements2511,2530, and2520, which do not appear in the captured image2500but are detected and/or determined by vehicle200, are also shown for reference. Using the various techniques described above with respect toFIGS.24A-Dand24F, a vehicle may analyze image2500to identify lane mark2510. Various points2511may be detected corresponding to features of the lane mark in the image. Points2511, for example, may correspond to an edge of the lane mark, a corner of the lane mark, a midpoint of the lane mark, a vertex between two intersecting lane marks, or various other features or locations. Points2511may be detected to correspond to a location of points stored in a navigation model received from a server. For example, if a sparse map is received containing points that represent a centerline of a mapped lane mark, points2511may also be detected based on a centerline of lane mark2510. The vehicle may also determine a longitudinal position represented by element2520and located along a target trajectory. Longitudinal position2520may be determined from image2500, for example, by detecting landmark2521within image2500and comparing a measured location to a known landmark location stored in the road model or sparse map800. The location of the vehicle along a target trajectory may then be determined based on the distance to the landmark and the landmark's known location. The longitudinal position2520may also be determined from images other than those used to determine the position of a lane mark. For example, longitudinal position2520may be determined by detecting landmarks in images from other cameras within image acquisition unit120taken simultaneously or near simultaneously to image2500.
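For illustration only, the following non-limiting sketch shows how a longitudinal position such as longitudinal position2520might be derived from a detected landmark, assuming the sparse map stores the landmark's arc-length station along the target trajectory and that the distance to the landmark has already been estimated from the captured image(s). The simple blend with a dead-reckoned estimate stands in for the Extended Kalman Filter update mentioned earlier and is an assumption of this sketch.

    def longitudinal_position_from_landmark(landmark_station_m, distance_to_landmark_m):
        # The landmark's known station along the target trajectory (from the road model
        # or sparse map) minus the estimated distance to it gives the vehicle's station.
        return landmark_station_m - distance_to_landmark_m

    def blend_with_dead_reckoning(dead_reckoned_station_m, landmark_station_m, gain=0.5):
        # A simple complementary correction of the dead-reckoned estimate.
        return dead_reckoned_station_m + gain * (landmark_station_m - dead_reckoned_station_m)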
In some instances, the vehicle may not be near any landmarks or other reference points for determining longitudinal position2520. In such instances, the vehicle may be navigating based on dead reckoning and thus may use sensors to determine its ego motion and estimate a longitudinal position2520relative to the target trajectory. The vehicle may also determine a distance2530representing the actual distance between the vehicle and lane mark2510observed in the captured image(s). The camera angle, the speed of the vehicle, the width of the vehicle, or various other factors may be accounted for in determining distance2530. FIG.25Billustrates a lateral localization correction of the vehicle based on the mapped lane marks in a road navigation model. As described above, vehicle200may determine a distance2530between vehicle200and a lane mark2510using one or more images captured by vehicle200. Vehicle200may also have access to a road navigation model, such as sparse map800, which may include a mapped lane mark2550and a target trajectory2555. Mapped lane mark2550may be modeled using the techniques described above, for example using crowdsourced location identifiers captured by a plurality of vehicles. Target trajectory2555may also be generated using the various techniques described previously. Vehicle200may also determine or estimate a longitudinal position2520along target trajectory2555as described above with respect toFIG.25A. Vehicle200may then determine an expected distance2540based on a lateral distance between target trajectory2555and mapped lane mark2550corresponding to longitudinal position2520. The lateral localization of vehicle200may be corrected or adjusted by comparing the actual distance2530, measured using the captured image(s), with the expected distance2540from the model. FIGS.25C and25Dprovide illustrations associated with another example for localizing a host vehicle during navigation based on mapped landmarks/objects/features in a sparse map.FIG.25Cconceptually represents a series of images captured from a vehicle navigating along a road segment2560. In this example, road segment2560includes a straight section of a two-lane divided highway delineated by road edges2561and2562and center lane marking2563. As shown, the host vehicle is navigating along a lane2564, which is associated with a mapped target trajectory2565. Thus, in an ideal situation (and without influencers such as the presence of target vehicles or objects in the roadway, etc.) the host vehicle should closely track the mapped target trajectory2565as it navigates along lane2564of road segment2560. In reality, the host vehicle may experience drift as it navigates along mapped target trajectory2565. For effective and safe navigation, this drift should be maintained within acceptable limits (e.g., +/−10 cm of lateral displacement from target trajectory2565or any other suitable threshold). To periodically account for drift and to make any needed course corrections to ensure that the host vehicle follows target trajectory2565, the disclosed navigation systems may be able to localize the host vehicle along the target trajectory2565(e.g., determine a lateral and longitudinal position of the host vehicle relative to the target trajectory2565) using one or more mapped features/objects included in the sparse map. As a simple example,FIG.25Cshows a speed limit sign2566as it may appear in five different, sequentially captured images as the host vehicle navigates along road segment2560. 
For example, at a first time, t0, sign2566may appear in a captured image near the horizon. As the host vehicle approaches sign2566, in subsequently captured images at times t1, t2, t3, and t4, sign2566will appear at different 2D X-Y pixel locations of the captured images. For example, in the captured image space, sign2566will move downward and to the right along curve2567(e.g., a curve extending through the center of the sign in each of the five captured image frames). Sign2566will also appear to increase in size as it is approached by the host vehicle (i.e., it will occupy a greater number of pixels in subsequently captured images). These changes in the image space representations of an object, such as sign2566, may be exploited to determine a localized position of the host vehicle along a target trajectory. For example, as described in the present disclosure, any detectable object or feature, such as a semantic feature like sign2566or a detectable non-semantic feature, may be identified by one or more harvesting vehicles that previously traversed a road segment (e.g., road segment2560). A mapping server may collect the harvested drive information from a plurality of vehicles, aggregate and correlate that information, and generate a sparse map including, for example, a target trajectory2565for lane2564of road segment2560. The sparse map may also store a location of sign2566(along with type information, etc.). During navigation (e.g., prior to entering road segment2560), a host vehicle may be supplied with a map tile including a sparse map for road segment2560. To navigate in lane2564of road segment2560, the host vehicle may follow mapped target trajectory2565. The mapped representation of sign2566may be used by the host vehicle to localize itself relative to the target trajectory. For example, a camera on the host vehicle will capture an image2570of the environment of the host vehicle, and that captured image2570may include an image representation of sign2566having a certain size and a certain X-Y image location, as shown inFIG.25D. This size and X-Y image location can be used to determine the host vehicle's position relative to target trajectory2565. For example, based on the sparse map including a representation of sign2566, a navigation processor of the host vehicle can determine that in response to the host vehicle traveling along target trajectory2565, a representation of sign2566should appear in captured images such that a center of sign2566will move (in image space) along line2567. If a captured image, such as image2570, shows the center (or other reference point) displaced from line2567(e.g., the expected image space trajectory), then the host vehicle navigation system can determine that at the time of the captured image it was not located on target trajectory2565. From the image, however, the navigation processor can determine an appropriate navigational correction to return the host vehicle to the target trajectory2565. For example, if analysis shows an image location of sign2566that is displaced in the image by a distance2572to the left of the expected image space location on line2567, then the navigation processor may cause a heading change by the host vehicle (e.g., change the steering angle of the wheels) to move the host vehicle leftward by a distance2573.
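For illustration only, the following non-limiting sketch shows how the comparison between an observed image location of sign2566and its expected image-space trajectory (line2567) might yield a lateral correction. The meters-per-pixel scale is a placeholder; in practice the conversion would depend on camera geometry and the object's estimated range, and all names here are assumptions of this sketch.

    import numpy as np

    def lateral_correction_from_object(observed_center_px, expected_curve_px, meters_per_pixel=0.02):
        # observed_center_px: (x, y) pixel location of the object's center in the current image.
        # expected_curve_px: (N, 2) pixel locations the center should follow when the
        # vehicle is on the target trajectory.
        x_obs, y_obs = observed_center_px
        curve = np.asarray(expected_curve_px, float)
        order = np.argsort(curve[:, 1])               # np.interp requires increasing sample points
        x_expected = np.interp(y_obs, curve[order, 1], curve[order, 0])
        displacement_px = x_obs - x_expected          # negative: object appears left of expected
        # Matching the example above: an object displaced to the left of its expected
        # image-space location prompts a leftward correction of the vehicle.
        return displacement_px * meters_per_pixel     # negative value = move leftward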
In this way, each captured image can be used as part of a feedback loop process such that a difference between an observed image position of sign2566and expected image trajectory2567may be minimized to ensure that the host vehicle continues along target trajectory2565with little to no deviation. Of course, the more mapped objects that are available, the more often the described localization technique may be employed, which can reduce or eliminate drift-induced deviations from target trajectory2565. The process described above may be useful for detecting a lateral orientation or displacement of the host vehicle relative to a target trajectory. Localization of the host vehicle relative to target trajectory2565may also include a determination of a longitudinal location of the host vehicle along the target trajectory. For example, captured image2570includes a representation of sign2566as having a certain image size (e.g., 2D X-Y pixel area). This size can be compared to an expected image size of mapped sign2566as it travels through image space along line2567(e.g., as the size of the sign progressively increases, as shown inFIG.25C). Based on the image size of sign2566in image2570, and based on the expected size progression in image space relative to mapped target trajectory2565, the host vehicle can determine its longitudinal position (at the time when image2570was captured) relative to target trajectory2565. This longitudinal position coupled with any lateral displacement relative to target trajectory2565, as described above, allows for full localization of the host vehicle relative to target trajectory2565, as the host vehicle navigates along road segment2560. FIGS.25C and25Dprovide just one example of the disclosed localization technique using a single mapped object and a single target trajectory. In other examples, there may be many more target trajectories (e.g., one target trajectory for each viable lane of a multi-lane highway, urban street, complex junction, etc.) and there may be many more mapped objects available for localization. For example, a sparse map representative of an urban environment may include many objects per meter available for localization. FIG.26Ais a flowchart showing an exemplary process2600A for mapping a lane mark for use in autonomous vehicle navigation, consistent with disclosed embodiments. At step2610, process2600A may include receiving two or more location identifiers associated with a detected lane mark. For example, step2610may be performed by server1230or one or more processors associated with the server. The location identifiers may include locations in real-world coordinates of points associated with the detected lane mark, as described above with respect toFIG.24E. In some embodiments, the location identifiers may also contain other data, such as additional information about the road segment or the lane mark. Additional data may also be received during step2610, such as accelerometer data, speed data, landmark data, road geometry or profile data, vehicle positioning data, ego motion data, or various other forms of data described above. The location identifiers may be generated by a vehicle, such as vehicles1205,1210,1215,1220, and1225, based on images captured by the vehicle.
For example, the identifiers may be determined based on acquisition, from a camera associated with a host vehicle, of at least one image representative of an environment of the host vehicle, analysis of the at least one image to detect the lane mark in the environment of the host vehicle, and analysis of the at least one image to determine a position of the detected lane mark relative to a location associated with the host vehicle. As described above, the lane mark may include a variety of different marking types, and the location identifiers may correspond to a variety of points relative to the lane mark. For example, where the detected lane mark is part of a dashed line marking a lane boundary, the points may correspond to detected corners of the lane mark. Where the detected lane mark is part of a continuous line marking a lane boundary, the points may correspond to a detected edge of the lane mark, with various spacings as described above. In some embodiments, the points may correspond to the centerline of the detected lane mark, as shown inFIG.24C, or may correspond to a vertex between two intersecting lane marks and at least two other points associated with the intersecting lane marks, as shown inFIG.24D. At step2612, process2600A may include associating the detected lane mark with a corresponding road segment. For example, server1230may analyze the real-world coordinates, or other information received during step2610, and compare the coordinates or other information to location information stored in an autonomous vehicle road navigation model. Server1230may determine a road segment in the model that corresponds to the real-world road segment where the lane mark was detected. At step2614, process2600A may include updating an autonomous vehicle road navigation model relative to the corresponding road segment based on the two or more location identifiers associated with the detected lane mark. For example, the autonomous vehicle road navigation model may be sparse map800, and server1230may update the sparse map to include or adjust a mapped lane mark in the model. Server1230may update the model based on the various methods or processes described above with respect toFIG.24E. In some embodiments, updating the autonomous vehicle road navigation model may include storing one or more indicators of position in real world coordinates of the detected lane mark. The autonomous vehicle road navigation model may also include at least one target trajectory for a vehicle to follow along the corresponding road segment, as shown inFIG.24E. At step2616, process2600A may include distributing the updated autonomous vehicle road navigation model to a plurality of autonomous vehicles. For example, server1230may distribute the updated autonomous vehicle road navigation model to vehicles1205,1210,1215,1220, and1225, which may use the model for navigation. The autonomous vehicle road navigation model may be distributed via one or more networks (e.g., over a cellular network and/or the Internet, etc.), through wireless communication paths1235, as shown inFIG.12. In some embodiments, the lane marks may be mapped using data received from a plurality of vehicles, such as through a crowdsourcing technique, as described above with respect toFIG.24E.
For example, process2600A may include receiving a first communication from a first host vehicle, including location identifiers associated with a detected lane mark, and receiving a second communication from a second host vehicle, including additional location identifiers associated with the detected lane mark. For example, the second communication may be received from a subsequent vehicle travelling on the same road segment, or from the same vehicle on a subsequent trip along the same road segment. Process2600A may further include refining a determination of at least one position associated with the detected lane mark based on the location identifiers received in the first communication and based on the additional location identifiers received in the second communication. This may include using an average of the multiple location identifiers and/or filtering out "ghost" identifiers that may not reflect the real-world position of the lane mark. FIG.26Bis a flowchart showing an exemplary process2600B for autonomously navigating a host vehicle along a road segment using mapped lane marks. Process2600B may be performed, for example, by processing unit110of autonomous vehicle200. At step2620, process2600B may include receiving from a server-based system an autonomous vehicle road navigation model. In some embodiments, the autonomous vehicle road navigation model may include a target trajectory for the host vehicle along the road segment and location identifiers associated with one or more lane marks associated with the road segment. For example, vehicle200may receive sparse map800or another road navigation model developed using process2600A. In some embodiments, the target trajectory may be represented as a three-dimensional spline, for example, as shown inFIG.9B. As described above with respect toFIGS.24A-F, the location identifiers may include locations in real world coordinates of points associated with the lane mark (e.g., corner points of a dashed lane mark, edge points of a continuous lane mark, a vertex between two intersecting lane marks and other points associated with the intersecting lane marks, a centerline associated with the lane mark, etc.). At step2621, process2600B may include receiving at least one image representative of an environment of the vehicle. The image may be received from an image capture device of the vehicle, such as through image capture devices122and124included in image acquisition unit120. The image may include an image of one or more lane marks, similar to image2500described above. At step2622, process2600B may include determining a longitudinal position of the host vehicle along the target trajectory. As described above with respect toFIG.25A, this may be based on other information in the captured image (e.g., landmarks, etc.) or by dead reckoning of the vehicle between detected landmarks. At step2623, process2600B may include determining an expected lateral distance to the lane mark based on the determined longitudinal position of the host vehicle along the target trajectory and based on the two or more location identifiers associated with the at least one lane mark. For example, vehicle200may use sparse map800to determine an expected lateral distance to the lane mark. As shown inFIG.25B, longitudinal position2520along a target trajectory2555may be determined in step2622. Using sparse map800, vehicle200may determine an expected distance2540to mapped lane mark2550corresponding to longitudinal position2520.
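For illustration only, the following non-limiting sketch shows how an expected lateral distance such as expected distance2540might be looked up at the determined longitudinal position, and how it might be compared with an actual, image-measured distance. The road-aligned parameterization (arc-length stations with lateral offsets) and the linear interpolation are assumptions of this sketch.

    import numpy as np

    def expected_lateral_distance(longitudinal_position_m, trajectory_stations_m, trajectory_offsets_m,
                                  lane_mark_stations_m, lane_mark_offsets_m):
        # Lateral distance between the target trajectory and the mapped lane mark at the
        # vehicle's longitudinal position, with both sampled in a common road-aligned frame.
        traj_offset = np.interp(longitudinal_position_m, trajectory_stations_m, trajectory_offsets_m)
        mark_offset = np.interp(longitudinal_position_m, lane_mark_stations_m, lane_mark_offsets_m)
        return abs(mark_offset - traj_offset)

    def lateral_error(expected_distance_m, actual_distance_m):
        # Positive when the vehicle is closer to the lane mark than intended.
        return expected_distance_m - actual_distance_m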
At step2624, process2600B may include analyzing the at least one image to identify the at least one lane mark. Vehicle200, for example, may use various image recognition techniques or algorithms to identify the lane mark within the image, as described above. For example, lane mark2510may be detected through image analysis of image2500, as shown inFIG.25A. At step2625, process2600B may include determining an actual lateral distance to the at least one lane mark based on analysis of the at least one image. For example, the vehicle may determine a distance2530, as shown inFIG.25A, representing the actual distance between the vehicle and lane mark2510. The camera angle, the speed of the vehicle, the width of the vehicle, the position of the camera relative to the vehicle, or various other factors may be accounted for in determining distance2530. At step2626, process2600B may include determining an autonomous steering action for the host vehicle based on a difference between the expected lateral distance to the at least one lane mark and the determined actual lateral distance to the at least one lane mark. For example, as described above with respect toFIG.25B, vehicle200may compare actual distance2530with an expected distance2540. The difference between the actual and expected distance may indicate an error (and its magnitude) between the vehicle's actual position and the target trajectory to be followed by the vehicle. Accordingly, the vehicle may determine an autonomous steering action or other autonomous action based on the difference. For example, if actual distance2530is less than expected distance2540, as shown inFIG.25B, the vehicle may determine an autonomous steering action to direct the vehicle left, away from lane mark2510. Thus, the vehicle's position relative to the target trajectory may be corrected. Process2600B may be used, for example, to improve navigation of the vehicle between landmarks. Processes2600A and2600B provide examples only of techniques that may be used for navigating a host vehicle using the disclosed sparse maps. In other examples, processes consistent with those described relative toFIGS.25C and25Dmay also be employed.
Stereo-Assist Network for Determining an Object's Location
As described herein, navigation systems, including those for autonomous vehicles or semi-autonomous vehicles, may navigate using images captured from within the environment of the vehicle. For example, this may include analyzing images to identify objects such as vehicles, pedestrians, traffic signs, or various other objects and determine their position relative to the host vehicle. In some embodiments, these objects may be located through the application of stereo image analysis of first and second sets of images acquired by different image capture devices. For example, the relative positions of objects represented in the images captured from different angles may be analyzed to extract 3D position information. Using existing stereo vision techniques, however, often requires tracking objects across multiple image frames in multiple sets of images in order to accurately determine a 3D position of an object. This form of processing can be demanding in terms of storage and processing bandwidth for a vehicle navigation system. Further, this involves having a single processing unit access multiple image frames from multiple image capture devices, which may not always be feasible and may lead to inefficiencies in processing capability.
For example, requiring all images to be fed to a single processing unit places a large demand on this processing unit that could otherwise be distributed for increased efficiency. To address these and other issues, a host vehicle may use multiple trained models to determine 3D positions of objects relative to the host vehicle based on stereo images. In some embodiments, images captured by first and second cameras (or portions of images) may be processed using first and second trained models, respectively, to generate signature encodings of the images. The signature encodings from different cameras may be input into a third trained model to generate 3D position information. In some embodiments, the first and second trained models may be implemented using processing units associated with the first and second cameras, and the third trained model may be implemented by a central processor. Accordingly, in some embodiments, the central processing unit may only receive signature encodings of the images, rather than the images themselves, which may reduce the bandwidth and processing demands on the central processor. Further, the use of trained models may allow 3D position information to be determined from single image frames from each camera, rather than requiring tracking an object over multiple frames from each camera over time. The disclosed embodiments thus provide improved efficiency, accuracy, and performance over conventional object detection techniques. FIG.27is a diagrammatic representation of a host vehicle2700with multiple cameras for implementing a stereo-assist network, consistent with the disclosed embodiments. Host vehicle2700may be an autonomous or semi-autonomous vehicle, as described above. Host vehicle2700may include one or more cameras onboard host vehicle2700. For example, this may include cameras2720,2730,2740,2750,2760, and2770, as shown inFIG.27. Cameras2720,2730,2740,2750,2760, and2770may be positioned in different locations and/or orientations relative to host vehicle2700such that each of cameras2720,2730,2740,2750,2760, and2770provide different fields of view relative to host vehicle2700. Host vehicle2700may further include a processor2710for determining 3D position information of objects relative to host vehicle2700. In some embodiments, host vehicle2700may correspond to vehicle200discussed above. Accordingly, any of the features or embodiments described herein in reference to vehicle200may also apply to host vehicle2700. For example, one or more of cameras2720,2730,2740,2750,2760, and2770may correspond to one of image capture devices122,124, and126, as described in greater detail above. In some embodiments, processor2710may correspond to processor110described above. Alternatively or additionally, processor2710may be a separate processor. In some embodiments, host vehicle2700may include additional processors associated with one or more cameras. For example, processor2722may be associated with camera2720and processor2732may be associated with camera2730. These camera processors may be associated with cameras in various ways. In some embodiments processor2722may be integrated into the same housing as camera2720and thus may be dedicated to camera2720. Alternatively or additionally, processor2722may be separate from camera2720but may still be dedicated to camera2720. In some embodiments, processor2722may be associated with multiple cameras. 
For example, processor2722could be configured to receive images from cameras2720and2730, perform processing on the images (e.g., inputting the images into first and second models), and transmit a result of the processing (e.g., signature encodings) to processor2710. Alternatively or additionally, each camera may have a dedicated processor, as shown inFIG.27. While separate processors are not shown for cameras2740,2750,2760, and2770, it is to be understood that host vehicle2700may similarly include processors associated with these cameras. Further, the positions of cameras2720,2730,2740,2750,2760, and2770are provided by way of example, and the disclosed embodiments may be used in a wide variety of camera positions. To determine a position of objects relative to the host vehicle, images may be captured from two or more of cameras2720,2730,2740,2750,2760, and2770.FIG.28illustrates an example environment2800for determining positions of objects using images from multiple cameras, consistent with the disclosed embodiments. For example, environment2800may include a pedestrian2810within the vicinity of host vehicle2700. While pedestrian2810is used by way of example, it is to be understood that the disclosed embodiments may be used to determine position information for a wide variety of objects, including vehicles, lane markings, road signs, highway exit ramps, traffic lights, road obstacles or hazards (e.g., road debris, etc.), and any other feature associated with an environment of a vehicle. Consistent with the disclosed embodiments, pedestrian2810may appear in images captured from multiple cameras of host vehicle2700. For example, camera2720may be associated with a field of view2820and camera2730may be associated with a field of view2830. As shown inFIG.28, pedestrian2810may be included in both field of view2820and field of view2830. Accordingly, images captured by camera2720and camera2730may both include representations of pedestrian2810. Through analysis of these captured images, a position relative to host vehicle2700may be determined. In some embodiments, the position of pedestrian2810may be determined through the implementation of multiple trained models, as described above.FIG.29illustrates an example process2900for determining a position of pedestrian2810, consistent with the disclosed embodiments. Images2920and2930may represent images captured using cameras2720and2730, respectively. For example, image2920may have been captured from field of view2820and image2930may have been captured from field of view2830, as described above. Accordingly, images2920and2930may include representations of pedestrian2810, as shown. Process2900may include inputting images2920and2930(or at least portions thereof) into trained models2940and2950, respectively. Trained model2940may be configured to generate a signature encoding2942based on image2920, and trained model2950may be configured to generate a signature encoding2952based on image2930, as shown inFIG.29. In some embodiments, a portion2922of image2920may be extracted and input to trained model2940. For example, portion2922may be a portion of image2920including a representation of pedestrian2810. Accordingly, processor2722(or another processor of host vehicle2700) may detect pedestrian2810within image2920and may crop portion2922from image2920. Similarly, processor2732(or another processor of host vehicle2700) may detect pedestrian2810within image2930and may crop a portion2932(including a representation of pedestrian2810) from image2930.
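For illustration only, the following non-limiting sketch shows how a detected object such as pedestrian2810might be cropped from a camera image and resampled to a canonical size before being passed to a trained model. The nearest-neighbour resampling and the 64×64 output size are assumptions of this sketch.

    import numpy as np

    def crop_to_canonical(image, box, out_h=64, out_w=64):
        # image: (H, W, C) array; box: (row0, row1, col0, col1) around the detection.
        r0, r1, c0, c1 = box
        crop = image[r0:r1, c0:c1]
        rows = np.linspace(0, crop.shape[0] - 1, out_h).round().astype(int)
        cols = np.linspace(0, crop.shape[1] - 1, out_w).round().astype(int)
        return crop[rows][:, cols]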
In some embodiments, portions2922and2932may have a canonical shape or size. For example, portions2922and2932may have a predetermined number of pixels (e.g., 64×16 pixels, or any other suitable image size). In some embodiments, information mapping images2920and2930to a reference coordinate system may also be input into trained models2940and2950. For example, process2900may include inputting information2924mapping image2920to a reference coordinate system into trained model2940and inputting information2934mapping image2930to the reference coordinate system into trained model2950. This information may be any form of information to correlate pixels within images2920and2930to a reference coordinate system (i.e., a rectified stereo pair location). For example, information2924and2934may include portions of lookup tables (LUTs) for cameras2720and2730that correspond to portions2922and2932, respectively. For example, camera2720may be associated with a lookup table that includes information to translate each pixel of image2920to a location within a reference coordinate system. As another example, the information may include an array of coefficients, such as coefficients defining a rotation, translation, scale, skew, or other alterations to an image portion to align it with a reference coordinate system. FIG.30illustrates example lookup tables3020and3030associated with cameras2720and2730, consistent with the disclosed embodiments. Lookup table3020may be a set of information to transform pixels from image2920to a warped position3022within a reference image frame3000. In other words, for each pixel within image2920, lookup table3020may include information indicating image coordinates for the pixel in reference image frame3000. Similarly, lookup table3030may be a set of information to transform pixels from image2930to a warped position3032within reference image frame3000. Accordingly, based on lookup tables3020and3030, representations of corresponding objects in images2920and2930may be aligned in reference image frame3000. Lookup tables3020and3030may be generated based on known positions of cameras2720and2730. In some embodiments, the lookup tables may account for various calibration factors for cameras2720and2730. For example, this may include intrinsic calibration parameters, such as a focal length, an optical center, or a skew coefficient for a camera, as well as extrinsic calibration parameters, such as a rotation or translation of an image. While lookup tables3020and3030for mapping images from cameras2720and2730to a reference image frame3000are shown by way of example, various other lookup tables may be generated for other cameras and/or other reference image frames. For example, camera2740may be associated with a corresponding lookup table to map images captured using camera2740to reference image frame3000. As another example, camera2720may be associated with an additional lookup table for mapping image2920to a reference frame associated with cameras2720and2770. Accordingly, different lookup tables may be used depending on which cameras of host vehicle2700an object is visible in. Alternatively or additionally, all cameras of host vehicle2700may be mapped to the same reference image frame (e.g., as a panoramic image, a 360 degree image, etc.). Returning toFIG.29, information2924and2934may include portions of lookup tables3020and3030. For example, information2924and2934may include the portion of lookup tables3020and3030corresponding to the pixels in image portions2922and2932, respectively.
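For illustration only, the following non-limiting sketch shows how the portion of a lookup table corresponding to a cropped image portion might be gathered so that it can accompany the cropped pixels as mapping information. The per-pixel lookup-table layout (one x array and one y array per camera) and the field names are assumptions of this sketch.

    import numpy as np

    def warp_portion_coordinates(lut_x, lut_y, crop_box):
        # lut_x, lut_y: (H, W) arrays giving, for every pixel of the full camera image,
        # its x and y location in the common reference image frame.
        # crop_box: (row0, row1, col0, col1) of the cropped portion.
        r0, r1, c0, c1 = crop_box
        return np.stack([lut_x[r0:r1, c0:c1], lut_y[r0:r1, c0:c1]], axis=-1)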
Based on image portion2922(and, in some embodiments, information2924), trained model2940may generate a signature encoding2942. Signature encoding2942may include any form of information generated by trained model2940to represent image portion2922. In some embodiments, signature encoding2942may be a string of alphanumeric characters of a predetermined length (e.g., 128 characters), an array of floating point numbers represented by 32-bit integers, or any other suitable format. A similar encoding2952may be generated by trained model2950based on image portion2932(and, in some embodiments, information2934). Trained models2940and2950may include any form of machine learning model trained to generate signature encodings based on images. For example, trained models2940and2950may include convolutional neural networks comprising a series of convolutional layers. As one example, trained models2940and2950may include a series of convolutional layers (some having stride1and some having stride2), each followed by rectified linear unit (ReLU) activation functions and fully-connected layers. Various other training or machine learning algorithms may be used, including a logistic regression, a linear regression, a regression, a random forest, a K-Nearest Neighbor (KNN) model, a K-Means model, a decision tree, a cox proportional hazards regression model, a Naïve Bayes model, a Support Vector Machines (SVM) model, a gradient boosting algorithm, or any other form of machine learning model or algorithm. Signature encodings2942and2952may then be input into another trained model2960. Trained model2960may be configured to generate position information2962, which may be an indicator of the location of pedestrian2810. Accordingly, a format of signature encodings2942and2952may be selected such that they are expressive enough (i.e., having a sufficient length) to encode the information needed for trained model2960to generate position information, while minimizing the data needed to be supplied to trained model2960. Position information2962may be represented in various formats. For example, position information2962may include three-dimensional coordinates, GPS coordinates, a distance from a point on host vehicle2700to pedestrian2810, or various other information that may define a three-dimensional spatial position of pedestrian2810relative to host vehicle2700in environment2800. In some embodiments, signature encodings2942and2952may be combined prior to being fed into trained model2960. For example, signature encodings2942and2952may be stacked (e.g., ordered by left-right of their source images) and fed into trained model2960. In some embodiments, trained model2960may be a convolutional neural network comprising a series of convolutional layers. For example, trained model2960may include a series of fully-connected layers followed by ReLU activation functions. As with trained models2940and2950, various other training or machine learning algorithms may be used, including a logistic regression, a linear regression, a regression, a random forest, a K-Nearest Neighbor (KNN) model, a K-Means model, a decision tree, a cox proportional hazards regression model, a Naïve Bayes model, a Support Vector Machines (SVM) model, a gradient boosting algorithm, or any other form of machine learning model or algorithm. In some embodiments, trained models2940,2950, and2960may be trained together (e.g., based on the same or similar training data). 
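For illustration only, the following non-limiting PyTorch sketch shows one possible arrangement of per-camera signature encoders and a central position model, together with a joint training step in which the loss between the predicted and labelled 3D position is backpropagated through all three models at once. The layer widths, the 128-element signature length, the optimizer, and the loss function are assumptions of this sketch; the mapping information described above could, for example, be concatenated to the crop as extra input channels, which is omitted here for brevity.

    import torch
    import torch.nn as nn

    class SignatureEncoder(nn.Module):
        # Per-camera encoder: convolutional layers (some stride 1, some stride 2) with ReLU
        # activations, followed by a fully-connected layer producing a fixed-length signature.
        def __init__(self, in_channels=3, signature_len=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, stride=1, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, signature_len)

        def forward(self, crop):
            return self.fc(self.conv(crop).flatten(1))      # (batch, signature_len)

    class PositionHead(nn.Module):
        # Central model: the two signature encodings are stacked and passed through
        # fully-connected layers with ReLU activations to produce a 3D position.
        def __init__(self, signature_len=128):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2 * signature_len, 256), nn.ReLU(),
                nn.Linear(256, 64), nn.ReLU(),
                nn.Linear(64, 3),                            # x, y, z relative to the host vehicle
            )

        def forward(self, sig_a, sig_b):
            return self.mlp(torch.cat([sig_a, sig_b], dim=1))

    enc_a, enc_b, head = SignatureEncoder(), SignatureEncoder(), PositionHead()
    optimizer = torch.optim.Adam(
        list(enc_a.parameters()) + list(enc_b.parameters()) + list(head.parameters()), lr=1e-4)
    criterion = nn.MSELoss()

    def train_step(crop_a, crop_b, target_xyz):
        # crop_a, crop_b: (B, 3, H, W) stereo image portions of the same objects;
        # target_xyz: (B, 3) labelled 3D positions relative to the vehicle.
        optimizer.zero_grad()
        loss = criterion(head(enc_a(crop_a), enc_b(crop_b)), target_xyz)
        loss.backward()                                      # gradients flow through all three models
        optimizer.step()
        return loss.item()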
For example, trained models2940,2950, and2960may be trained using a common labeled set of training data. An example training data set may include known 3D positions of various objects (e.g., represented in a three-dimensional coordinate system relative to a vehicle, or other forms of position information as described above) and corresponding image detections of the object in stereo pair images (similar to image portions2922and2932). The training data set is not necessarily limited to stereo pair images captured from the same set of cameras. For example, the training data set may include stereo image pairs captured using cameras2730and2740, cameras2740and2750, cameras2750and2760, cameras2760and2770, cameras2770and2720, or any other pair of cameras having a substantial overlap in field of view (similar to the overlap between field of view2820and field of view2830described above) where the relative calibration between the cameras is known. Further, the training data set is not necessarily limited to data captured by a single vehicle, and may include data captured by multiple vehicles. The training image detections may be input into trained models2940and2950and the output from trained model2960may be compared to the labeled position of the object in the training data to determine a loss. Through the training process, weights, biases, and/or other variables of models2940,2950, and2960may be adjusted to minimize this loss. As a result, trained models2940,2950, and2960may be configured to generate position information2962based on image portions2922and2932, as described above. Training of trained models2940,2950, and2960may occur at the same time, substantially the same time, or at different times. In some embodiments, a single processor (or processing unit) may perform all of process2900. Alternatively or additionally, portions of process2900may be split among multiple processors. For example, processor2722may receive image2920captured by camera2720and may extract portion2922from image2920and information2924from lookup table3020. Processor2722may then input portion2922and information2924into trained model2940and receive signature encoding2942and transmit it to processor2710. Similarly, processor2732may receive image2930captured by camera2730, extract portion2932from image2930, extract information2934from lookup table3030, input portion2932and information2934into trained model2950, receive signature encoding2952, and transmit signature encoding2952to processor2710. Processor2710may then input signature encodings2942and2952into trained model2960to generate position information2962. As described above, signature encodings2942and2952may encode the necessary or relevant information from image portions2922and2932and the corresponding information2924and2934such that model2960can infer a desired geometric output. Accordingly, rather than transmitting images from multiple cameras to processor2710, relatively small signature encodings (e.g., as compared to the original or cropped images) may be transmitted, which may significantly reduce data transmission bandwidth and demands on memory and processing at processor2710. For example, a signature encoding represented as 128 characters may require significantly less data than images2920or2930(or cropped portions thereof).
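Continuing the sketch above, the three models might be trained jointly against labeled 3D positions roughly as follows; the optimizer, the loss, and the batch format (left crop, right crop, labeled position) are assumptions offered for illustration only.

    import torch

    def train_jointly(encoder_a, encoder_b, head, loader, lr=1e-4):
        # encoder_a/encoder_b play the roles of trained models 2940/2950 and head the
        # role of trained model 2960; loader yields (crop_a, crop_b, xyz) batches where
        # xyz is the known 3D position of the object from the labeled training set.
        params = list(encoder_a.parameters()) + list(encoder_b.parameters()) + list(head.parameters())
        optimizer = torch.optim.Adam(params, lr=lr)
        loss_fn = torch.nn.MSELoss()
        for crop_a, crop_b, xyz in loader:
            optimizer.zero_grad()
            pred = head(encoder_a(crop_a), encoder_b(crop_b))  # forward through all three models
            loss = loss_fn(pred, xyz)                          # compare with the labeled position
            loss.backward()                                    # gradients reach all three models
            optimizer.step()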
Even relative to a cropped portion of 64×64 pixels (4096 pixels total) of images2920or2930, assuming the cropped image pixels were represented in bytes and the signature encoding of 128 characters were represented in floats (4 bytes), the signature encoding would still be 8 times smaller than the cropped image (4096 bytes for the cropped image compared to 512 bytes for the signature encoding). This difference in data footprint is even greater when full images2920or2930would otherwise be transmitted to a central processor. Further, in some embodiments, because only a portion of images2920and2930are input into trained models2940and2950, generating signature encodings2942and2952may also be efficient in terms of processing requirements for processors2722and2732. This may improve the overall processing speed and performance of host vehicle2700. For example, host vehicle2700may determine various navigation actions based on the position of pedestrian2810and accordingly, any improvements in the speed and efficiency at which the position is determined may improve the safety and other aspects of host vehicle2700. FIG.31is a flowchart showing an example process3100for navigating a host vehicle, consistent with the disclosed embodiments. Process3100may include techniques for determining a 3D position of one or more objects in order to determine navigation actions for the host vehicle. Process3100may be performed by at least one processing device of a vehicle, such as processing devices2710,2722, and2732, as described above. It is to be understood that throughout the present disclosure, the term “processor” is used as a shorthand for “at least one processor.” In other words, a processor may include one or more structures that perform logic operations whether such structures are collocated, connected, or dispersed. In some embodiments, a non-transitory computer readable medium may contain instructions that when executed by a processor cause the processor to perform process3100. Further, process3100is not necessarily limited to the steps shown inFIG.31, and any steps or processes of the various embodiments described throughout the present disclosure may also be included in process3100, including those described above with respect toFIGS.27,28,29, and30. In step3110, process3100includes receiving a first image acquired by a first camera onboard the host vehicle. For example, step3110may include receiving image2920acquired by camera2720. The first image may have been acquired from an environment of the host vehicle. For example, the first image may have been acquired from field-of-view2820within environment2800. In some embodiments, the first camera and the second camera may be located in different positions relative to a reference point of the host vehicle. In step3120, process3100includes receiving a second image acquired by a second camera onboard the host vehicle. For example, step3120may include receiving image2930acquired by camera2730. The second image may have been acquired from the environment of the host vehicle. For example, the second image may have been acquired from field-of-view2830within environment2800. In step3130, process3100includes identifying a first representation of an object in the first image and a second representation of the object in the second image. For example, this may include identifying representations of pedestrian2810in images2920and2930, as described above.
In some embodiments, this may include the application of various computer vision algorithms for recognizing objects, including various techniques described herein. While a pedestrian is used by way of example, the object may include a vehicle, debris on a road, or various other objects as described herein. In step3140, process3100includes inputting to a first trained model at least a portion of the first image. For example, this may include inputting portion2922of image2920into trained model2940, as described above. Accordingly, the at least a portion of the first image may include at least a portion of the first representation of the object. In some embodiments, the at least a portion of the first image may include a first bounding box. The first trained model may be configured to determine a first signature encoding using at least the first representation of the object. For example, trained model2940may be configured to generate signature encoding2942, as described above. In some embodiments, determining the first signature encoding may further include inputting to the first trained model information mapping the first image to a reference coordinate system. For example, this may include inputting information2924, as shown inFIG.29. The information mapping the first image to the reference coordinate system may include at least a portion of a lookup table associated with the first camera, such as lookup table3020. As described above, the lookup table may be generated based on at least one calibration parameter of the first camera, such as a focal length, an optical center, a skew coefficient, a rotation, a translation, or various other parameters that may be relevant for defining the relative fields-of-view of different cameras. In step3150, process3100includes inputting to a second trained model at least a portion of the second image. For example, this may include inputting portion2932of image2930into trained model2950, as described above. Accordingly, the at least a portion of the second image may include at least a portion of the second representation of the object. In some embodiments, the at least a portion of the second image may include a second bounding box. The second trained model may be configured to determine a second signature encoding using at least the second representation of the object. For example, trained model2950may be configured to generate signature encoding2952, as described above. In some embodiments, determining the second signature encoding may further include inputting to the second trained model information mapping the second image to a reference coordinate system. For example, this may include inputting information2934, as shown inFIG.29. The information mapping the second image to the reference coordinate system may include at least a portion of a lookup table associated with the second camera, such as lookup table3030. In some embodiments, the first trained model and the second trained model each include a neural network. In step3160, process3100includes receiving the first signature encoding determined by the first trained model. For example, this may include receiving signature encoding2942from trained model2940, as described above. Step3160may also include receiving the second signature encoding determined by the second trained model. For example, this may include receiving signature encoding2952from trained model2950, as described above. 
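As a hedged sketch of the processor split described above, the per-camera work of steps 3110 through 3160 and the central work of steps 3170 and 3180 might be divided roughly as follows; the function names and the byte-level message format are illustrative assumptions, and detect, encode, and position_model stand in for the object detector and the trained models.

    import numpy as np

    def camera_side(image, lut, detect, encode):
        # Runs on a processor associated with one camera (e.g., processor 2722).
        top, bottom, left, right = detect(image)            # step 3130: locate the object
        crop = image[top:bottom, left:right]                # at least a portion of the image
        lut_portion = lut[top:bottom, left:right]           # matching part of the lookup table
        encoding = encode(crop, lut_portion)                # steps 3140/3150 and 3160
        return np.asarray(encoding, dtype=np.float32).tobytes()   # ~512 bytes for 128 floats

    def central(enc_bytes_a, enc_bytes_b, position_model):
        # Runs on the central processor (e.g., processor 2710): steps 3170 and 3180.
        enc_a = np.frombuffer(enc_bytes_a, dtype=np.float32)
        enc_b = np.frombuffer(enc_bytes_b, dtype=np.float32)
        return position_model(enc_a, enc_b)                 # indicator of the object's location

Only the compact encodings cross the link between processors, which is the bandwidth advantage discussed above.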
In step3170, process3100includes inputting to a third trained model the first signature encoding and the second signature encoding. For example, this may include inputting signature encoding2942and signature encoding2952into trained model2960, as described above. The third trained model may be configured to determine a location of the object within the environment of the host vehicle based on at least the first signature encoding and the second signature encoding. For example, the third trained model may be configured to determine a location of pedestrian2810within environment2800based on signature encoding2942and signature encoding2952. In step3180, process3100may include receiving an indicator of the location of the object determined by the third trained model. For example, this may include receiving position information2962, as described above. The indicator of location may have various formats. In some embodiments, the indicator of the location of the object may include three-dimensional coordinates. As another example, the indicator of the location of the object may include global positioning system (GPS) coordinates. In embodiments where information mapping the first and second images to a reference coordinate system is input into the first and second models, the location of the object within the environment of the host vehicle may be determined relative to the reference coordinate system. Process3100may further include translating the location of the object within the environment of the host vehicle from the reference coordinate system to a coordinate system of the host vehicle. In some embodiments, process3100may further include causing the at least one processor to output the indicator of the location of the object. The host vehicle may be configured to implement one or more navigational actions based on the indicator of the location of the object. For example, the one or more navigational actions may include at least one of steering, braking, or accelerating. As described above, process3100may be performed by multiple processors of host vehicle2700. For example, the at least one processor may include at least one first processor associated with the first camera, at least one second processor associated with the second camera, and at least one third processor. The at least one first processor may be configured to perform: the receiving of the first image acquired by the first camera (step3110); the identifying of the first representation of the object in the first image (step3130); the inputting of the at least a portion of the first image to the first trained model (step3140); and the receiving of the first signature encoding determined by the first trained model (step3160). The at least one second processor may be configured to perform: the receiving of the second image acquired by the second camera (step3120); the identifying of the second representation of the object in the second image (step3130); the inputting of the at least a portion of the second image to the second trained model (step3150); and the receiving of the second signature encoding determined by the second trained model (step3160). The at least one third processor may be configured to perform: the inputting of the first signature encoding and the second signature encoding to the third trained model (step3170); and the receiving of the indicator of location of the object determined by the third trained model (step3180). The foregoing description has been presented for purposes of illustration.
It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. Additionally, although aspects of the disclosed embodiments are described as being stored in memory, one skilled in the art will appreciate that these aspects can also be stored on other types of computer-readable media, such as secondary storage devices, for example, hard disks or CD ROM, or other forms of RAM or ROM, USB media, DVD, Blu-ray, 4K Ultra HD Blu-ray, or other optical drive media. Computer programs based on the written description and disclosed methods are within the skill of an experienced developer. The various programs or program modules can be created using any of the techniques known to one skilled in the art or can be designed in connection with existing software. For example, program sections or program modules can be designed in or by means of .Net Framework, .Net Compact Framework (and related languages, such as Visual Basic, C, etc.), Java, C++, Objective-C, HTML, HTML/AJAX combinations, XML, or HTML with included Java applets. Moreover, while illustrative embodiments have been described herein, the scope of the present disclosure includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations as would be appreciated by those skilled in the art based on the present disclosure. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. The examples are to be construed as non-exclusive. Furthermore, the steps of the disclosed methods may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents. | 288,792 |
11858505 | DESCRIPTION OF THE EMBODIMENTS Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note, the following embodiments are not intended to limit the scope of the claimed invention, and limitation is not made to an invention that requires a combination of all features described in the embodiments. Two or more of the multiple features described in the embodiments may be combined as appropriate. Furthermore, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted. Hereinafter, embodiments of the present invention will be described with reference to the attached drawings. In various embodiments, the same reference numerals are given to the same configurations, and redundant description thereof is omitted. In addition, the embodiments can be changed and combined as appropriate. FIG.1is a block diagram of a vehicle1according to an embodiment of the present disclosure.FIG.1schematically shows the vehicle1in a plan view and a side view. The vehicle1is a sedan-type four-wheel passenger car, for example. The vehicle1may be such a four-wheel vehicle, or may also be a two-wheeler or another type of vehicle. The vehicle1includes a vehicle control apparatus2(hereinafter, simply referred to as “control apparatus2”) that controls the vehicle1. The control apparatus2includes a plurality of ECUs20to29communicably connected by an in-vehicle network. Each of the ECUs includes a processor represented by a CPU, a memory such as a semiconductor memory, an interface to an external device, and the like. The memory stores programs that are executed by the processor, data that is used for processing by the processor, and the like. Each ECU may also include a plurality of processors, a plurality of memories, a plurality of interfaces, and the like. For example, the ECU20includes a processor20aand a memory20b. As a result of the processor20aexecuting an instruction included in a program stored in the memory20b, processing of the ECU20is executed. In place of this, the ECU20may also include a dedicated integrated circuit for executing the processing of the ECU20such as an ASIC. The same applies to other ECUs. Functions respectively assigned to the ECUs20to29, and the like will be described below. Note that the number of ECUs and the assigned functions can be designed as appropriate, and they can be broken into smaller pieces than this embodiment, or can be integrated. The ECU20executes control related to automated travelling of the vehicle1. In automated driving, at least one of steering and acceleration/deceleration of the vehicle1is automatically controlled. Automated travelling that is performed by the ECU20may include automated travelling (may also be referred to as “automated driving”) that does not require a driver's travelling operation and automated travelling (may also be referred to as “driving assist”) for assisting a driver's travelling operation. The ECU21controls an electronic power steering apparatus3. The electronic power steering apparatus3includes a mechanism for steering front wheels according to a driver's driving operation (steering operation) on a steering wheel31. The electronic power steering apparatus3also includes a motor that exerts drive force for assisting a steering operation or automatically steering the front wheels, a sensor that detects a steering angle, and the like. 
When the driving state of the vehicle1is an automated driving state, the ECU21automatically controls the electronic power steering apparatus3according to an instruction from the ECU20, and controls the direction of forward movement of the vehicle1. The ECUs22and23control detection units41to43that detect the situation surrounding the vehicle, and perform information processing on their detection results. Each detection unit41is a camera for shooting an image ahead of the vehicle1(which may hereinafter be referred to as “camera41”), and, in this embodiment, is installed at a roof front part on an interior side of the front window of the vehicle1. By analyzing an image shot by a camera41, it is possible to extract the contour of a target object and a demarcation line (white line, for example) of a traffic lane on a road. Each detection unit42is a LIDAR (Light Detection and Ranging, may hereinafter be referred to as “LIDAR42”), detects a target object in the surroundings of the vehicle1, and measures the distance from the target object. In this embodiment, five LIDARs42are provided, two of the five LIDARs42being provided at the respective front corners of the vehicle1, one at the rear center, and two on the respective sides at the rear. Each detection unit43is a millimeter-wave radar (which may hereinafter be referred to as “radar43”), detects a target object in the surroundings of the vehicle1, and measures the distance from the target object. In this embodiment, five radars43are provided, one of the radars43being provided at the front center of the vehicle1, two at the respective front corners, and two at the rear corners. The ECU22controls one camera41and the LIDARs42, and performs information processing on their detection results. The ECU23controls the other camera41and the radars43, and performs information processing on their detection results. By providing two sets of apparatuses that detect the surrounding situation of the vehicle, the reliability of detection results can be improved, and by providing detection units of different types such as cameras, LIDARs, radars, and sonars, the surrounding environment of the vehicle can be multilaterally analyzed. The ECU24controls a gyro sensor5, a GPS sensor24b, and a communication apparatus24c, and performs information processing on their detection results or communication results. The gyro sensor5detects rotary movement of the vehicle1. A course of the vehicle1can be determined based on a detection result of the gyro sensor5, a wheel speed, and the like. The GPS sensor24bdetects the current position of the vehicle1. The communication apparatus24cwirelessly communicates with a server that provides map information and traffic information, and acquires such information. The ECU24can access a database24aof map information built in a memory, and the ECU24searches for a route from the current location to a destination, and the like. The ECU24, the map database24a, and the GPS sensor24bconstitute a so-called navigation apparatus. The ECU25includes a communication apparatus25afor inter-vehicle communication. The communication apparatus25awirelessly communicates with another vehicle in the surroundings thereof, and exchanges information with the vehicle. The ECU26controls a power plant6. The power plant6is a mechanism for outputting drive force for rotating the drive wheels of the vehicle1, and includes an engine and a transmission, for example. 
For example, the ECU26controls output of the engine in accordance with a driver's driving operation (an accelerator operation or an accelerating operation) detected by an operation detection sensor7aprovided on an accelerator pedal7A, and switches the gear stage of the transmission based on information regarding the vehicle speed or the like detected by a vehicle speed sensor7c. When the driving state of the vehicle1is an automated driving state, the ECU26automatically controls the power plant6in accordance with an instruction from the ECU20, and controls the acceleration/deceleration of the vehicle1. The ECU27controls lighting devices (lights such as headlights and taillights) that include direction indicators8(blinkers). In the example inFIG.1, direction indicators8are provided on door mirrors, at the front, and at the rear of the vehicle1. The ECU28controls an input/output apparatus9. The input/output apparatus9outputs information to the driver, and receives information input by the driver. An audio output apparatus91notifies the driver of information using sound. A display apparatus92notifies the driver of information through image display. The display apparatus92is installed in front of the driver's seat, for example, and constitutes an instrument panel, or the like. Note that, here, sound and display are illustrated, but information may be notified using vibration and light. In addition, information may also be notified using a combination of some of sound, display, vibration, and light. Furthermore, the combination or a notification aspect may be different according to the level of information to be notified (for example, an emergency level). An input apparatus93is a group of switches that is disposed at a position where the driver can operate the switches and gives instructions to the vehicle1, but a sound input apparatus may also be included. The ECU29controls a brake apparatus10and a parking brake (not illustrated). The brake apparatus10is, for example, a disk brake apparatus, is provided for each of the wheels of the vehicle1, and decelerates or stops the vehicle1by imposing resistance to rotation of the wheels. The ECU29controls activation of the brake apparatus10, for example, in accordance with a driver's driving operation (brake operation) detected by an operation detection sensor7bprovided on a brake pedal7B. When the driving state of the vehicle1is an automated driving state, the ECU29automatically controls the brake apparatus10in accordance with an instruction from the ECU20, and controls deceleration and stop of the vehicle1. The brake apparatus10and the parking brake can also be activated to maintain a stopped state of the vehicle1. In addition, if the transmission of the power plant6includes a parking lock mechanism, this can also be activated in order to maintain a stopped state of the vehicle1. A collision avoidance function that can be executed by the control apparatus2of the vehicle1will be described with reference toFIG.2. Assume that, as shown inFIG.2, the vehicle1is about to enter an intersection201. The vehicle1can detect an object included in a detection region202L, using the detection unit43(the radar43) mounted on the front left side of the vehicle1. Also, the vehicle1can detect an object included in the detection region202R, using the detection unit43(the radar43) on the front right side of the vehicle1. 
When it is detected that an object is included in the detection region202L or202R, the control apparatus2determines whether or not there is the possibility that this object will collide with the vehicle1. For example, the control apparatus2may determine that there is the possibility that the detected object will collide with the vehicle1if the object moves in a direction intersecting a longer direction1aof the vehicle1. The control apparatus2may also determine the possibility of collision further based on the speed of the vehicle1and the speed of the object. The longer direction1aof the vehicle1may also be referred to as the front-and-rear direction of the vehicle1. For example, assume that, in the example inFIG.2, a vehicle203is also travelling toward the intersection201. The control apparatus2of the vehicle1detects that the vehicle203is included in the detection region202L. Since a longer direction203aof the vehicle203intersects the longer direction1aof the vehicle1, the control apparatus2determines that there is the possibility that the vehicle1will collide with the vehicle203. If it is determined that there is the possibility that the vehicle1will collide with another object, the control apparatus2executes an operation for avoiding collision with the vehicle203(hereinafter, referred to as a “collision avoidance operation”). Specifically, the control apparatus2may alert the driver that there is the possibility of colliding with the vehicle203, using the display apparatus92, as the collision avoidance operation. Alternatively or in addition, the control apparatus2may decrease the speed of the vehicle1by causing the brake apparatus10to operate. When alerting the driver to the possibility of collision, the control apparatus2may also present, to the driver, the position of the detected object (for example, right or left) and the type of the detected object (for example, a vehicle, a person, a bicycle). In the example inFIG.2, the vehicle203is used as an example of an object that is to be avoided. Alternatively, an object that is to be avoided may be another object such as a person or a bicycle. In the example inFIG.2, the object included in the detection region202L is detected by the radar43. Alternatively, the object included in the detection region202L may also be detected using a LIDAR or a camera, or any combination of a LIDAR, a camera and a radar. The same applies to the detection region202R. A collision avoidance operation in a case where the vehicle1is about to be parked in a parking space will be described with reference toFIGS.3A and3B. Assume that the vehicle1is about to be parked in a parking space301. Specifically, the vehicle1moves backward at a low speed toward the parking space301, and moves forward as necessary for a multi-point turn. This operation may be performed through manual driving by the driver, or may also be performed by an automatic parking function of the vehicle1. An angle between the longer direction1aof the vehicle1and a longer direction301aof the parking space301is denoted by an angle302. The longer direction301aof the parking space301is a direction that matches the longer direction1aof the vehicle1in the case where the vehicle1is parked in the parking space301by being moved in a straight line. If a demarcation line between the parking space301and an adjacent parking space is present, the longer direction301aof the parking space301may be parallel to this demarcation line.
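One hedged way to express the check described above in code is to compare the object's direction of travel with the longer direction 1a of the vehicle and to require that the object is actually moving; the angle threshold and the speed floor below are illustrative assumptions, not values taken from the disclosure.

    def collision_possible(obj_heading_deg, ego_heading_deg, obj_speed_mps,
                           crossing_threshold_deg=30.0, min_obj_speed_mps=0.5):
        # Angle between the object's direction of travel and the longer direction 1a
        # of the vehicle; values near 0 or 180 degrees mean roughly parallel motion.
        rel = abs((obj_heading_deg - ego_heading_deg + 180.0) % 360.0 - 180.0)
        crossing = min(rel, 180.0 - rel) >= crossing_threshold_deg
        # The speeds of the vehicle and of the object may refine the decision further,
        # e.g., via an estimated time to the crossing point; here only a minimum object
        # speed is required so that a stationary object is not flagged.
        return crossing and obj_speed_mps >= min_obj_speed_mps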
As shown inFIG.3A, in the case where the angle302is large, even if there is a vehicle303approaching the vehicle1, the vehicle303is not included in the detection region202L or202R. Therefore, even if the control apparatus2executes the collision avoidance function, it is not possible to correctly determine the possibility of colliding with the vehicle303. On the other hand, as shown inFIG.3B, in the case where the angle302is small, the vehicle303is included in the detection region202L, and thus the control apparatus2can correctly determine the possibility of colliding with the vehicle303. In view of this, in some embodiments of the present disclosure, a switch is made between activation and inactivation of the collision avoidance function based on the magnitude of the angle302. Accordingly, it is possible to reduce the occurrence of false detection in the collision avoidance function, and to reduce the computation load of the control apparatus2. In the above-described example, a case has been described in which the longer direction301aof the parking space301(for example, orthogonally) intersects a direction in which the other vehicle303proceeds, but the present disclosure is applicable even if these directions are parallel as in parallel parking. Next, an example of a method in which the control apparatus2controls the vehicle1will be described with reference toFIG.4. The method inFIG.4is processed, for example, as a result of the processor20aof the ECU20executing an instruction of a program stored in the memory20bof the ECU20. Alternatively, a configuration may also be adopted in which dedicated hardware (for example, a circuit) executes the steps of the method. This method is started in accordance with the vehicle1starting moving. In step S401, the control apparatus2determines whether or not the vehicle1is about to be parked in a parking space. If the vehicle1is about to be parked in a parking space (YES in step S401), the control apparatus2advances the procedure to step S402, and otherwise (NO in step S401) repeats step S401. In this manner, in step S401, the control apparatus2waits before starting an operation for parking the vehicle1in the parking space. The following steps S402to S405are executed while the vehicle1is about to be parked in the parking space. The control apparatus2may determine whether or not the vehicle1is about to be parked in a parking space, based on detection results of the detection units41to43. Specifically, the control apparatus2may determine that the vehicle1is about to be parked in a parking space, based on a fact that there is a parking space in the vicinity of the vehicle1and the vehicle1has started to move (forward or backward) to approach the parking space. Alternatively, the control apparatus2may also determine that the vehicle1is about to be parked in a parking space, based on receiving, from the driver, an instruction to start the automatic parking function. In step S402, the control apparatus2determines whether or not an angle between the longer direction1aof the vehicle1and the longer direction of the parking space is larger than a threshold value. If the formed angle is larger than the threshold value (YES in step S402), the control apparatus2advances the procedure to step S406, and otherwise (NO in step S402) advances the procedure to step S403. In this example, if the formed angle is equal to the threshold value, the procedure advances to step S403, but, instead, the procedure may advance to step S406.
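A minimal sketch of the step S402 test, under the assumption that the longer direction 1a of the vehicle and the longer direction 301a of the parking space are available as two-dimensional direction vectors (for example, derived from a detected demarcation line), could look as follows; further conditions, such as the direction in which the vehicle is moving, are described below.

    import math

    def angle_between_directions_deg(vehicle_dir, space_dir):
        # vehicle_dir, space_dir: 2D vectors for the longer direction 1a of the vehicle
        # and the longer direction 301a of the parking space.
        dot = vehicle_dir[0] * space_dir[0] + vehicle_dir[1] * space_dir[1]
        norm = math.hypot(vehicle_dir[0], vehicle_dir[1]) * math.hypot(space_dir[0], space_dir[1])
        ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        return min(ang, 180.0 - ang)       # orientation of a line, folded into 0..90 degrees

    def avoidance_function_allowed(angle302_deg, threshold_deg=30.0):
        # Step S402: a formed angle larger than the threshold leads to step S406
        # (inactivation); otherwise the procedure continues toward activation.
        return angle302_deg <= threshold_deg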
The threshold value that is used in step S402may be determined based on the sizes of the detection regions202L and202R, and be stored in the memory20bin advance. The threshold value may be 20°, 30°, or 45°, for example. In step S403, the control apparatus2determines whether or not the vehicle1is moving away from the parking space for a multi-point turn. If the vehicle1is moving away from the parking space (YES in step S403), the control apparatus2advances the procedure to step S404, and otherwise (NO in step S403) advances the procedure to step S406. When the vehicle1has temporarily stopped, the control apparatus2may advance the procedure to either step S404or step S406. It is conceivable that, when the vehicle1is moving to approach a parking space, the likelihood of the vehicle1colliding with an object approaching the vehicle1is low. On the other hand, when the vehicle1is moving away from a parking space, the likelihood of the vehicle1colliding with an object approaching the vehicle1increases. In particular, in the case where the angle between the longer direction1aof the vehicle1and the longer direction of the parking space is smaller than the threshold value, and as shown inFIG.3B, more than half of the vehicle1has entered the parking space301, there is the possibility that the vehicle303will travel to pass through in front of the vehicle1. In view of this, in some embodiments, a switch is made between activation and inactivation of the collision avoidance function based on the direction in which the vehicle1proceeds. Note that, in other embodiments, step S403may be omitted. In step S404, the control apparatus2activates the collision avoidance function. In step S406, the control apparatus2inactivates the collision avoidance function. If the collision avoidance function is active, the control apparatus2executes processing for detecting an object on the front left and front right of the vehicle1, as described with reference toFIG.2, and the collision avoidance operation that is based on the likelihood of colliding with the detected object. If the collision avoidance function is inactive, the control apparatus2does not execute such an operation. In step S405, the control apparatus2determines whether or not parking is complete. If parking is complete (YES in step S405), the control apparatus2ends the procedure, and otherwise (NO in step S405) advances the procedure to step S401. If the procedure returns to step S401, the control apparatus2determines whether or not the vehicle1is still about to be parked. If the vehicle1is no longer about to be parked, and is moving to another location, the control apparatus2repeats step S401. Embodiment Overview Item 1.
A control apparatus (2) of a vehicle (1), the apparatus comprising: a parking determination unit configured to determine whether or not the vehicle is about to be parked in a parking space (301) (step S401); and a collision avoidance unit configured to be able to execute an avoidance function for avoiding collision with an object (203) that is moving in a direction (203a) intersecting a longer direction of the vehicle, wherein, while the vehicle is about to be parked in the parking space, in a case where an angle between the longer direction of the vehicle and a longer direction (301a) of the parking space is larger than a threshold value, the collision avoidance unit inactivates the avoidance function (steps S402and S406), and in a case where the angle between the longer direction of the vehicle and the longer direction of the parking space is smaller than the threshold value, the collision avoidance unit activates the avoidance function (steps S402and S404). According to this embodiment, the collision avoidance function can be executed in appropriate cases. As a result, the processing load of the control apparatus is reduced, and the occurrence of false detection can be reduced. Item 2. The control apparatus according to Item 1, wherein, while the vehicle is about to be parked in the parking space and the angle between the longer direction of the vehicle and the longer direction of the parking space is smaller than the threshold value, in a case where the vehicle is moving to approach the parking space, the collision avoidance unit inactivates the avoidance function (steps S403and S406), and in a case where the vehicle is moving away from the parking space for a multi-point turn, the collision avoidance unit activates the avoidance function (steps S403and S404). According to this embodiment, the collision avoidance function can be executed in more appropriate cases. Item 3. The control apparatus according to Item 1 or 2, wherein the collision avoidance unit executes the avoidance function based on a detection result of a sensor (43) installed on a front lateral side of the vehicle. According to this embodiment, the collision avoidance function can be executed in more appropriate cases. Item 4. A vehicle (1) that includes the control apparatus (2) according to any one of Items 1 to 3. According to this embodiment, a vehicle that has the above-described advantages is provided. The invention is not limited to the foregoing embodiments, and various variations/changes are possible within the spirit of the invention. | 22,532 |
11858506 | DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS FIG.1shows one exemplary embodiment of method100. In step110, a time series11athrough11cof physical observations of surroundings11of ego vehicle not delineated inFIG.1, together with pieces of information12athat have been received via wireless interface12, are processed. These pieces of information12aoriginate from extraneous objects2through4in vehicle surroundings11itself, and/or from an infrastructure5. In step110, extraneous objects2through4are identified, i.e., it is established that three extraneous objects2through4are present, which move in different ways. Extraneous objects2through4are classified in optional step115according to types2dthrough4d. In step120, each of proximate destinations2bthrough4btracked by extraneous objects2through4is predicted and the basic rules2cthrough4care ascertained, according to which the movement of extraneous objects2through4occurs. Similarly, it is ascertained in step130toward which proximate destination1bthe movement of ego vehicle1is headed and according to which basic rules1cthis movement occurs. In step140, respective quality function R1-4is established for ego vehicle1as well as for extraneous objects2through4on the basis of the existing pieces of information, the respective type2dthrough4dof extraneous object2through4capable of being used according to optional substep141, if this type has been determined in optional step115. In step150, quality functions R1-4are expanded to include quality measures Q1-4, which also include expected value E(P(x′)) of a distribution of probabilities P(x′) of state changes x′, and to that extent, also couples quality measures Q1-4among one another. In this case, quality measures Q1-4are selected according to substep151, whose optima with respect to movement strategies π1-4are provided by the Bellman optimum. According to substep152, a Boltzmann-Gibbs distribution is selected as the distribution of probabilities P(x′) of state changes x′. In step160, those movement strategies π1-4of the ego vehicle and of extraneous objects2through4are ascertained, which maximize quality measures Q1-4. Ascertained from this in step170are finally the searched trajectories2athrough4aof extraneous objects2through4as well as setpoint trajectory1aof ego vehicle1adapted thereto. FIG.2shows one exemplary embodiment of method200. Steps210,215,220and230are identical to steps110,115,120and130of method100. In contrast to step140, no complete quality function R1-4is determined in step240of method200, rather feature functions F1-4, which are parameterized with a set θ1-4of parameters still free and only in connection with these parameters θ1-4form complete quality functions R1-4. Types2dthrough4dof extraneous objects2through4, provided they have been determined in step215, may be used in optional substep241for selecting respective feature functions F2-4. In step250, movement strategies π1-4of the ego vehicle and of the extraneous objects are ascertained as those strategies that maximize the maximal causal entropy. At the same time, parameters θ1-4of feature functions F1-4are also determined. In this case, a boundary condition is predefined according to substep251, which enables a recursive determination of movement strategies π1-4. In step260, similar to step170of method100, searched trajectories2athrough4aof extraneous objects2through4as well as setpoint trajectory1aof ego vehicle1are ascertained from movement strategies π1-4. FIG.3shows one exemplary embodiment of method300. 
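The functional form of the quality measures Q1-4, of the Boltzmann-Gibbs distribution, and of the maximum-causal-entropy criterion is not written out above. Purely as a hedged illustration, one standard realization that satisfies the stated properties (optima given by the Bellman optimum, Boltzmann-Gibbs form of the state-change probabilities, feature functions F1-4 weighted by parameters θ1-4) is the soft Bellman recursion used in maximum-causal-entropy formulations; the following equations are an assumption offered for orientation, not formulas taken from the disclosure.

    \begin{aligned}
    Q_i(x,u) &= R_i(x,u) + \mathbb{E}_{x' \sim P(\cdot \mid x,u)}\big[ V_i(x') \big], \\
    V_i(x)   &= \log \sum_{u} \exp Q_i(x,u), \\
    \pi_i(u \mid x) &= \exp\big( Q_i(x,u) - V_i(x) \big), \qquad
    R_i(x,u) = \theta_i^{\top} F_i(x,u),
    \end{aligned}

where i indexes the ego vehicle and the extraneous objects, x is a state, u a movement decision, and the last relation corresponds to the parameterized feature functions of method 200.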
In step310, setpoint trajectory1afor ego vehicle1adapted to the behavior of extraneous objects2through4in surroundings11of ego vehicle1is ascertained using method100or200. This adapted trajectory1ais conveyed in step320to movement planner13of ego vehicle1. In step330, an activation program13afor a drive system14, a steering system15and/or a braking system16of ego vehicle1is ascertained with the aid of movement planner13. In this context, the term trajectory in general relates to a path in combined space and time coordinates. This means that a trajectory may be changed not only by a change of the movement direction, but also by a change of velocity such as, for example, a deceleration, waiting, and a subsequent restarting. In step340, drive system14, steering system15or braking system16is activated according to activation program13a. FIG.4shows a complex traffic scene, in which the described methods100,200,300may be advantageously used. Ego vehicle1is driving straight ahead on the right-hand traffic lane of a road50in the direction of proximate destination1b. First extraneous object2is a further vehicle, whose turn signal2eindicates that its driver intends to turn into side road51leading to proximate destination2bof vehicle2. Second extraneous object3is a further vehicle which, from the perspective of ego vehicle1, is en route in the direction of its proximate destination3bon the oncoming lane of road50. Third extraneous object4is a pedestrian who, from his/her perspective, is heading toward a proximate destination on the opposite side of road50. In the situation depicted inFIG.4, pedestrian4must use crossing52across road50, which also obligates the driver of vehicle3to wait. Thus, the driver of vehicle2may, in principle, immediately accelerate and turn left as intended, which would be optimal for him/her to quickly reach proximate destination2b. Accordingly, ego vehicle1would have clear sailing in its lane at least up to crossing52. A control method under the simplifying assumption that the driver of vehicle2will do the optimum for him/herself would thus accelerate ego vehicle1. If, however, the driver of vehicle2erroneously assesses the situation to the effect that he/she must first allow vehicle3in the oncoming traffic to pass (which would also be correct of course without pedestrian4on crossing52), the ego vehicle then collides with vehicle2from behind. The example methods according to the present invention make it possible to take such uncertainties into consideration. Thus, for example, the velocity for the continuation of travel may be limited to such an extent that, in the event vehicle2actually stops, a collision may be avoided, if necessary by a full brake application. | 6,263 |
11858507 | The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through the use of the accompanying drawings. Any dimensions disclosed in the drawings or elsewhere herein are for the purpose of illustration only. DETAILED DESCRIPTION Embodiments of the present disclosure are described herein. It is to be understood, however, that the disclosed embodiments are merely examples and other embodiments can take various and alternative forms. The figures are not necessarily to scale; some features could be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present disclosure. As those of ordinary skill in the art will understand, various features illustrated and described with reference to any one of the figures can be combined with features illustrated in one or more other figures to produce embodiments that are not explicitly illustrated or described. The combinations of features illustrated provide representative embodiments for typical applications. Various combinations and modifications of the features consistent with the teachings of this disclosure, however, could be desired for particular applications or implementations. Certain terminology may be used in the following description for the purpose of reference only, and thus are not intended to be limiting. For example, terms such as “above” and “below” refer to directions in the drawings to which reference is made. Terms such as “front,” “back,” “left,” “right,” “rear,” and “side” describe the orientation and/or location of portions of the components or elements within a consistent but arbitrary frame of reference which is made clear by reference to the text and the associated drawings describing the components or elements under discussion. Moreover, terms such as “first,” “second,” “third,” and so on may be used to describe separate components. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import. FIG.1schematically illustrates an operating environment that includes a mobile vehicle communication and control system10for a motor vehicle12. The communication and control system10for the vehicle12generally includes one or more wireless carrier systems60, a land communications network62, a computer64, a mobile device57such as a smart phone, and a remote access center78. The vehicle12, shown schematically inFIG.1, is depicted in the illustrated embodiment as a passenger car, but it should be appreciated that any other vehicle including motorcycles, trucks, sport utility vehicles (SUVs), recreational vehicles (RVs), marine vessels, aircraft, etc., can also be used. The vehicle12includes a propulsion system13, which may in various embodiments include an internal combustion engine, an electric machine such as a traction motor, and/or a fuel cell propulsion system. 
The vehicle12also includes a transmission14configured to transmit power from the propulsion system13to a plurality of vehicle wheels15according to selectable speed ratios. According to various embodiments, the transmission14may include a step-ratio automatic transmission, a continuously-variable transmission, or other appropriate transmission. The vehicle12additionally includes wheel brakes17configured to provide braking torque to the vehicle wheels15. The wheel brakes17may, in various embodiments, include friction brakes, a regenerative braking system such as an electric machine, and/or other appropriate braking systems. The vehicle12additionally includes a steering system16. While depicted as including a steering wheel for illustrative purposes, in some embodiments contemplated within the scope of the present disclosure, the steering system16may not include a steering wheel. The vehicle12includes a wireless communications system28configured to wirelessly communicate with other vehicles (“V2V”) and/or infrastructure (“V2I”). In an exemplary embodiment, the wireless communications system28is configured to communicate via a dedicated short-range communications (DSRC) channel. DSRC channels refer to one-way or two-way short-range to medium-range wireless communication channels specifically designed for automotive use and a corresponding set of protocols and standards. However, wireless communications systems configured to communicate via additional or alternate wireless communications standards, such as IEEE 802.11 and cellular data communication, are also considered within the scope of the present disclosure. The propulsion system13, transmission14, steering system16, and wheel brakes17are in communication with or under the control of at least one controller22. While depicted as a single unit for illustrative purposes, the controller22may additionally include one or more other controllers, collectively referred to as a “controller.” The controller22may include a microprocessor or central processing unit (CPU) in communication with various types of computer readable storage devices or media. Computer readable storage devices or media may include volatile and nonvolatile storage in read-only memory (ROM), random-access memory (RAM), and keep-alive memory (KAM), for example. KAM is a persistent or non-volatile memory that may be used to store various operating variables while the CPU is powered down. Computer-readable storage devices or media may be implemented using any of a number of known memory devices such as PROMs (programmable read-only memory), EPROMs (electrically PROM), EEPROMs (electrically erasable PROM), flash memory, or any other electric, magnetic, optical, or combination memory devices capable of storing data, some of which represent executable instructions, used by the controller22in controlling the vehicle. The controller22includes an automated driving system (ADS)24for automatically controlling various actuators in the vehicle. In an exemplary embodiment, the ADS24is a so-called Level Four or Level Five automation system. A Level Four system indicates “high automation”, referring to the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene. 
A Level Five system indicates “full automation”, referring to the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver. In an exemplary embodiment, the ADS24is configured to control the propulsion system13, transmission14, steering system16, and wheel brakes17to control vehicle acceleration, steering, and braking, respectively, without human intervention via a plurality of actuators30in response to inputs from a plurality of sensors26, which may include GPS, RADAR, LIDAR, optical cameras, thermal cameras, ultrasonic sensors, and/or additional sensors as appropriate. FIG.1illustrates several networked devices that can communicate with the wireless communications system28of the vehicle12. One of the networked devices that can communicate with the vehicle12via the wireless communications system28is the mobile device57. The mobile device57can include computer processing capability, a transceiver capable of communicating using a short-range wireless protocol, and a visual smart phone display59. The computer processing capability includes a microprocessor in the form of a programmable device that includes one or more instructions stored in an internal memory structure and applied to receive binary input to create binary output. In some embodiments, the mobile device57includes a GPS module capable of receiving GPS satellite signals and generating GPS coordinates based on those signals. In other embodiments, the mobile device57includes cellular communications functionality such that the mobile device57carries out voice and/or data communications over the wireless carrier system60using one or more cellular communications protocols, as are discussed herein. The visual smart phone display59may also include a touch-screen graphical user interface. The wireless carrier system60is preferably a cellular telephone system that includes a plurality of cell towers70(only one shown), one or more mobile switching centers (MSCs)72, as well as any other networking components required to connect the wireless carrier system60with the land communications network62. Each cell tower70includes sending and receiving antennas and a base station, with the base stations from different cell towers being connected to the MSC72either directly or via intermediary equipment such as a base station controller. The wireless carrier system60can implement any suitable communications technology, including for example, analog technologies such as AMPS, or digital technologies such as CDMA (e.g., CDMA2000) or GSM/GPRS. Other cell tower/base station/MSC arrangements are possible and could be used with the wireless carrier system60. For example, the base station and cell tower could be co-located at the same site or they could be remotely located from one another, each base station could be responsible for a single cell tower or a single base station could service various cell towers, or various base stations could be coupled to a single MSC, to name but a few of the possible arrangements. Apart from using the wireless carrier system60, a second wireless carrier system in the form of satellite communication can be used to provide uni-directional or bi-directional communication with the vehicle12. This can be done using one or more communication satellites66and an uplink transmitting station67. 
Uni-directional communication can include, for example, satellite radio services, wherein programming content (news, music, etc.) is received by the transmitting station67, packaged for upload, and then sent to the satellite66, which broadcasts the programming to subscribers. Bi-directional communication can include, for example, satellite telephony services using the satellite66to relay telephone communications between the vehicle12and the station67. The satellite telephony can be utilized either in addition to or in lieu of the wireless carrier system60. The land network62may be a conventional land-based telecommunications network connected to one or more landline telephones and connects the wireless carrier system60to the remote access center78. For example, the land network62may include a public switched telephone network (PSTN) such as that used to provide hardwired telephony, packet-switched data communications, and the Internet infrastructure. One or more segments of the land network62could be implemented through the use of a standard wired network, a fiber or other optical network, a cable network, power lines, other wireless networks such as wireless local area networks (WLANs), or networks providing broadband wireless access (BWA), or any combination thereof. Furthermore, the remote access center78need not be connected via land network62but could include wireless telephony equipment so that it can communicate directly with a wireless network, such as the wireless carrier system60. While shown inFIG.1as a single device, the computer64may include a number of computers accessible via a private or public network such as the Internet. Each computer64can be used for one or more purposes. In an exemplary embodiment, the computer64may be configured as a web server accessible by the vehicle12via the wireless communications system28and the wireless carrier60. Other computers64can include, for example: a service center computer where diagnostic information and other vehicle data can be uploaded from the vehicle via the wireless communications system28or a third party repository to or from which vehicle data or other information is provided, whether by communicating with the vehicle12, the remote access center78, the mobile device57, or some combination of these. The computer64can maintain a searchable database and database management system that permits entry, removal, and modification of data as well as the receipt of requests to locate data within the database. The computer64can also be used for providing Internet connectivity such as DNS services or as a network address server that uses DHCP or other suitable protocol to assign an IP address to the vehicle12. The computer64may be in communication with at least one supplemental vehicle in addition to the vehicle12. The vehicle12and any supplemental vehicles may be collectively referred to as a fleet. As shown inFIG.2, the ADS24includes multiple distinct control systems, including at least a perception system32for determining the presence, location, classification, and path of detected features or objects in the vicinity of the vehicle. The perception system32is configured to receive inputs from a variety of sensors, such as the sensors26illustrated inFIG.1, and synthesize and process the sensor inputs to generate parameters used as inputs for other control algorithms of the ADS24. The perception system32includes a sensor fusion and preprocessing module34that processes and synthesizes sensor data27from the variety of sensors26. 
The sensor fusion and preprocessing module34performs calibration of the sensor data27, including, but not limited to, LIDAR to LIDAR calibration, camera to LIDAR calibration, LIDAR to chassis calibration, and LIDAR beam intensity calibration. The sensor fusion and preprocessing module34outputs preprocessed sensor output35. A classification and segmentation module36receives the preprocessed sensor output35and performs object classification, image classification, traffic light classification, object segmentation, ground segmentation, and object tracking processes. Object classification includes, but is not limited to, identifying and classifying objects in the surrounding environment including identification and classification of traffic signals and signs, RADAR fusion and tracking to account for the sensor's placement and field of view (FOV), and false positive rejection via LIDAR fusion to eliminate the many false positives that exist in an urban environment, such as, for example, manhole covers, bridges, overhead trees or light poles, and other obstacles with a high RADAR cross section but which do not affect the ability of the vehicle to travel along its path. Additional object classification and tracking processes performed by the classification and segmentation module36include, but are not limited to, freespace detection and high level tracking that fuses data from RADAR tracks, LIDAR segmentation, LIDAR classification, image classification, object shape fit models, semantic information, motion prediction, raster maps, static obstacle maps, and other sources to produce high quality object tracks. The classification and segmentation module36additionally performs traffic control device classification and traffic control device fusion with lane association and traffic control device behavior models. The classification and segmentation module36generates an object classification and segmentation output37that includes object identification information. A localization and mapping module40uses the object classification and segmentation output37to calculate parameters including, but not limited to, estimates of the position and orientation of vehicle12in both typical and challenging driving scenarios. These challenging driving scenarios include, but are not limited to, dynamic environments with many cars (e.g., dense traffic), environments with large scale obstructions (e.g., roadwork or construction sites), hills, multi-lane roads, single lane roads, a variety of road markings and buildings or lack thereof (e.g., residential vs. business districts), and bridges and overpasses (both above and below a current road segment of the vehicle). The localization and mapping module40also incorporates new data collected as a result of expanded map areas obtained via onboard mapping functions performed by the vehicle12during operation and mapping data “pushed” to the vehicle12via the wireless communications system28. The localization and mapping module40updates previous map data with the new information (e.g., new lane markings, new building structures, addition or removal of constructions zones, etc.) while leaving unaffected map regions unmodified. Examples of map data that may be generated or updated include, but are not limited to, yield line categorization, lane boundary generation, lane connection, classification of minor and major roads, classification of left and right turns, and intersection lane creation. 
The localization and mapping module40generates a localization and mapping output41that includes the position and orientation of the vehicle12with respect to detected obstacles and road features. A vehicle odometry module46receives data27from the vehicle sensors26and generates a vehicle odometry output47which includes, for example, vehicle heading and velocity information. An absolute positioning module42receives the localization and mapping output41and the vehicle odometry information47and generates a vehicle location output43that is used in separate calculations as discussed below. An object prediction module38uses the object classification and segmentation output37to generate parameters including, but not limited to, a location of a detected obstacle relative to the vehicle, a predicted path of the detected obstacle relative to the vehicle, and a location and orientation of traffic lanes relative to the vehicle. Data on the predicted path of objects (including pedestrians, surrounding vehicles, and other moving objects) is output as an object prediction output39and is used in separate calculations as discussed below. The ADS24also includes an observation module44and an interpretation module48. The observation module44generates an observation output45received by the interpretation module48. The observation module44and the interpretation module48allow access by the remote access center78. The interpretation module48generates an interpreted output49that includes additional input provided by the remote access center78, if any. A path planning module50processes and synthesizes the object prediction output39, the interpreted output49, and additional routing information79received from an online database or the remote access center78to determine a vehicle path to be followed to maintain the vehicle on the desired route while obeying traffic laws and avoiding any detected obstacles. The path planning module50employs algorithms configured to avoid any detected obstacles in the vicinity of the vehicle, maintain the vehicle in a current traffic lane, and maintain the vehicle on the desired route. The path planning module50outputs the vehicle path information as path planning output51. The path planning output51includes a commanded vehicle path based on the vehicle route, vehicle location relative to the route, location and orientation of traffic lanes, and the presence and path of any detected obstacles. A first control module52processes and synthesizes the path planning output51and the vehicle location output43to generate a first control output53. The first control module52also incorporates the routing information79provided by the remote access center78in the case of a remote take-over mode of operation of the vehicle. A vehicle control module54receives the first control output53as well as velocity and heading information47received from vehicle odometry46and generates vehicle control output55. The vehicle control output55includes a set of actuator commands to achieve the commanded path from the vehicle control module54, including, but not limited to, a steering command, a shift command, a throttle command, and a brake command. The vehicle control output55is communicated to actuators30. In an exemplary embodiment, the actuators30include a steering control, a shifter control, a throttle control, and a brake control. The steering control may, for example, control a steering system16as illustrated inFIG.1. The shifter control may, for example, control a transmission14as illustrated inFIG.1. 
The throttle control may, for example, control a propulsion system13as illustrated inFIG.1. The brake control may, for example, control wheel brakes17as illustrated inFIG.1. The present disclosure describes methods and systems to generate an event structure to represent and understand situations occurring in the environment surrounding an autonomous vehicle as it travels on roadways. By analyzing and prioritizing environmental attention and behavioral attention, multiple events can be abstracted such that one representation causes the same reaction for the corresponding autonomous entities. Conventional event-describing structures use geometrical zone-based categorizations or entity-based categorizations without appropriate abstraction and generalization, thus requiring vast amounts of storage to represent large numbers of events. Therefore, the present disclosure addresses the need for representation of large numbers of events, as experience in normal operating (that is, driving) conditions. The methods and systems disclosed herein make use of a new prioritized attention-based event structure to enable effective situation awareness. Several of the advantages of the methods and systems disclosed herein include, for example and without limitation, human perception-inspired event generation for effective situation awareness, a hierarchical structure that includes attention zones and behavioral attentions, and a risk level analysis to determine prioritized events/obstacles/vehicles/pedestrians/etc. within each attention zone. Additionally, the methods and systems disclosed herein include an urgent attention zone for dealing with anomalous events/obstacles/vehicles/pedestrians/etc. that pose an immediate concern for autonomous operation of the vehicle12. Finally, the methods and systems disclosed herein effectively compress traffic situational information for efficient data processing. Using the methods and systems disclosed herein, information acquired from the sensors of an autonomous vehicle, such as the vehicle12, can be applied to perceive, reason, and understand surrounding situations with a more human-like capability and with less computational complexity without losing crucial details of the events and surroundings, leading to improved navigation decisions by the ADS24. Appropriate situation awareness is particularly useful for autonomous driving not only to enable safe operation of the vehicle12but also to understand the surrounding environment and make appropriate navigational and vehicle control decisions. While it may be desirable to use and store many kinds of information during the autonomous driving decision processes performed by the ADS24, for practical reasons, input data to the ADS24should be efficiently represented, stored, and used. Therefore, the ADS24should utilize methods and systems that are well-designed for both efficiency and sufficiency of decision-making. The methods and systems disclosed herein assess adjacent situations surrounding the vehicle12for urgency and threat to the vehicle's current and projected path of travel. Focusing on the immediate surroundings by combining zone attention and behavior attention and assigning weights to entities within various attention zones allows the ADS24to deal with multiple neighboring entities and complicated scenarios. FIG.3illustrates a high-level diagram of a method100to generate cognitive situation awareness using an attention-based event structure, according to an embodiment. 
The method100can be utilized in connection with the vehicle12and the various modules of the ADS24discussed herein. The method100can be utilized in connection with the controller22as discussed herein, or by other systems associated with or separate from the vehicle, in accordance with exemplary embodiments. The order of operation of the method100is not limited to the sequential execution as illustrated inFIG.3, but may be performed in one or more varying orders, or steps may be performed simultaneously, as applicable in accordance with the present disclosure. At102, the ADS24receives perception inputs from the sensors26of the vehicle12. In various embodiments, the perception inputs include sensor data from the variety of sensors including GPS, RADAR, LIDAR, optical cameras, thermal cameras, ultrasonic sensors, and/or additional sensors as appropriate. The perception inputs includes data on the surrounding environment as well as data on the vehicle characteristics including speed, braking, projected path of travel, etc., for example and without limitation. In various embodiments, the perception input is sensor data relative to external features, such as other vehicles, objects, pedestrians, etc. in a vicinity of the vehicle12. In various embodiments, the perception inputs are received from the sensors26by the perception system32of the ADS24. The various modules of the ADS24process the sensor data and deliver the data, in the form of tokens, to a cognitive situation awareness module, as shown at104. In some embodiments, the cognitive situation awareness module is a module of the ADS24and works in combination with the localization and mapping module40to estimate the position of the vehicle12in both typical and challenging driving scenarios. Additionally, the cognitive situation awareness module works in combination with the object prediction module38of the ADS24to further classify and generate parameters related to a location of a detected obstacle relative to the vehicle12, a predicted path of the detected obstacle relative to the vehicle12, and a location and orientation of traffic lanes relative to the vehicle12. As discussed in greater detail herein, the cognitive situation awareness functions include zone attention assignments to any detected obstacles or entities in the environment surrounding the vehicle12, behavior attention estimations for the detected obstacles or entities, risk level analysis of the detected obstacles or entities and identification of any anomalous entities or behavior, and re-assignment of zone attention assignments for any anomalous entities or behavior. The cognitive situation awareness functions generate a corresponding hierarchical event structure, as shown at114. The hierarchical event structure is illustrated in greater detail inFIG.4and discussed in greater detail below. The event structure information is then used to develop behavior planning, as shown at116. The behavior planning may be performed by the object prediction module38of the ADS24, or by another module of the ADS24. The decision behavior, typically in the form of a trajectory, is generated from the behavior planning and is synthesized with the other information used by the path planning module50to generate a vehicle path to be followed to maintain the vehicle on the desired route while obeying traffic laws and avoiding any detected obstacles. The path planning output51, including the decision behavior, is sent to the vehicle controller, such as the vehicle control module54, as shown at118. 
As described herein, the vehicle control module54generates one or more control signals or vehicle control output55that are sent to hardware of the vehicle12, such as one or more of the actuators30, to achieve the commanded vehicle path including, but not limited to a steering command, a braking command, and a throttle command. In various embodiments, the method100outlined inFIG.3may be performed by one controller, such as the controller22, or may be distributed across multiple controllers of the vehicle12, depending on the computational load, etc. With continued reference toFIG.3, and more specifically to the cognitive situation awareness step shown at104, once the perception data from the sensors26, vehicle electronic control unit (ECU) systems, and outside-feeding environment information are received by the controller22, zone attention level assignments are made for each of the external entities in the vicinity of the vehicle, depending on the environmental data and the projected path or desired trajectory of the vehicle12, as shown at106. Behavior attention estimations for each of the external entities are performed at108for each zone, considering the relative actions of entities within the assigned zone with respect to the environmental conditions. A risk level analysis for each of the entities is performed at110, based on the assigned zone attention and behavior attention of each entity. The risk level analysis may reveal anomalies, such as unexpected objects, unexpected behavior of the entity, and/or urgent attention zones. If needed, at112, the attention and behavior zones of each entity are reassigned based on any detected anomalies. The analyzed and prioritized environmental attention and behavioral attention data generated in the cognitive situation awareness step104for each entity is stored as a hierarchical event structure at114. A hierarchical event structure124, according to an embodiment, is shown inFIG.4. The highest level of the event structure124includes header information130, an urgent attention zone132, a high attention zone134, a low attention zone136, and a no attention zone138. If anomalous objects or unusual activities occur in the environment surrounding the vehicle12, that object or activity entity is listed in the urgent attention zone132. Each of the zones are listed in order of decreasing priority, that is, entities classified in the urgent attention zone132receive the highest priority, entities classified in the high attention zone134receive the next highest priority, and so on. Entities within each zone are assigned a risk level. As shown inFIG.4, the entities140,142within the urgent attention zone132are assigned risk levels. The entities within each zone are ordered by risk level, with entities having the highest risk level ordered higher than entities having a lower risk level. Similarly, the entities144,146are classified within the high attention zone134and are assigned risk levels and ordered appropriately. Additionally, the entities148,150are classified within the low attention zone136and are ordered according to their assigned risk level. Environmental considerations, such as the known traffic pattern in the area enables selective storage of entities at or above a predetermined risk threshold. Entities that are classified in the no attention zone138are not stored to reduce computational storage requirements. 
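By way of illustration only, the hierarchical event structure described above can be modeled as a small container with one bucket per attention zone, in which entities below the risk threshold or in the no attention zone are simply not stored. The following Python sketch is not part of the disclosure; the class names, fields, and the risk_threshold parameter are assumptions introduced for the example.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Dict, List, Tuple


class Zone(IntEnum):
    URGENT = 0   # highest priority
    HIGH = 1
    LOW = 2
    NONE = 3     # entities here are not stored


@dataclass
class Entity:
    entity_id: str
    zone: Zone
    risk: float            # assigned risk level
    pose: Tuple[float, float]  # location relative to the host vehicle (assumed layout)


@dataclass
class HierarchicalEventStructure:
    header: Dict = field(default_factory=dict)
    zones: Dict[Zone, List[Entity]] = field(
        default_factory=lambda: {Zone.URGENT: [], Zone.HIGH: [], Zone.LOW: []})

    def add(self, entity: Entity, risk_threshold: float) -> None:
        # No-attention entities and entities below the risk threshold are not stored,
        # which keeps the structure compact.
        if entity.zone == Zone.NONE or entity.risk < risk_threshold:
            return
        bucket = self.zones[entity.zone]
        bucket.append(entity)
        # Within each zone, entities are ordered by decreasing risk level.
        bucket.sort(key=lambda e: e.risk, reverse=True)

    def prioritized(self) -> List[Entity]:
        # Zones are traversed in decreasing priority: urgent, then high, then low.
        out: List[Entity] = []
        for z in (Zone.URGENT, Zone.HIGH, Zone.LOW):
            out.extend(self.zones[z])
        return out
```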
Attention zones in the environment surrounding the vehicle12are determined using various factors including, for example and without limitation, the projected path of the vehicle12(for example, a left turn, a right turn, etc.), the traffic environment (a straight road, an intersection, the number of lanes, the position of the vehicle12on the roadway, etc.), and possible paths that could lead to an impact with an object or other vehicle, according to the road structure (such as areas directly in front of the vehicle12, a merging lane, etc.). Two examples of zone attention assignments for common autonomous driving scenarios are shown inFIGS.5and6.FIG.5illustrates the vehicle12approaching an intersection with the intent of making a right turn. The area of the intersection itself is classified as a high attention zone134. Other high attention zones134include the lanes of travel both ahead of and behind the vehicle12once the vehicle12has made the right turn in the intersection. Additionally, the lanes of travel in the opposite direction of the intended path of travel of the vehicle12is classified as a high attention zone134. Each of the areas inFIG.5that are classified as high attention zones134are areas that the vehicle12intends to enter during the projected path of travel and/or areas where other vehicles or pedestrians could interfere with the projected path of the vehicle12. Additionally, areas in which other vehicles have the right of way are also classified as high attention zones134. With continued reference toFIG.5, the area of the intersection directly opposite the vehicle12is designated as a low attention zone136. Areas that are classified as low attention zones136are areas that other vehicles or pedestrians may be present, but the probability that vehicles or pedestrians in these areas will interfere with the projected path of travel of the vehicle12is lower than in an area classified as a high attention zone134. As shown inFIG.5, two areas are classified as no attention zones138. The no attention zones138are areas in which other vehicles, objects, and/or pedestrians may be present, but are classified as not likely to interfere with the projected path of travel of the vehicle12, unless these other vehicles, objects, and/or pedestrians exhibit abnormal behaviors. Abnormal behaviors include, for example and without limitation, a vehicle leaving an expected lane of travel or a pedestrian crossing a street outside of a designated crossing area. Another example of zone attention assignments is shown inFIG.6. In this example, the vehicle12is traveling along a roadway having multiple lanes of travel in each direction. The areas directly in front of the vehicle12and the lane of travel in the opposite direction immediately to the left of the vehicle12are classified as high attention zones134. The areas behind the vehicle12and immediately to the right of the vehicle12are classified as low attention zones136. Finally, the lanes of travel going in the opposite direction of the vehicle12that are behind the vehicle12(that is, vehicles within these lanes of travel have already passed the vehicle12and the vehicle12is moving away from these vehicles) and opposite lanes of travel that are separated by at least one lane of travel from the vehicle12are classified as no attention zones138. 
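The zone assignments for a scenario such as the right turn of FIG. 5 can be thought of as a lookup from the intended maneuver and a coarse label for where a region lies relative to the projected path. The sketch below is purely illustrative: the relation labels and the specific zone types assigned to them are assumptions chosen to mirror the right-turn example, not values taken from the disclosure.

```python
# Illustrative zone-type lookup keyed on the host vehicle's intended maneuver and a
# coarse description of where a map cell lies relative to the projected path.
ZONE_RULES = {
    ("right_turn", "intersection"):       "high",  # area the vehicle will enter
    ("right_turn", "target_lane_ahead"):  "high",
    ("right_turn", "target_lane_behind"): "high",
    ("right_turn", "oncoming_adjacent"):  "high",  # traffic with the right of way
    ("right_turn", "intersection_far"):   "low",   # interference possible but unlikely
    ("right_turn", "cross_traffic_far"):  "none",  # unlikely to interfere absent anomalies
}


def assign_zone(maneuver: str, cell_relation: str) -> str:
    """Return the attention-zone type for one map cell; default to 'none'."""
    return ZONE_RULES.get((maneuver, cell_relation), "none")
```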
As discussed herein, vehicles, objects, and/or pedestrians within any zone, and in particular in the no attention zones138, can be classified as urgent attention zones or objects, if the sensors of the vehicle12detect unexpected behaviors that could interfere with the projected path of the vehicle12. Urgent attention zones are assigned when the risk value is estimated. In various embodiments, the attention zones shown inFIGS.5and6are based on the amalgamation of attention zone types assigned to cells or zone elements in the environment surrounding the vehicle12. These cells or zone elements are shown in the left panel ofFIG.7, with the merged attention zones illustrated in the right panel ofFIG.7. In various embodiments, the information used to allocate the zone assignments is obtained from two sources: a priori map data from the navigation system, such as the GPS of the vehicle12, and perception data from the sensors26of the vehicle12. As noted herein, the controller22completes accurate and robust environment-to-map correspondences and perception outputs via the various modules of the ADS24. Once zone attentions are assigned to the neighboring zone elements (such as other vehicles, objects, obstacles, pedestrians, etc.), the elements of the same zone attention levels are merged as shown in the right panel ofFIG.7. Each neighboring vehicle, object, or pedestrian, xi, in a zone has its own zone attention level value, LZA(xi), assigned by the corresponding attention zone. In one example, high attention zones assign a zone attention level value LZA(xi) = 0.8 to the external entities, including vehicles, objects, or pedestrians, within the high attention zone134, low attention zones assign a zone attention level value LZA(xi) = 0.4 to the vehicles, objects, or pedestrians within the low attention zone136, and no attention zones assign a zero zone attention level value to the vehicles, objects, or pedestrians within the no attention zone138. In another example, a zone attention level value is calculated as: LZA(xi) = Sxi(Z + α·C(xi)), where Z is the baseline zone attention level value {0, 0.4, 0.8} for the no, low, and high attention zones138,136,134, respectively; C(xi) is the computed complexity of the external entity, that is, the vehicle, object, or pedestrian; and Sxi is a sigmoid function. Each attention zone can contain multiple entities or agents, each of which has its own behavior attention level value assigned based on various factors. These factors include, but are not limited to, the entity's position in relation to the corresponding lane of the road or path of travel of the vehicle12, the velocity of the entity in relation to the desired speed of the vehicle12, and the heading angle of the entity in relation to the corresponding lane of the road or path of travel of the vehicle12. To obtain the behavior attention level value, these factors are combined using one of the following exemplary methods. In one method, the kinematic information of the entity is used. The behavior attention level value assigned to the entity depends on the relative location, velocity, and heading of the entity. The relativity is determined by the difference between the entity's actual behavior and the entity's expected behavior (that is, the differences between the entity's actual location, velocity, and heading and the entity's expected location, velocity, and heading). For an autonomous vehicle, such as the vehicle12, the actual path of the entity should align with the expected entity trajectory.
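A minimal sketch of the zone attention level computation, assuming a standard logistic function for the sigmoid Sxi and an assumed weighting coefficient α (the disclosure does not fix either), is shown below. The baseline values follow the {0, 0.4, 0.8} example given above.

```python
import math


def sigmoid(u: float) -> float:
    """Standard logistic function, used here as a stand-in for S_xi."""
    return 1.0 / (1.0 + math.exp(-u))


# Baseline zone attention level values Z from the example above.
BASELINE = {"none": 0.0, "low": 0.4, "high": 0.8}


def zone_attention_level(zone_type: str, complexity: float, alpha: float = 0.5) -> float:
    """LZA(xi) = S(Z + alpha * C(xi)).

    `complexity` stands in for C(xi), the computed complexity of the external
    entity, and `alpha` is an assumed weight; the exact scaling and offset of
    the sigmoid are not specified in the text, so this shows only the shape of
    the calculation.
    """
    z = BASELINE[zone_type]
    return sigmoid(z + alpha * complexity)
```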
In various embodiments, the behavior attention level value is obtained from the following equation: LBA(xi) = ƒBA1(pxi − pD, vxi − vD, hxi − hD), where (pxi, vxi, hxi) are the position, velocity, and heading angle of the corresponding entity xi, pD is the desired position of the entity within the road lane when the entity is not intending to make a lane change, vD is the desired velocity of the entity relative to the speed limit, and hD is the desired heading angle of the entity within the road lane when the entity is not intending to make a lane change. In various embodiments, the behavior attention level value is obtained as a weighted summation of sigmoid functions of each component, expressed as: LBA(xi) = α·Sp(pxi − pD) + β·Sv(vxi − vD) + (1 − α − β)·Sh(hxi − hD), where α and β are weights such that 0 ≤ α + β ≤ 1 and Sm(n) is the sigmoid function for the 'm' component, which converges beyond the minimum and maximum 'n' values. When an individual component deviation increases, LBA(xi) also increases, meaning the entity's attention level value is increased. In various embodiments, pre-trained information is used to obtain the behavior attention level value for each entity. Assume a certain entity x in the environment has m possible trained paths xi, where i = 1 . . . m indexes the entity's possible paths. An observation zt−k:t from time t−k to t can build a probability P(xi|zt−k:t) for each feasible trained path. In various embodiments, the probability is acquired by likelihood estimation. In normal situations, that is, situations in which the entity does not have any abnormal behavior or issues, at least one expected action should have a higher probability than a threshold probability for unexpected behavior, expressed as pth. In mathematical form, the following relation occurs: ∃xi∈X, P(xi|zt−k:t) > pth. Alternately, in anomalous situations, that is, situations in which the entity exhibits abnormal behavior or issues, none of the possible expected actions has a higher probability than pth, and the following relation occurs: ∀xi∈X, P(xi|zt−k:t) < pth. Therefore, the behavior attention level value is acquired as a function of each entity's expected motion probability and the anomaly threshold probability, as shown below: LBA(xi) = ƒBA2(P(xi|zt−k:t), pth). The function ƒBA2(⋅) can be defined as an inverse of the exponential of the expected motion probability as shown inFIG.8. As shown inFIG.8, if the probability becomes smaller than the threshold value, the behavioral attention level value becomes drastically larger (moving from right to left in the illustrated graph). In mathematical form, the behavior attention level value is calculated as: LBA(xi) = LBAmax if P(xi|zt−k:t) < pmin; LBA(xi) = LBAmax·e^(−α(P(xi|zt−k:t) − pmin)) if pmin < P(xi|zt−k:t) < pth; and LBA(xi) = LBAth·e^(−β(P(xi|zt−k:t) − pth)) if P(xi|zt−k:t) > pth, where LBAmax is the maximum behavior attention level value, LBAth is the behavior attention level value at pth, pmin is the probability at which LBAmax is reached, and α and β are coefficients for the exponents with α > β. Using this mathematical expression, the changing rate of LBA(xi) becomes higher when the probability is smaller than pth. After obtaining the zone attention level value, LZA, and the behavior attention level value, LBA, for each entity, the controller22estimates the risk value of the corresponding entity to the projected path or current location of the vehicle12.
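The two behavior attention formulations above can be sketched as follows. This is illustrative only: a standard logistic sigmoid is assumed for the Sm(⋅) functions, deviation magnitudes are used so that larger deviations always raise the attention level, and the default weights and exponent coefficients are assumed values rather than figures from the disclosure.

```python
import math


def _s(u: float) -> float:
    """Logistic sigmoid used as a stand-in for the S_m(.) functions."""
    return 1.0 / (1.0 + math.exp(-u))


def behavior_attention_kinematic(p, p_des, v, v_des, h, h_des,
                                 alpha: float = 0.4, beta: float = 0.3) -> float:
    """Weighted summation of sigmoids of the position, velocity, and heading
    deviations: LBA = a*Sp(p - pD) + b*Sv(v - vD) + (1 - a - b)*Sh(h - hD),
    with 0 <= a + b <= 1. Deviation magnitudes are used here (an interpretation)
    so that attention grows as the entity departs from its expected behavior."""
    return (alpha * _s(abs(p - p_des))
            + beta * _s(abs(v - v_des))
            + (1.0 - alpha - beta) * _s(abs(h - h_des)))


def behavior_attention_probabilistic(p_expected: float, p_th: float, p_min: float,
                                     lba_max: float = 1.0,
                                     a: float = 8.0, b: float = 2.0) -> float:
    """Piecewise form of f_BA2: attention grows sharply as the expected-motion
    probability drops below the anomaly threshold p_th (a > b)."""
    # Value at p_th, chosen so the two exponential branches meet continuously.
    lba_th = lba_max * math.exp(-a * (p_th - p_min))
    if p_expected < p_min:
        return lba_max
    if p_expected < p_th:
        return lba_max * math.exp(-a * (p_expected - p_min))
    return lba_th * math.exp(-b * (p_expected - p_th))
```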
In various embodiments, the risk value is estimated as: R(xi)=ƒR(LZA(xi),LBA(xi)) In some embodiments, R(xi) is the multiplication of the two attention level values: R(xi)=LZA(xi)·LBA(xi) Once the risk value for each entity is known, multiple entities within the same attention zone (such as the high attention zone134and the low attention zone136) can be ordered as shown inFIG.4. Only those entities with the risk values higher than a certain risk threshold, rth, are ordered in the hierarchical event structure124. Additionally, if R(xi)>rUA, where rUAis the risk threshold value for an urgent attention zone entity, the entity is added to an existing urgent attention zone132or an urgent attention zone132is created if it does not already exist in the hierarchical event structure124. Entities within the urgent attention zone132are given the highest priority, that is, the ADS24considers these entities most important when determining whether any changes should be made to the projected path of the vehicle12. Risk values for an entity that are greater than rUAare generally, if not always, caused by anomalous traffic situations. When a new event queue is generated for a similar event, the two events are compared in order of zone attention level (that is, the urgent attention zone level is compared, then the high attention zone level, followed by the low attention zone level). Within each zone, the corresponding entities are considered in order of risk values. One benefit of the method100to generate cognitive situation awareness using the hierarchical event structure124is a more efficient use of storage space. For most traffic situations, such as the two examples shown inFIGS.5and6, the number of meaningful entities, that is, entities having a risk value greater than the risk threshold, is approximately 10. The information for each entity, including attention zone type, location or pose relative to the vehicle12, and the risk value requires much less storage space than other methods for evaluating environmental conditions and potential interactions with the vehicle12. Using the cognitive situation awareness, as discussed herein, effectively reduces the amount of traffic situational information for efficient data processing by the controller. It should be emphasized that many variations and modifications may be made to the herein-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims. Moreover, any of the steps described herein can be performed simultaneously or in an order different from the steps as ordered herein. Moreover, as should be apparent, the features and attributes of the specific embodiments disclosed herein may be combined in different ways to form additional embodiments, all of which fall within the scope of the present disclosure. Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. 
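The product form of the risk value and the ordering of entities into the event structure can be illustrated with the short sketch below. The threshold names r_th and r_ua follow the text; the tuple layout for entities and the promotion of any entity exceeding r_ua into the urgent bucket are assumptions made for the example.

```python
from typing import Dict, Iterable, List, Tuple


def risk(l_za: float, l_ba: float) -> float:
    """R(xi) as the product of the zone and behavior attention level values."""
    return l_za * l_ba


def build_event_queue(entities: Iterable[Tuple[str, str, float, float]],
                      r_th: float, r_ua: float) -> Dict[str, List[Tuple[str, float]]]:
    """Order entities into urgent / high / low buckets by risk value.

    `entities` yields (entity_id, zone_type, l_za, l_ba) tuples; r_th is the
    minimum risk for an entity to be stored at all, and r_ua is the urgent
    attention threshold.
    """
    buckets: Dict[str, List[Tuple[str, float]]] = {"urgent": [], "high": [], "low": []}
    for entity_id, zone_type, l_za, l_ba in entities:
        r = risk(l_za, l_ba)
        if r < r_th:
            continue                                    # below threshold: not stored
        if r > r_ua:
            buckets["urgent"].append((entity_id, r))    # promoted to urgent attention
        elif zone_type in buckets:
            buckets[zone_type].append((entity_id, r))
    for bucket in buckets.values():
        bucket.sort(key=lambda item: item[1], reverse=True)  # highest risk first
    return buckets
```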
Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment. Moreover, the following terminology may have been used herein. The singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to an item includes reference to one or more items. The term “ones” refers to one, two, or more, and generally applies to the selection of some or all of a quantity. The term “plurality” refers to two or more of an item. The term “about” or “approximately” means that quantities, dimensions, sizes, formulations, parameters, shapes and other characteristics need not be exact, but may be approximated and/or larger or smaller, as desired, reflecting acceptable tolerances, conversion factors, rounding off, measurement error and the like and other factors known to those of skill in the art. The term “substantially” means that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide. A plurality of items may be presented in a common list for convenience. However, these lists should be construed as though each member of the list is individually identified as a separate and unique member. Thus, no individual member of such list should be construed as a de facto equivalent of any other member of the same list solely based on their presentation in a common group without indications to the contrary. Furthermore, where the terms “and” and “or” are used in conjunction with a list of items, they are to be interpreted broadly, in that any one or more of the listed items may be used alone or in combination with other listed items. The term “alternatively” refers to selection of one of two or more alternatives and is not intended to limit the selection to only those listed alternatives or to only one of the listed alternatives at a time, unless the context clearly indicates otherwise. The processes, methods, or algorithms disclosed herein can be deliverable to/implemented by a processing device, controller, or computer, which can include any existing programmable electronic control unit or dedicated electronic control unit. Similarly, the processes, methods, or algorithms can be stored as data and instructions executable by a controller or computer in many forms including, but not limited to, information permanently stored on non-writable storage media such as ROM devices and information alterably stored on writeable storage media such as floppy disks, magnetic tapes, CDs, RAM devices, and other magnetic and optical media. The processes, methods, or algorithms can also be implemented in a software executable object. 
Alternatively, the processes, methods, or algorithms can be embodied in whole or in part using suitable hardware components, such as Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), state machines, controllers or other hardware components or devices, or a combination of hardware, software and firmware components. Such example devices may be onboard as part of a vehicle computing system or be located off-board and conduct remote communication with devices on one or more vehicles. While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms encompassed by the claims. The words used in the specification are words of description rather than limitation, and it is understood that various changes can be made without departing from the spirit and scope of the disclosure. As previously described, the features of various embodiments can be combined to form further exemplary aspects of the present disclosure that may not be explicitly described or illustrated. While various embodiments could have been described as providing advantages or being preferred over other embodiments or prior art implementations with respect to one or more desired characteristics, those of ordinary skill in the art recognize that one or more features or characteristics can be compromised to achieve desired overall system attributes, which depend on the specific application and implementation. These attributes can include, but are not limited to cost, strength, durability, life cycle cost, marketability, appearance, packaging, size, serviceability, weight, manufacturability, ease of assembly, etc. As such, embodiments described as less desirable than other embodiments or prior art implementations with respect to one or more characteristics are not outside the scope of the disclosure and can be desirable for particular applications. | 49,996 |
11858508 | DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention. In the drawings, specific arrangements or orderings of schematic elements, such as those representing devices, modules, instruction blocks and data elements, are shown for ease of description. However, it should be understood by those skilled in the art that the specific ordering or arrangement of the schematic elements in the drawings is not meant to imply that a particular order or sequence of processing, or separation of processes, is required. Further, the inclusion of a schematic element in a drawing is not meant to imply that such element is required in all embodiments or that the features represented by such element may not be included in or combined with other elements in some embodiments. Further, in the drawings, where connecting elements, such as solid or dashed lines or arrows, are used to illustrate a connection, relationship, or association between or among two or more other schematic elements, the absence of any such connecting elements is not meant to imply that no connection, relationship, or association can exist. In other words, some connections, relationships, or associations between elements are not shown in the drawings so as not to obscure the disclosure. In addition, for ease of illustration, a single connecting element is used to represent multiple connections, relationships or associations between elements. For example, where a connecting element represents a communication of signals, data, or instructions, it should be understood by those skilled in the art that such element represents one or multiple signal paths (e.g., a bus), as may be needed, to affect the communication. Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. Several features are described hereafter that can each be used independently of one another or with any combination of other features. However, any individual feature may not address any of the problems discussed above or might only address one of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Although headings are provided, information related to a particular heading, but not found in the section having that heading, may also be found elsewhere in this description. Embodiments are described herein according to the following outline:1. General Overview2. System Overview3. Autonomous Vehicle Architecture4. Autonomous Vehicle Inputs5. Autonomous Vehicle Planning6. Autonomous Vehicle Control7. 
Trajectory Prediction from Precomputed or dynamically generated Probability Map8. Trajectory Prediction from a Lattice of Trajectories9. Multi-modal Trajectory Prediction General Overview This document describes different techniques for predicting how an agent (e.g., a vehicle, bicycle, pedestrian, etc.) will move in an environment based on movement of the surrounding objects. One technique involves generating a probability map. The system receives location data and past trajectory data for objects within a certain distance of the agent. Those objects could have been detected by that agent (e.g., if the agent is a vehicle the objects could have been detected by the sensors of the vehicle). The system determines a set of features from those objects and combines the features in the set with motion data of the agent (e.g., speed acceleration, yaw rate, etc.). The system then generates (e.g., using a neural network) a probability map from the concatenated data set. The probability map includes multiple physical locations (e.g., squares of one meter resolution) such that each physical location is assigned a probability of the agent traversing that physical location. Based on the probability map, the system generates one or more predicted trajectories for the agent. The prediction system can use a neural network that is trained prior to use. During training, the input can include the trajectory that the agent traveled. Although this technique is described as making trajectory prediction for a single agent, the system is able to predict trajectories for multiple or all agents in a particular input set (e.g., a set of location data and past trajectory data). Another technique involves generating a trajectory lattice. The system receives location data and past trajectory data for objects within a certain distance of the agent, determines a set of features from those objects, and combines the features in the set with motion data of an agent (e.g., speed, acceleration, yaw rate, etc.). In an embodiment, the past trajectory data can include past map data including traffic signal data, turn signal data, estimates of attentiveness, brake light indications, agent types, and other suitable past trajectory data. The system then generates (e.g., using a neural network) a trajectory lattice from the concatenated data set. The trajectory lattice includes multiple possible trajectories for the agent and corresponding probabilities. Based on the trajectory lattice, the system generates one or more predicted trajectories for the agent. Although this technique is described as making trajectory prediction for a single agent, the system is able to predict trajectories for multiple or all agents in a particular input set (e.g., a set of location data and past trajectory data). A different technique involves training a classifier (e.g., a neural network) for a multi-modal prediction method. The system receives location data and past trajectory data for objects within a certain distance of the agent, determines a set of features from those objects, and combines those features with motion data of an agent (e.g., speed, acceleration, yaw rate, etc.), as in the above two techniques. However, in this instance the received data is training data that also includes the trajectory that the agent actually traveled which is sometimes referred to as the ground truth. 
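A minimal sketch of the probability-map technique described above is given below, assuming a PyTorch model. The flattened feature layout (surrounding-object features concatenated with the agent's speed, acceleration, and yaw rate), the grid dimensions, and the network shape are assumptions for illustration; they are not taken from the disclosure.

```python
import torch
import torch.nn as nn


class ProbabilityMapNet(nn.Module):
    """Maps concatenated object features and agent motion data to a grid of
    traversal probabilities (one cell per physical location, e.g., 1 m squares)."""

    def __init__(self, feature_dim: int, grid_h: int = 50, grid_w: int = 50):
        super().__init__()
        self.grid_h, self.grid_w = grid_h, grid_w
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, grid_h * grid_w),
        )

    def forward(self, object_features: torch.Tensor, agent_motion: torch.Tensor):
        # object_features: (B, feature_dim); agent_motion: (B, 3) = speed, accel, yaw rate.
        x = torch.cat([object_features, agent_motion], dim=-1)
        logits = self.mlp(x)
        # Each grid cell receives a probability that the agent will traverse it.
        probs = torch.softmax(logits, dim=-1)
        return probs.view(-1, self.grid_h, self.grid_w)


def most_likely_cells(prob_map: torch.Tensor, k: int = 10):
    """Return the k highest-probability cells of one map as (row, col) indices,
    a simple way to seed predicted trajectories from the probability map."""
    flat = prob_map.flatten()
    idx = torch.topk(flat, k).indices
    width = prob_map.shape[1]
    return [(int(i // width), int(i % width)) for i in idx]
```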
The system then generates (e.g., using a neural network) multiple predicted trajectories with each trajectory having a corresponding probability and calculates an angle between each predicted trajectory and a trajectory that the agent has traveled. The prediction system makes a determination of whether any of the angles are within a threshold (e.g., a threshold angle). Based on determining that none of the angles are within the threshold, the system selects a best trajectory (sometimes referred to as the best mode) using a function. In an embodiment, instead of using angles in this calculation, the system can use a different metric. Thus, the system can calculate, using a metric, a value between each predicted trajectory and a trajectory that the agent has traveled to determine whether each value is within a threshold. For example, the function can cause a random trajectory to be selected (e.g., using a random number generator). In an embodiment, a function can use one or more templates (e.g., template trajectories) for selecting the best trajectory. The templates can be static or dynamically generated based on a current state of the agent (e.g., speed, acceleration, yaw rate, or other suitable state component). The system computes a difference between the best trajectory and the trajectory the agent (e.g., using a multi-modal loss function) and adjusts weights of a model based on the difference (e.g., by minimizing loss over the training data). This process can be repeated for a training set (e.g., thousands of instances of location data and past trajectory data) to develop a model (e.g., a neural network) to predict multiple trajectories for an agent. Although this technique is described as making trajectory prediction for a single agent, the system is able to predict trajectories for multiple or all agents in a particular input set (e.g., a set of location data and past trajectory data). Some of the advantages of these techniques include the ability to predict movement of an agent (e.g., a vehicle, a bicycle, or a pedestrian) and perform motion planning based on that movement. Thus, these techniques make autonomous vehicles safer and more efficient at navigation. System Overview FIG.1shows an example of an autonomous vehicle100having autonomous capability. As used herein, the term “autonomous capability” refers to a function, feature, or facility that enables a vehicle to be partially or fully operated without real-time human intervention, including without limitation fully autonomous vehicles, highly autonomous vehicles, and conditionally autonomous vehicles. As used herein, an autonomous vehicle (AV) is a vehicle that possesses autonomous capability. As used herein, “vehicle” includes means of transportation of goods or people. For example, cars, buses, trains, airplanes, drones, trucks, boats, ships, submersibles, dirigibles, etc. A driverless car is an example of a vehicle. As used herein, “trajectory” refers to a path or route to navigate an AV from a first spatiotemporal location to second spatiotemporal location. In an embodiment, the first spatiotemporal location is referred to as the initial or starting location and the second spatiotemporal location is referred to as the destination, final location, goal, goal position, or goal location. In some examples, a trajectory is made up of one or more segments (e.g., sections of road) and each segment is made up of one or more blocks (e.g., portions of a lane or intersection). 
In an embodiment, the spatiotemporal locations correspond to real world locations. For example, the spatiotemporal locations are pick up or drop-off locations to pick up or drop-off persons or goods. As used herein, “sensor(s)” includes one or more hardware components that detect information about the environment surrounding the sensor. Some of the hardware components can include sensing components (e.g., image sensors, biometric sensors), transmitting and/or receiving components (e.g., laser or radio frequency wave transmitters and receivers), electronic components such as analog-to-digital converters, a data storage device (such as a RAM and/or a nonvolatile storage), software or firmware components and data processing components such as an ASIC (application-specific integrated circuit), a microprocessor and/or a microcontroller. As used herein, a “scene description” is a data structure (e.g., list) or data stream that includes one or more classified or labeled objects detected by one or more sensors on the AV vehicle or provided by a source external to the AV. As used herein, a “road” is a physical area that can be traversed by a vehicle, and may correspond to a named thoroughfare (e.g., city street, interstate freeway, etc.) or may correspond to an unnamed thoroughfare (e.g., a driveway in a house or office building, a section of a parking lot, a section of a vacant lot, a dirt path in a rural area, etc.). Because some vehicles (e.g., 4-wheel-drive pickup trucks, sport utility vehicles, etc.) are capable of traversing a variety of physical areas not specifically adapted for vehicle travel, a “road” may be a physical area not formally defined as a thoroughfare by any municipality or other governmental or administrative body. As used herein, a “lane” is a portion of a road that can be traversed by a vehicle. A lane is sometimes identified based on lane markings. For example, a lane may correspond to most or all of the space between lane markings, or may correspond to only some (e.g., less than 50%) of the space between lane markings. For example, a road having lane markings spaced far apart might accommodate two or more vehicles between the markings, such that one vehicle can pass the other without traversing the lane markings, and thus could be interpreted as having a lane narrower than the space between the lane markings, or having two lanes between the lane markings. A lane could also be interpreted in the absence of lane markings. For example, a lane may be defined based on physical features of an environment, e.g., rocks and trees along a thoroughfare in a rural area or, e.g., natural obstructions to be avoided in an undeveloped area. A lane could also be interpreted independent of lane markings or physical features. For example, a lane could be interpreted based on an arbitrary path free of obstructions in an area that otherwise lacks features that would be interpreted as lane boundaries. In an example scenario, an AV could interpret a lane through an obstruction-free portion of a field or empty lot. In another example scenario, an AV could interpret a lane through a wide (e.g., wide enough for two or more lanes) road that does not have lane markings. In this scenario, the AV could communicate information about the lane to other AVs so that the other AVs can use the same lane information to coordinate path planning among themselves. 
The term “over-the-air (OTA) client” includes any AV, or any electronic device (e.g., computer, controller, IoT device, electronic control unit (ECU)) that is embedded in, coupled to, or in communication with an AV. The term “over-the-air (OTA) update” means any update, change, deletion or addition to software, firmware, data or configuration settings, or any combination thereof, that is delivered to an OTA client using proprietary and/or standardized wireless communications technology, including but not limited to: cellular mobile communications (e.g., 2G, 3G, 4G, 5G), radio wireless area networks (e.g., WiFi) and/or satellite Internet. The term “edge node” means one or more edge devices coupled to a network that provide a portal for communication with AVs and can communicate with other edge nodes and a cloud based computing platform, for scheduling and delivering OTA updates to OTA clients. The term “edge device” means a device that implements an edge node and provides a physical wireless access point (AP) into enterprise or service provider (e.g., VERIZON, AT&T) core networks. Examples of edge devices include but are not limited to: computers, controllers, transmitters, routers, routing switches, integrated access devices (IADs), multiplexers, metropolitan area network (MAN) and wide area network (WAN) access devices. “One or more” includes a function being performed by one element, a function being performed by more than one element, e.g., in a distributed fashion, several functions being performed by one element, several functions being performed by several elements, or any combination of the above. It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact. The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this description, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. 
Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context. As used herein, an AV system refers to the AV along with the array of hardware, software, stored data, and data generated in real-time that supports the operation of the AV. In an embodiment, the AV system is incorporated within the AV. In an embodiment, the AV system is spread across several locations. For example, some of the software of the AV system is implemented on a cloud computing environment similar to cloud computing environment300described below with respect toFIG.3. In general, this document describes technologies applicable to any vehicles that have one or more autonomous capabilities including fully autonomous vehicles, highly autonomous vehicles, and conditionally autonomous vehicles, such as so-called Level 5, Level 4 and Level 3 vehicles, respectively (see SAE International's standard J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems, which is incorporated by reference in its entirety, for more details on the classification of levels of autonomy in vehicles). The technologies described in this document are also applicable to partially autonomous vehicles and driver assisted vehicles, such as so-called Level 2 and Level 1 vehicles (see SAE International's standard J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems). In an embodiment, one or more of the Level 1, 2, 3, 4 and 5 vehicle systems may automate certain vehicle operations (e.g., steering, braking, and using maps) under certain operating conditions based on processing of sensor inputs. The technologies described in this document can benefit vehicles in any levels, ranging from fully autonomous vehicles to human-operated vehicles. Autonomous vehicles have advantages over vehicles that require a human driver. One advantage is safety. For example, in 2016, the United States experienced 6 million automobile accidents, 2.4 million injuries, 40,000 fatalities, and 13 million vehicles in crashes, estimated at a societal cost of $910+ billion. U.S. traffic fatalities per 100 million miles traveled have been reduced from about six to about one from 1965 to 2015, in part due to additional safety measures deployed in vehicles. For example, an additional half second of warning that a crash is about to occur is believed to mitigate 60% of front-to-rear crashes. However, passive safety features (e.g., seat belts, airbags) have likely reached their limit in improving this number. Thus, active safety measures, such as automated control of a vehicle, are the likely next step in improving these statistics. Because human drivers are believed to be responsible for a critical pre-crash event in 95% of crashes, automated driving systems are likely to achieve better safety outcomes, e.g., by reliably recognizing and avoiding critical situations better than humans; making better decisions, obeying traffic laws, and predicting future events better than humans; and reliably controlling a vehicle better than a human. 
Referring toFIG.1, an AV system120operates the AV100along a trajectory198through an environment190to a destination199(sometimes referred to as a final location) while avoiding objects (e.g., natural obstructions191, vehicles193, pedestrians192, cyclists, and other obstacles) and obeying rules of the road (e.g., rules of operation or driving preferences). In an embodiment, the AV system120includes devices101that are instrumented to receive and act on operational commands from the computer processors146. We use the term “operational command” to mean an executable instruction (or set of instructions) that causes a vehicle to perform an action (e.g., a driving maneuver). Operational commands can, without limitation, including instructions for a vehicle to start moving forward, stop moving forward, start moving backward, stop moving backward, accelerate, decelerate, perform a left turn, and perform a right turn. In an embodiment, computing processors146are similar to the processor304described below in reference toFIG.3. Examples of devices101include a steering control102, brakes103, gears, accelerator pedal or other acceleration control mechanisms, windshield wipers, side-door locks, window controls, and turn-indicators. In an embodiment, the AV system120includes sensors121for measuring or inferring properties of state or condition of the AV100, such as the AV's position, linear and angular velocity and acceleration, and heading (e.g., an orientation of the leading end of AV100). Example of sensors121are GPS, inertial measurement units (IMU) that measure both vehicle linear accelerations and angular rates, wheel speed sensors for measuring or estimating wheel slip ratios, wheel brake pressure or braking torque sensors, engine torque or wheel torque sensors, and steering angle and angular rate sensors. In an embodiment, the sensors121also include sensors for sensing or measuring properties of the AV's environment. For example, monocular or stereo video cameras122in the visible light, infrared or thermal (or both) spectra, LiDAR123, RADAR, ultrasonic sensors, time-of-flight (TOF) depth sensors, speed sensors, temperature sensors, humidity sensors, and precipitation sensors. In an embodiment, the AV system120includes a data storage unit142and memory144for storing machine instructions associated with computer processors146or data collected by sensors121. In an embodiment, the data storage unit142is similar to the ROM308or storage device310described below in relation toFIG.3. In an embodiment, memory144is similar to the main memory306described below. In an embodiment, the data storage unit142and memory144store historical, real-time, and/or predictive information about the environment190. In an embodiment, the stored information includes maps, driving performance, traffic congestion updates or weather conditions. In an embodiment, data relating to the environment190is transmitted to the AV100via a communications channel from a remotely located database134. In an embodiment, the AV system120includes communications devices140for communicating measured or inferred properties of other vehicles' states and conditions, such as positions, linear and angular velocities, linear and angular accelerations, and linear and angular headings to the AV100. These devices include Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication devices and devices for wireless communications over point-to-point or ad hoc networks or both. 
In an embodiment, the communications devices140communicate across the electromagnetic spectrum (including radio and optical communications) or other media (e.g., air and acoustic media). A combination of Vehicle-to-Vehicle (V2V) Vehicle-to-Infrastructure (V2I) communication (and, in some embodiments, one or more other types of communication) is sometimes referred to as Vehicle-to-Everything (V2X) communication. V2X communication typically conforms to one or more communications standards for communication with, between, and among autonomous vehicles. In an embodiment, the communication devices140include communication interfaces. For example, wired, wireless, WiMAX, Wi-Fi, Bluetooth, satellite, cellular, optical, near field, infrared, or radio interfaces. The communication interfaces transmit data from a remotely located database134to AV system120. In an embodiment, the remotely located database134is embedded in a cloud computing environment200as described inFIG.2. The communication interfaces140transmit data collected from sensors121or other data related to the operation of AV100to the remotely located database134. In an embodiment, communication interfaces140transmit information that relates to teleoperations to the AV100. In some embodiments, the AV100communicates with other remote (e.g., “cloud”) servers136. In an embodiment, the remotely located database134also stores and transmits digital data (e.g., storing data such as road and street locations). Such data is stored on the memory144on the AV100, or transmitted to the AV100via a communications channel from the remotely located database134. In an embodiment, the remotely located database134stores and transmits historical information about driving properties (e.g., speed and acceleration profiles) of vehicles that have previously traveled along trajectory198at similar times of day. In an embodiment, such data may be stored on the memory144on the AV100, or transmitted to the AV100via a communications channel from the remotely located database134. Computing devices146located on the AV100algorithmically generate control actions based on both real-time sensor data and prior information, allowing the AV system120to execute its autonomous driving capabilities. In an embodiment, the AV system120includes computer peripherals132coupled to computing devices146for providing information and alerts to, and receiving input from, a user (e.g., an occupant or a remote user) of the AV100. In an embodiment, peripherals132are similar to the display312, input device314, and cursor controller316discussed below in reference toFIG.3. The coupling is wireless or wired. Any two or more of the interface devices may be integrated into a single device. In an embodiment, the AV system120receives and enforces a privacy level of a passenger, e.g., specified by the passenger or stored in a profile associated with the passenger. The privacy level of the passenger determines how particular information associated with the passenger (e.g., passenger comfort data, biometric data, etc.) is permitted to be used, stored in the passenger profile, and/or stored on the cloud server136and associated with the passenger profile. In an embodiment, the privacy level specifies particular information associated with a passenger that is deleted once the ride is completed. In an embodiment, the privacy level specifies particular information associated with a passenger and identifies one or more entities that are authorized to access the information. 
Examples of specified entities that are authorized to access information can include other AVs, third party AV systems, or any entity that could potentially access the information. A privacy level of a passenger can be specified at one or more levels of granularity. In an embodiment, a privacy level identifies specific information to be stored or shared. In an embodiment, the privacy level applies to all the information associated with the passenger such that the passenger can specify that none of her personal information is stored or shared. Specification of the entities that are permitted to access particular information can also be specified at various levels of granularity. Various sets of entities that are permitted to access particular information can include, for example, other AVs, cloud servers136, specific third party AV systems, etc. In an embodiment, the AV system120or the cloud server136determines if certain information associated with a passenger can be accessed by the AV100or another entity. For example, a third-party AV system that attempts to access passenger input related to a particular spatiotemporal location must obtain authorization, e.g., from the AV system120or the cloud server136, to access the information associated with the passenger. For example, the AV system120uses the passenger's specified privacy level to determine whether the passenger input related to the spatiotemporal location can be presented to the third-party AV system, the AV100, or to another AV. This enables the passenger's privacy level to specify which other entities are allowed to receive data about the passenger's actions or other data associated with the passenger. FIG.2illustrates an example “cloud” computing environment. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services). In typical cloud computing systems, one or more large cloud data centers house the machines used to deliver the services provided by the cloud. Referring now toFIG.2, the cloud computing environment200includes cloud data centers204a,204b, and204cthat are interconnected through the cloud202. Data centers204a,204b, and204cprovide cloud computing services to computer systems206a,206b,206c,206d,206e, and206fconnected to cloud202. The cloud computing environment200includes one or more cloud data centers. In general, a cloud data center, for example the cloud data center204ashown inFIG.2, refers to the physical arrangement of servers that make up a cloud, for example the cloud202shown inFIG.2, or a particular portion of a cloud. For example, servers are physically arranged in the cloud datacenter into rooms, groups, rows, and racks. A cloud datacenter has one or more zones, which include one or more rooms of servers. Each room has one or more rows of servers, and each row includes one or more racks. Each rack includes one or more individual server nodes. In an embodiment, servers in zones, rooms, racks, and/or rows are arranged into groups based on physical infrastructure requirements of the datacenter facility, which include power, energy, thermal, heat, and/or other requirements. In an embodiment, the server nodes are similar to the computer system described inFIG.3. The data center204ahas many computing systems distributed through many racks. 
The cloud202includes cloud data centers204a,204b, and204calong with the network and networking resources (for example, networking equipment, nodes, routers, switches, and networking cables) that interconnect the cloud data centers204a,204b, and204cand help facilitate the computing systems'206a-faccess to cloud computing services. In an embodiment, the network represents any combination of one or more local networks, wide area networks, or internetworks coupled using wired or wireless links deployed using terrestrial or satellite connections. Data exchanged over the network, is transferred using any number of network layer protocols, such as Internet Protocol (IP), Multiprotocol Label Switching (MPLS), Asynchronous Transfer Mode (ATM), Frame Relay, etc. Furthermore, in embodiments where the network represents a combination of multiple sub-networks, different network layer protocols are used at each of the underlying sub-networks. In some embodiments, the network represents one or more interconnected internetworks, such as the public Internet. The computing systems206a-for cloud computing services consumers are connected to the cloud202through network links and network adapters. In an embodiment, the computing systems206a-fare implemented as various computing devices, for example servers, desktops, laptops, tablet, smartphones, Internet of Things (IoT) devices, autonomous vehicles (including, cars, drones, shuttles, trains, buses, etc.) and consumer electronics. In an embodiment, the computing systems206a-fare implemented in or as a part of other systems. FIG.3illustrates a computer system300. In an embodiment, the computer system300is a special purpose computing device. The special-purpose computing device is hard-wired to perform the techniques or includes digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. In various embodiments, the special-purpose computing devices are desktop computer systems, portable computer systems, handheld devices, network devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. In an embodiment, the computer system300includes a bus302or other communication mechanism for communicating information, and a hardware processor304coupled with a bus302for processing information. The hardware processor304is, for example, a general-purpose microprocessor. The computer system300also includes a main memory306, such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus302for storing information and instructions to be executed by processor304. In an embodiment, the main memory306is used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor304. Such instructions, when stored in non-transitory storage media accessible to the processor304, render the computer system300into a special-purpose machine that is customized to perform the operations specified in the instructions. 
In an embodiment, the computer system300further includes a read only memory (ROM)308or other static storage device coupled to the bus302for storing static information and instructions for the processor304. A storage device310, such as a magnetic disk, optical disk, solid-state drive, or three-dimensional cross point memory is provided and coupled to the bus302for storing information and instructions. In an embodiment, the computer system300is coupled via the bus302to a display312, such as a cathode ray tube (CRT), a liquid crystal display (LCD), plasma display, light emitting diode (LED) display, or an organic light emitting diode (OLED) display for displaying information to a computer user. An input device314, including alphanumeric and other keys, is coupled to bus302for communicating information and command selections to the processor304. Another type of user input device is a cursor controller316, such as a mouse, a trackball, a touch-enabled display, or cursor direction keys for communicating direction information and command selections to the processor304and for controlling cursor movement on the display312. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x-axis) and a second axis (e.g., y-axis), that allows the device to specify positions in a plane. According to one embodiment, the techniques herein are performed by the computer system300in response to the processor304executing one or more sequences of one or more instructions contained in the main memory306. Such instructions are read into the main memory306from another storage medium, such as the storage device310. Execution of the sequences of instructions contained in the main memory306causes the processor304to perform the process steps described herein. In alternative embodiments, hard-wired circuitry is used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media includes non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, solid-state drives, or three-dimensional cross point memory, such as the storage device310. Volatile media includes dynamic memory, such as the main memory306. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, NV-RAM, or any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise the bus302. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infrared data communications. In an embodiment, various forms of media are involved in carrying one or more sequences of one or more instructions to the processor304for execution. For example, the instructions are initially carried on a magnetic disk or solid-state drive of a remote computer. 
The remote computer loads the instructions into its dynamic memory and sends the instructions over a telephone line using a modem. A modem local to the computer system300receives the data on the telephone line and uses an infrared transmitter to convert the data to an infrared signal. An infrared detector receives the data carried in the infrared signal and appropriate circuitry places the data on the bus302. The bus302carries the data to the main memory306, from which processor304retrieves and executes the instructions. The instructions received by the main memory306may optionally be stored on the storage device310either before or after execution by processor304. The computer system300also includes a communication interface318coupled to the bus302. The communication interface318provides a two-way data communication coupling to a network link320that is connected to a local network322. For example, the communication interface318is an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface318is a local area network (LAN) card to provide a data communication connection to a compatible LAN. In an embodiment, wireless links are also implemented. In any such implementation, the communication interface318sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. The network link320typically provides data communication through one or more networks to other data devices. For example, the network link320provides a connection through the local network322to a host computer324or to a cloud data center or equipment operated by an Internet Service Provider (ISP)326. The ISP326in turn provides data communication services through the world-wide packet data communication network now commonly referred to as the "Internet"328. The local network322and Internet328both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on the network link320and through the communication interface318, which carry the digital data to and from the computer system300, are example forms of transmission media. In an embodiment, the network320contains the cloud202or a part of the cloud202described above. The computer system300sends messages and receives data, including program code, through the network(s), the network link320, and the communication interface318. In an embodiment, the computer system300receives code for processing. The received code is executed by the processor304as it is received, and/or stored in storage device310, or other non-volatile storage for later execution. Autonomous Vehicle Architecture FIG.4shows an example architecture400for an autonomous vehicle (e.g., the AV100shown inFIG.1). The architecture400includes a perception module402(sometimes referred to as a perception circuit), a planning module404(sometimes referred to as a planning circuit), a control module406(sometimes referred to as a control circuit), a localization module408(sometimes referred to as a localization circuit), and a database module410(sometimes referred to as a database circuit). Each module plays a role in the operation of the AV100. Together, the modules402,404,406,408, and410may be part of the AV system120shown inFIG.1.
In some embodiments, any of the modules402,404,406,408, and410is a combination of computer software (e.g., executable code stored on a computer-readable medium) and computer hardware (e.g., one or more microprocessors, microcontrollers, application-specific integrated circuits [ASICs]), hardware memory devices, other types of integrated circuits, other types of computer hardware, or a combination of any or all of these things). Each of the modules402,404,406,408, and410is sometimes referred to as a processing circuit (e.g., computer hardware, computer software, or a combination of the two). A combination of any or all of the modules402,404,406,408, and410is also an example of a processing circuit. In use, the planning module404receives data representing a destination412and determines data representing a trajectory414(sometimes referred to as a route) that can be traveled by the AV100to reach (e.g., arrive at) the destination412. In order for the planning module404to determine the data representing the trajectory414, the planning module404receives data from the perception module402, the localization module408, and the database module410. The perception module402identifies nearby physical objects using one or more sensors121, e.g., as also shown inFIG.1. The objects are classified (e.g., grouped into types such as pedestrian, bicycle, automobile, traffic sign, etc.) and a scene description including the classified objects416is provided to the planning module404. The planning module404also receives data representing the AV position418from the localization module408. The localization module408determines the AV position by using data from the sensors121and data from the database module410(e.g., a geographic data) to calculate a position. For example, the localization module408uses data from a GNSS (Global Navigation Satellite System) sensor and geographic data to calculate a longitude and latitude of the AV. In an embodiment, data used by the localization module408includes high-precision maps of the roadway geometric properties, maps describing road network connectivity properties, maps describing roadway physical properties (such as traffic speed, traffic volume, the number of vehicular and cyclist traffic lanes, lane width, lane traffic directions, or lane marker types and locations, or combinations of them), and maps describing the spatial locations of road features such as crosswalks, traffic signs or other travel signals of various types. In an embodiment, the high-precision maps are constructed by adding data through automatic or manual annotation to low-precision maps. The control module406receives the data representing the trajectory414and the data representing the AV position418and operates the control functions420a-c(e.g., steering, throttling, braking, and/or ignition) of the AV in a manner that will cause the AV100to travel the trajectory414to the destination412. For example, if the trajectory414includes a left turn, the control module406will operate the control functions420a-cin a manner such that the steering angle of the steering function will cause the AV100to turn left and the throttling and braking will cause the AV100to pause and wait for passing pedestrians or vehicles before the turn is made. Autonomous Vehicle Inputs FIG.5shows an example of inputs502a-d(e.g., sensors121shown inFIG.1) and outputs504a-d(e.g., sensor data) that is used by the perception module402(FIG.4). One input502ais a LiDAR (Light Detection and Ranging) system (e.g., LiDAR123shown inFIG.1). 
LiDAR is a technology that uses light (e.g., bursts of light such as infrared light) to obtain data about physical objects in its line of sight. A LiDAR system produces LiDAR data as output504a. For example, LiDAR data is collections of 3D or 2D points (also known as point clouds) that are used to construct a representation of the environment190. Another input502bis a RADAR system. RADAR is a technology that uses radio waves to obtain data about nearby physical objects. RADARs can obtain data about objects not within the line of sight of a LiDAR system. A RADAR system502bproduces RADAR data as output504b. For example, RADAR data are one or more radio frequency electromagnetic signals that are used to construct a representation of the environment190. Another input502cis a camera system. A camera system uses one or more cameras (e.g., digital cameras using a light sensor such as a charge-coupled device [CCD]) to obtain information about nearby physical objects. A camera system produces camera data as output504c. Camera data often takes the form of image data (e.g., data in an image data format such as RAW, JPEG, PNG, etc.). In some examples, the camera system has multiple independent cameras, e.g., for the purpose of stereopsis (stereo vision), which enables the camera system to perceive depth. Although the objects perceived by the camera system are described here as "nearby," this is relative to the AV. In use, the camera system may be configured to "see" objects far away, e.g., up to a kilometer or more ahead of the AV. Accordingly, the camera system may have features such as sensors and lenses that are optimized for perceiving objects that are far away. Another input502dis a traffic light detection (TLD) system. A TLD system uses one or more cameras to obtain information about traffic lights, street signs, and other physical objects that provide visual navigation information. A TLD system produces TLD data as output504d. TLD data often takes the form of image data (e.g., data in an image data format such as RAW, JPEG, PNG, etc.). A TLD system differs from a system incorporating a camera in that a TLD system uses a camera with a wide field of view (e.g., using a wide-angle lens or a fish-eye lens) in order to obtain information about as many physical objects providing visual navigation information as possible, so that the AV100has access to all relevant navigation information provided by these objects. For example, the viewing angle of the TLD system may be about 120 degrees or more. In some embodiments, outputs504a-dare combined using a sensor fusion technique. Thus, either the individual outputs504a-dare provided to other systems of the AV100(e.g., provided to a planning module404as shown inFIG.4), or the combined output can be provided to the other systems, either in the form of a single combined output or multiple combined outputs of the same type (e.g., using the same combination technique or combining the same outputs or both) or different types (e.g., using different respective combination techniques or combining different respective outputs or both). In some embodiments, an early fusion technique is used. An early fusion technique is characterized by combining outputs before one or more data processing steps are applied to the combined output. In some embodiments, a late fusion technique is used. A late fusion technique is characterized by combining outputs after one or more data processing steps are applied to the individual outputs.
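By way of a non-limiting sketch, the distinction between early and late fusion described above can be illustrated in Python. The process and combine helpers, and the use of simple NumPy arrays as stand-ins for the outputs504a-d, are assumptions introduced only for this sketch and are not part of the described AV system.

```python
import numpy as np

def combine(outputs):
    # Stand-in for a sensor-fusion step: stack per-sensor outputs into one array.
    return np.concatenate(outputs, axis=-1)

def process(data):
    # Stand-in for a data processing step (e.g., filtering or feature extraction).
    return data - data.mean(axis=0, keepdims=True)

def early_fusion(sensor_outputs):
    # Early fusion: combine the outputs first, then apply processing to the combined output.
    return process(combine(sensor_outputs))

def late_fusion(sensor_outputs):
    # Late fusion: process each individual output first, then combine the processed results.
    return combine([process(o) for o in sensor_outputs])

# Hypothetical example: three sensors each reporting 5 detections with 4 attributes.
outputs = [np.random.rand(5, 4) for _ in range(3)]
fused_early = early_fusion(outputs)  # processing applied after combining
fused_late = late_fusion(outputs)    # processing applied before combining
```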
FIG.6shows an example of a LiDAR system602(e.g., the input502ashown inFIG.5). The LiDAR system602emits light604a-cfrom a light emitter606(e.g., a laser transmitter). Light emitted by a LiDAR system is typically not in the visible spectrum; for example, infrared light is often used. Some of the light604bemitted encounters a physical object608(e.g., a vehicle) and reflects back to the LiDAR system602. (Light emitted from a LiDAR system typically does not penetrate physical objects, e.g., physical objects in solid form.) The LiDAR system602also has one or more light detectors610, which detect the reflected light. In an embodiment, one or more data processing systems associated with the LiDAR system generates an image612representing the field of view614of the LiDAR system. The image612includes information that represents the boundaries616of a physical object608. In this way, the image612is used to determine the boundaries616of one or more physical objects near an AV. FIG.7shows the LiDAR system602in operation. In the scenario shown in this figure, the AV100receives both camera system output504cin the form of an image702and LiDAR system output504ain the form of LiDAR data points704. In use, the data processing systems of the AV100compares the image702to the data points704. In particular, a physical object706identified in the image702is also identified among the data points704. In this way, the AV100perceives the boundaries of the physical object based on the contour and density of the data points704. FIG.8shows the operation of the LiDAR system602in additional detail. As described above, the AV100detects the boundary of a physical object based on characteristics of the data points detected by the LiDAR system602. As shown inFIG.8, a flat object, such as the ground802, will reflect light804a-demitted from a LiDAR system602in a consistent manner. Put another way, because the LiDAR system602emits light using consistent spacing, the ground802will reflect light back to the LiDAR system602with the same consistent spacing. As the AV100travels over the ground802, the LiDAR system602will continue to detect light reflected by the next valid ground point806if nothing is obstructing the road. However, if an object808obstructs the road, light804e-femitted by the LiDAR system602will be reflected from points810a-bin a manner inconsistent with the expected consistent manner. From this information, the AV100can determine that the object808is present. Path Planning FIG.9shows a block diagram900of the relationships between inputs and outputs of a planning module404(e.g., as shown inFIG.4). In general, the output of a planning module404is a route902from a start point904(e.g., source location or initial location), and an end point906(e.g., destination or final location). The route902is typically defined by one or more segments. For example, a segment is a distance to be traveled over at least a portion of a street, road, highway, driveway, or other physical area appropriate for automobile travel. In some examples, e.g., if the AV100is an off-road capable vehicle such as a four-wheel-drive (4WD) or all-wheel-drive (AWD) car, SUV, pick-up truck, or the like, the route902includes “off-road” segments such as unpaved paths or open fields. In addition to the route902, a planning module also outputs lane-level route planning data908. The lane-level route planning data908is used to traverse segments of the route902based on conditions of the segment at a particular time. 
For example, if the route902includes a multi-lane highway, the lane-level route planning data908includes trajectory planning data910that the AV100can use to choose a lane among the multiple lanes, e.g., based on whether an exit is approaching, whether one or more of the lanes have other vehicles, or other factors that vary over the course of a few minutes or less. Similarly, in an embodiment, the lane-level route planning data908includes speed constraints912specific to a segment of the route902. For example, if the segment includes pedestrians or un-expected traffic, the speed constraints912may limit the AV100to a travel speed slower than an expected speed, e.g., a speed based on speed limit data for the segment. In an embodiment, the inputs to the planning module404includes database data914(e.g., from the database module410shown inFIG.4), current location data916(e.g., the AV position418shown inFIG.4), destination data918(e.g., for the destination412shown in FIG.4), and object data920(e.g., the classified objects416as perceived by the perception module402as shown inFIG.4). In some embodiments, the database data914includes rules used in planning. Rules are specified using a formal language, e.g., using Boolean logic. In any given situation encountered by the AV100, at least some of the rules will apply to the situation. A rule applies to a given situation if the rule has conditions that are met based on information available to the AV100, e.g., information about the surrounding environment. Rules can have priority. For example, a rule that says, “if the road is a freeway, move to the leftmost lane” can have a lower priority than “if the exit is approaching within a mile, move to the rightmost lane.” FIG.10shows a directed graph1000used in path planning, e.g., by the planning module404(FIG.4). In general, a directed graph1000like the one shown inFIG.10is used to determine a path between any start point1002and end point1004. In real-world terms, the distance separating the start point1002and end point1004may be relatively large (e.g., in two different metropolitan areas) or may be relatively small (e.g., two intersections abutting a city block or two lanes of a multi-lane road). In an embodiment, the directed graph1000has nodes1006a-drepresenting different locations between the start point1002and the end point1004that could be occupied by an AV100. In some examples, e.g., when the start point1002and end point1004represent different metropolitan areas, the nodes1006a-drepresent segments of roads. In some examples, e.g., when the start point1002and the end point1004represent different locations on the same road, the nodes1006a-drepresent different positions on that road. In this way, the directed graph1000includes information at varying levels of granularity. In an embodiment, a directed graph having high granularity is also a subgraph of another directed graph having a larger scale. For example, a directed graph in which the start point1002and the end point1004are far away (e.g., many miles apart) has most of its information at a low granularity and is based on stored data, but also includes some high granularity information for the portion of the graph that represents physical locations in the field of view of the AV100. The nodes1006a-dare distinct from objects1008a-bwhich cannot overlap with a node. In an embodiment, when granularity is low, the objects1008a-brepresent regions that cannot be traversed by automobile, e.g., areas that have no streets or roads. 
When granularity is high, the objects1008a-brepresent physical objects in the field of view of the AV100, e.g., other automobiles, pedestrians, or other entities with which the AV100cannot share physical space. In an embodiment, some or all of the objects1008a-bare a static objects (e.g., an object that does not change position such as a street lamp or utility pole) or dynamic objects (e.g., an object that is capable of changing position such as a pedestrian or other car). The nodes1006a-dare connected by edges1010a-c. If two nodes1006a-bare connected by an edge1010a, it is possible for an AV100to travel between one node1006aand the other node1006b, e.g., without having to travel to an intermediate node before arriving at the other node1006b. (When we refer to an AV100traveling between nodes, we mean that the AV100travels between the two physical positions represented by the respective nodes.) The edges1010a-care often bidirectional, in the sense that an AV100travels from a first node to a second node, or from the second node to the first node. In an embodiment, edges1010a-care unidirectional, in the sense that an AV100can travel from a first node to a second node, however the AV100cannot travel from the second node to the first node. Edges1010a-care unidirectional when they represent, for example, one-way streets, individual lanes of a street, road, or highway, or other features that can only be traversed in one direction due to legal or physical constraints. In an embodiment, the planning module404uses the directed graph1000to identify a path1012made up of nodes and edges between the start point1002and end point1004. An edge1010a-chas an associated cost1014a-b. The cost1014a-bis a value that represents the resources that will be expended if the AV100chooses that edge. A typical resource is time. For example, if one edge1010arepresents a physical distance that is twice that as another edge1010b, then the associated cost1014aof the first edge1010amay be twice the associated cost1014bof the second edge1010b. Other factors that affect time include expected traffic, number of intersections, speed limit, etc. Another typical resource is fuel economy. Two edges1010a-bmay represent the same physical distance, but one edge1010amay require more fuel than another edge1010b, e.g., because of road conditions, expected weather, etc. When the planning module404identifies a path1012between the start point1002and end point1004, the planning module404typically chooses a path optimized for cost, e.g., the path that has the least total cost when the individual costs of the edges are added together. Autonomous Vehicle Control FIG.11shows a block diagram1100of the inputs and outputs of a control module406(e.g., as shown inFIG.4). A control module operates in accordance with a controller1102which includes, for example, one or more processors (e.g., one or more computer processors such as microprocessors or microcontrollers or both) similar to processor304, short-term and/or long-term data storage (e.g., memory random-access memory or flash memory or both) similar to main memory306, ROM308, and storage device310, and instructions stored in memory that carry out operations of the controller1102when the instructions are executed (e.g., by the one or more processors). In an embodiment, the controller1102receives data representing a desired output1104. The desired output1104typically includes a velocity, e.g., a speed and a heading. 
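Before turning to the control details, the search over the directed graph1000described above can be sketched briefly. The document does not specify a particular search algorithm; Dijkstra's algorithm is used here as one common way to select the path with the least total edge cost, and the node names and cost values are invented solely for illustration.

```python
import heapq

def least_cost_path(graph, start, goal):
    """Find the path with the least total edge cost in a directed graph.

    `graph` maps a node to a list of (neighbor, cost) pairs, mirroring the
    nodes 1006a-d, edges 1010a-c, and costs 1014a-b described above.
    """
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return None, float("inf")

# Hypothetical graph: edge costs could encode travel time or fuel use.
graph = {
    "start": [("a", 2.0), ("b", 5.0)],
    "a": [("b", 1.0), ("goal", 6.0)],
    "b": [("goal", 2.0)],
}
path, total_cost = least_cost_path(graph, "start", "goal")
# path == ["start", "a", "b", "goal"], total_cost == 5.0
```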
The desired output1104can be based on, for example, data received from a planning module404(e.g., as shown inFIG.4). In accordance with the desired output1104, the controller1102produces data usable as a throttle input1106and a steering input1108. The throttle input1106represents the magnitude in which to engage the throttle (e.g., acceleration control) of an AV100, e.g., by engaging the accelerator pedal, or engaging another throttle control, to achieve the desired output1104. In some examples, the throttle input1106also includes data usable to engage the brake (e.g., deceleration control) of the AV100. The steering input1108represents a steering angle, e.g., the angle at which the steering control (e.g., steering wheel, steering angle actuator, or other functionality for controlling steering angle) of the AV should be positioned to achieve the desired output1104. In an embodiment, the controller1102receives feedback that is used in adjusting the inputs provided to the throttle and steering. For example, if the AV100encounters a disturbance1110, such as a hill, the measured speed1112of the AV100is lowered below the desired output speed. In an embodiment, any measured output1114is provided to the controller1102so that the necessary adjustments are performed, e.g., based on the differential1113between the measured speed and desired output. The measured output1114includes measured position1116, measured velocity1118(including speed and heading), measured acceleration1120, and other outputs measurable by sensors of the AV100. In an embodiment, information about the disturbance1110is detected in advance, e.g., by a sensor such as a camera or LiDAR sensor, and provided to a predictive feedback module1122. The predictive feedback module1122then provides information to the controller1102that the controller1102can use to adjust accordingly. For example, if the sensors of the AV100detect ("see") a hill, this information can be used by the controller1102to prepare to engage the throttle at the appropriate time to avoid significant deceleration. FIG.12shows a block diagram1200of the inputs, outputs, and components of the controller1102. The controller1102has a speed profiler1202which affects the operation of a throttle/brake controller1204. For example, the speed profiler1202instructs the throttle/brake controller1204to engage acceleration or engage deceleration using the throttle/brake1206depending on, e.g., feedback received by the controller1102and processed by the speed profiler1202. The controller1102also has a lateral tracking controller1208which affects the operation of a steering controller1210. For example, the lateral tracking controller1208instructs the steering controller1210to adjust the position of the steering angle actuator1212depending on, e.g., feedback received by the controller1102and processed by the lateral tracking controller1208. The controller1102receives several inputs used to determine how to control the throttle/brake1206and steering angle actuator1212. A planning module404provides information used by the controller1102, for example, to choose a heading when the AV100begins operation and to determine which road segment to traverse when the AV100reaches an intersection. A localization module408provides information to the controller1102describing the current location of the AV100, for example, so that the controller1102can determine if the AV100is at a location expected based on the manner in which the throttle/brake1206and steering angle actuator1212are being controlled.
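A minimal sketch of how the differential1113between measured and desired speed could drive the throttle/brake command follows. The document does not specify the control law used by the speed profiler1202; a simple proportional controller with an assumed gain and actuator range is shown here purely for illustration.

```python
def speed_feedback_step(desired_speed, measured_speed, gain=0.5):
    """One feedback step: turn the speed differential into a throttle or brake command.

    Positive output is treated as a throttle command, negative as a brake command.
    The proportional gain and the [-1, 1] actuator range are assumed values.
    """
    differential = desired_speed - measured_speed
    command = gain * differential
    return max(min(command, 1.0), -1.0)  # clamp to the assumed actuator range

# Example: the AV encounters a hill (disturbance) and slows below the desired speed.
command = speed_feedback_step(desired_speed=15.0, measured_speed=12.0)
# command == 1.0, i.e., the throttle is engaged at the clamped maximum
```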
In an embodiment, the controller1102receives information from other inputs1214, e.g., information received from databases, computer networks, etc. Trajectory Prediction Overview As mentioned above, this document describes different techniques for predicting how an agent (e.g., a vehicle, bicycle, pedestrian, etc.) will move in an environment based on movement of the surrounding objects. The techniques described below include a system that receives location data and past trajectory data for objects within a certain distance of the agent. As used herein, the term agent refers to an object (e.g., a vehicle, a bicycle, a pedestrian, or another suitable object) for which the system is attempting to predict a distribution over possible trajectories. As used herein, the term “location data” refers to a location of an object (e.g., a vehicle, a bicycle, a pedestrian, or another suitable object) in relation to an agent or another object in a detection range. As used herein, the term “past trajectory data” refers to a trajectory of a particular object (e.g., a vehicle, a bicycle, pedestrian, or another suitable object) for a specific amount of time (e.g., one second, two seconds, three seconds, or another suitable time). In an embodiment, the past trajectory data can include raw sensor data recorded over a past time interval (e.g., one second prior, two seconds prior, three seconds prior, or another suitable time). FIG.13shows an example of an image1300that can be received as location data and past trajectory data. The image1300includes depictions1302and1304of vehicles traveling along a lane with the current location of each vehicle and a past trajectory history of each vehicle. Another depiction1306shows multiple pedestrians in cross-walks and crossing a roadway. The image1300can be received by the system to perform various prediction techniques described below. In an embodiment, the image is constructed by overlapping map data, and other object data for multiple times (e.g., one second prior, two seconds prior, three seconds prior, or another suitable time). The actions in various figures described below (e.g.,FIG.14,FIG.16, andFIG.19) can be performed by various components described earlier in this document. For example, one or more processors146ofFIG.1can perform these actions. In an embodiment, some or all of the actions described below can be performed in a datacenter (e.g., datacenter204A) or in multiple datacenters (e.g., datacenters204A,204B, and/or204C as shown inFIG.2). In an embodiment, the actions described below can be performed by the perception circuit402, a planning circuit404, and/or a combination of both of these circuits). However, for clarity, this disclosure will refer to a system that performs the actions as a prediction system. Trajectory Prediction from Precomputed or Dynamically Generated Probability Map One trajectory prediction technique involves generating a probability map, sometimes referred to as a cost map or a heat map. The prediction system receives location data and past trajectory data for objects within a certain distance of the agent. Those objects could have been detected by that agent (e.g., if the agent is a vehicle the objects could have been detected by the sensors of the vehicle). The prediction system determines a set of features from the objects in the set, combines those features with motion data of an agent (e.g., speed acceleration, yaw rate, etc.), and generates (e.g., using a neural network) a probability map from the concatenated data set. 
The probability map includes multiple physical locations (e.g., squares of one meter resolution) such that each physical location is assigned a probability of the agent traversing that physical location. Based on the probability map, the prediction system generates one or more predicted trajectories for the agent. FIG.14is a block diagram of a process1400that can be performed to predict one or more trajectories of an object. At1405, the prediction system receives location data and past trajectory data for one or more objects detected by one or more sensors. For example, as discussed above, image1300ofFIG.13can be received as the location data and past trajectory data. In an embodiment, the prediction system can receive the location data and the past trajectory data in a different format. The prediction system can receive the past trajectory data for a past time interval (e.g., one second, two seconds, three seconds, or another suitable time interval). When the location data and the past trajectory data are received as part of an image, the image can include the past trajectory data for the one or more objects in a color coded format to indicate a corresponding past trajectory for each object of the one or more objects. For example, each of the objects1302is shown with multiple colors that gradually change to show time progression for the past trajectory data. At1410, the prediction system determines, using one or more processors and based on the location data and the past trajectory data, a set of features for the one or more objects. For example, in the embodiments where the location data and the past trajectory data are received as an image, the prediction system can input the image into a classifier, and receive from the classifier a plurality of features for the image. The features for the image can include velocities of various objects, locations and distances of those objects, and other suitable information. At1415, the prediction system combines (e.g., using one or more processors) the set of features with motion data of an agent to form a concatenated data set. For example, the prediction system can add, to the feature set, a vector that includes a speed, acceleration, and yaw rate of the agent. At1420, the prediction system generates, using the one or more processors based on the concatenated data set, a probability map that includes a plurality of physical locations where each physical location of the plurality of physical locations is assigned a probability of an agent moving through that location. In an embodiment, generating the probability map includes inputting the concatenated data set into a neural network. A neural network can be configured to accept the concatenated data set as input. The neural network is trained using a training set. For example, the training set can include multiple sets of location data and past trajectory data for multiple objects. In an embodiment, the training set includes multiple images (e.g., in the same format as image1300). In addition, the training set includes a trajectory of the agent (e.g., the path that the agent traveled). The location data and the trajectory data (e.g., the image in the same format as image1300) can be input into a neural network and the neural network can return a predicted trajectory and a probability of that trajectory. In an embodiment, the neural network returns multiple predicted trajectories and corresponding probabilities.
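As a non-limiting sketch of the flow of1410through1420, the snippet below extracts features from the input image, concatenates them with the agent's motion vector (speed, acceleration, yaw rate), and produces a grid of probabilities. A toy, randomly initialized linear model stands in for the neural network; the grid size, the feature extractor, and the model itself are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image):
    # Stand-in for the classifier that turns the location/past-trajectory image
    # into a feature vector (velocities, locations, distances of nearby objects).
    return image.reshape(-1).astype(float)[:64]

def probability_map(image, agent_motion, grid_shape=(20, 20)):
    """Toy stand-in for the network that maps the concatenated data set to a
    probability map: every grid cell gets a probability of the agent passing through it."""
    features = extract_features(image)
    concatenated = np.concatenate([features, agent_motion])  # features + speed/accel/yaw rate
    weights = rng.normal(size=(grid_shape[0] * grid_shape[1], concatenated.size)) * 0.01
    logits = weights @ concatenated
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs.reshape(grid_shape)

image = rng.random((32, 32, 3))             # stand-in for an image like image 1300
agent_motion = np.array([10.0, 0.5, 0.02])  # speed (m/s), acceleration (m/s^2), yaw rate (rad/s)
grid = probability_map(image, agent_motion)
# grid has shape (20, 20) and its entries sum to approximately 1.0
```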
The prediction system can compare the trajectory of the agent with a predicted trajectory to determine a difference between the two trajectories. The prediction system can take the difference and the probability and back-propagate that information through the neural network. For example, the prediction system can instruct the neural network to adjust node weights based on the differences and the probabilities. When multiple predicted trajectories and multiple probabilities are provided, the prediction system can generate multiple differences and back-propagate those differences and those probabilities through the neural network. The prediction system can repeat this process for every input of the training set to train the neural network. In an embodiment, the prediction system can take the following actions to train the neural network. In an embodiment, the prediction system can perform the training using the resources (e.g., processor(s), memory, and other suitable resources) of a vehicle. The training can be performed outside of the vehicle (e.g., at a datacenter204A as shown inFIG.2). The prediction system can receive training location data and training past trajectory data (e.g., one second, two seconds, or another suitable time interval of movement) for one or more training objects. The training location data and the training past trajectory data can be received as an image in the same format as image1300ofFIG.13. The prediction system can determine, based on the training location data and the training past trajectory data, a set of training features for the one or more training objects. For example, when the location data and the past trajectory data are received as an image, the prediction system can input the image into a classifier, and receive from the classifier a plurality of features for the image. The features for the image can include velocities of various objects, locations and distances of those objects, and other suitable information. The prediction system combines the set of training features with training motion data of an agent to form a training concatenated data set. For example, the prediction system can add a vector that includes a speed, acceleration, and yaw rate for each object. The concatenated data set is then used to generate a training probability map that includes a training plurality of physical locations, where each of the training plurality of physical locations is assigned a training probability of a training agent moving through that location. The prediction system then determines, based on the training probability map, one or more training trajectories for the training agent. This action and the above actions of training the neural network are similar to those actions when the neural network is executed to get a predicted trajectory. However, after a prediction is generated different actions are performed. Specifically, the prediction system compares the one or more training trajectories with a known trajectory of the training agent. Because the location data and the past trajectory data is part of the training set, the training set includes the trajectory that the agent moved along. Thus, one or more predicted trajectories (training trajectories) can be compared with the trajectory that the agent moved along. The prediction system then updates weights of a model (e.g., the neural network being trained) according to the comparing. 
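A compact sketch of this compare-and-update loop is shown below. A single linear layer stands in for the neural network being trained, and the feature dimension, prediction horizon, learning rate, and training pairs are assumed values, not part of the described system; the gradient step shown is the back-propagated update for a mean-squared difference between the predicted and known trajectories.

```python
import numpy as np

rng = np.random.default_rng(1)

def train(training_set, feature_dim=8, horizon=6, lr=1e-2, epochs=20):
    """Sketch of the training loop: predict a trajectory, compare it with the known
    trajectory, and propagate the difference back to update the model weights.

    `training_set` is a list of (concatenated_features, known_trajectory) pairs with
    shapes (feature_dim,) and (horizon, 2).
    """
    weights = rng.normal(size=(horizon * 2, feature_dim)) * 0.1
    for _ in range(epochs):
        for features, known_trajectory in training_set:
            predicted = (weights @ features).reshape(horizon, 2)  # predicted (x, y) offsets
            difference = predicted - known_trajectory             # comparison with the known path
            gradient = difference.reshape(-1, 1) @ features.reshape(1, -1)
            weights -= lr * gradient                              # back-propagated weight update
    return weights

# Hypothetical training pairs: features from the image/motion data, plus the path the agent took.
training_set = [(rng.random(8), rng.random((6, 2))) for _ in range(16)]
trained_weights = train(training_set)
```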
As discussed above, the results of the comparison (e.g., difference(s) in the trajectories) can be back-propagated through the neural network to adjust the weights of the neural network. Thus, updating the weights of the model according to the comparing can be performed by propagating a difference between each of the one or more training trajectories and the known trajectory through the model. In an embodiment, the prediction system can generate a data structure for a grid representing a detection range of one or more sensors of the agent. The grid can include multiple locations, and can represent the probability map. The prediction system can assign, to each location within the grid, a probability that the agent will be present in that location within the grid.FIG.15is an example of a portion of a data structure for a probability map for an agent. As shown by data structure1500ofFIG.15, each location1502within the probability map (e.g., a grid) stores a probability1504for the corresponding location within the grid. In an embodiment, the data structure1500can store a time1506for each location within the grid. The time1506indicates the time elapsed since the input of the location data and the past trajectory data, as time elapses in the scene. The data structure can include other parameters as indicated by Parameter1field1508. In an embodiment, the grid can be adaptively sized. For example, the areas within the grid can be sized smaller spatially closer to the agent and larger spatially further away from the agent. This sizing enables more prediction points close to the agent. In another example, the grid can be sized based on time. For example, for the first few seconds, the prediction system can generate more coordinates than for the next few seconds, enabling more prediction data to be processed. Referring back toFIG.14, at1425, the prediction system determines, based on the probability map, one or more predicted trajectories for the vehicle. The prediction system can select a highest probability trajectory from the resulting probability map. During the selection process, the prediction system can access the data structure associated with the grid and retrieve the highest probability locations within the grid for each time interval (e.g., one second, two seconds, and/or three seconds). The prediction system can then use the selected locations as the predicted trajectory for the agent. In an embodiment, the prediction system can select multiple trajectories. For example, there may be three predicted trajectories with different probabilities: one for going straight, one for making a right turn, and one for making a left turn. At1430, the prediction system causes, based on the one or more predicted trajectories using a planning circuit of a vehicle, generation of one or more driving commands for the vehicle. The prediction system can be located, at least partially, in a vehicle that is using the prediction system to predict how other objects (e.g., agents) will move. Thus, the planning circuit of the vehicle can use one or more predicted trajectories for the objects to generate driving commands for the vehicle. Thus, a vehicle can include one or more computer-readable media storing computer-executable instructions and one or more processors configured to execute the computer-executable instructions carrying out process1400. In an embodiment, the prediction system, at least partially, resides outside of the vehicle (e.g., in a datacenter204A as shown inFIG.2).
Thus, the prediction system can transmit the predicted trajectories for the objects detected by the vehicle to the vehicle, and the vehicle (e.g., using the planning circuit) can generate driving commands based on the received trajectories. In an embodiment, the driving commands can be generated remotely from the vehicle (e.g., at a datacenter204A as shown inFIG.2) and are transmitted to the vehicle for execution. At1435, the prediction system causes, using a control circuit of the vehicle, operation of the vehicle based on the one or more driving commands. For example, the planning circuit can transmit the driving commands to the control circuit for execution. The control circuit of the vehicle can interpret and execute the commands to drive the vehicle on a trajectory that avoids the detected objects (e.g., agents) based on the predicted trajectory of those objects. The actions described in relation to trajectory prediction from a precomputed or dynamically generated probability map can be stored on a non-transitory computer-readable storage medium as one or more programs for execution by one or more processors (e.g., on a vehicle, at a datacenter, or another suitable location). The one or more programs can include instructions which, when executed by the one or more processors, cause performance of the computer-implemented method(s) described above. Trajectory Prediction from a Trajectory Lattice Another trajectory prediction technique involves generating a trajectory lattice for an agent (e.g., a vehicle, bicycle, pedestrian, or another suitable object). The prediction system receives location data and past trajectory data for objects within a certain distance of the agent. Those objects could have been detected by that agent (e.g., if the agent is a vehicle, the objects could have been detected by the sensors of the vehicle). The prediction system determines a set of features from those objects, combines the features in the set with motion data of an agent (e.g., speed, acceleration, yaw rate, etc.), and generates (e.g., using a neural network) a trajectory lattice for the agent. The trajectory lattice includes multiple trajectories for the agent. In an embodiment, each trajectory in the trajectory lattice has a corresponding probability. Based on the trajectory lattice, the prediction system generates one or more predicted trajectories for the agent. FIG.16is a block diagram of a process1600that can be performed to predict one or more trajectories of an agent. At1605, the prediction system receives location data and past trajectory data for one or more objects detected by one or more sensors. For example, as discussed above, image1300ofFIG.13can be received as the location data and past trajectory data. In an embodiment, the prediction system can receive the location data and the past trajectory data in a different format than an image. The prediction system can receive the past trajectory data for a past time interval (e.g., one second, two seconds, three seconds, or another suitable time interval). When the location data and the past trajectory data are received as part of an image, the image can include the past trajectory data for the one or more objects in a color coded format to indicate a corresponding past trajectory for each object of the one or more objects. For example, each of the objects1302is shown with multiple colors that gradually change to show time progression for the past trajectory data.
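One simple way the color-coded input image described above could be constructed is sketched below. The document does not specify the encoding; the channel assignment, fading scheme, image size, and sample trajectories are assumptions made only for this illustration of how older positions can be drawn dimmer so that the color gradient encodes time progression.

```python
import numpy as np

def render_past_trajectories(trajectories, image_size=64, scale=2.0):
    """Render location and past-trajectory data as a color-coded image.

    `trajectories` maps an object id to a list of (x, y) positions ordered from oldest
    to newest; older positions are drawn dimmer so the color gradient encodes time.
    """
    image = np.zeros((image_size, image_size, 3), dtype=np.float32)
    for channel, (_, positions) in enumerate(sorted(trajectories.items())):
        for age, (x, y) in enumerate(positions):
            row = int(image_size / 2 - y * scale)
            col = int(image_size / 2 + x * scale)
            if 0 <= row < image_size and 0 <= col < image_size:
                intensity = (age + 1) / len(positions)  # newer samples are brighter
                image[row, col, channel % 3] = intensity
    return image

# Hypothetical past second of movement for two objects, sampled every 0.25 s.
trajectories = {
    "vehicle_a": [(-4.0, 0.0), (-3.0, 0.0), (-2.0, 0.0), (-1.0, 0.0)],
    "pedestrian_b": [(3.0, -3.0), (3.0, -2.5), (3.0, -2.0), (3.0, -1.5)],
}
image = render_past_trajectories(trajectories)
```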
At1610, the prediction system determines, using one or more processors and based on the location data and the past trajectory data, a set of features for the one or more objects. For example, in the embodiments where the location data and the past trajectory data are received as an image, the prediction system can input the image into a classifier, and receive from the classifier a plurality of features for the image. The features for the image can include velocities of various objects, locations and distances of those objects, and other suitable information. At1615, the prediction system combines (e.g., using one or more processors) the set of features with motion data of an agent to form a concatenated data set. For example, the prediction system can add, to the feature set, a vector that includes a speed, acceleration and yaw rate of the agent. At1620, the prediction system generates, based on the concatenated data set, a trajectory lattice that includes a plurality of possible trajectories for the agent, where each trajectory in the trajectory lattice is assigned a probability. For example, the prediction system can input the concatenated data set into a neural network and receive, from the neural network, data for the trajectory lattice.FIG.17shows one possible trajectory lattice1702that can be generated by the prediction system. In an embodiment, the prediction system generates a data structure for the trajectory lattice. The data structure can include a plurality of fields for each trajectory in the trajectory lattice. The fields can include a coordinate field for storing the coordinates for each trajectory and a probability field for storing a probability for each trajectory. Other fields can be included in the trajectory lattice. In an embodiment, the prediction system can use a neural network (e.g., previously trained) to generate the trajectory lattice. The prediction system can take the following actions to train the neural network. The prediction system can perform the training using the resources (e.g., processor(s), memory, and other suitable resources) of a vehicle. In an embodiment, the training is performed outside of the vehicle (e.g., at a datacenter204A as shown inFIG.2). The prediction system can receive training location data and training past trajectory data (e.g., one second, two seconds, or another suitable time interval of movement) for one or more training objects. The training location data and the training past trajectory data can be received as an image in the same format as image1300ofFIG.13. The prediction system can determine, based on the training location data and the training past trajectory data, a set of training features for the one or more training objects. For example, when the location data and the past trajectory data are received as an image, the prediction system can input the image into a classifier, and receive from the classifier a plurality of features for the image. The features for the image can include velocities of various objects, locations and distances of those objects, and other suitable information. The prediction system combines the set of training features with training motion data of an agent to form a training concatenated data set. For example, the prediction system can add a vector that includes a speed, acceleration, and yaw rate for each object. 
The concatenated data set is then used to generate a training trajectory lattice that includes a training plurality of predicted trajectories, where each of the training plurality of predicted trajectories is assigned a training probability of a training agent (e.g., a probability that the agent will travel the specific trajectory). The prediction system then determines, based on the training trajectory lattice, one or more training trajectories for the training agent. This action and the above actions of training the neural network are similar to those actions when the neural network is executed to get a predicted trajectory. However, after a prediction is generated different actions are performed. Specifically, the prediction system compares the one or more training trajectories with a known trajectory of the training agent. Because the location data and the past trajectory data is part of the training set, the training set includes the trajectory that the agent moved along. Thus, one or more predicted trajectories (training trajectories) can be compared with the trajectory that the agent moved along. The prediction system then updates weights of a model (e.g., the neural network being trained) according to the comparing. As discussed above, the results of the comparison (e.g., difference(s) in the trajectories) can be back propagated through the neural network to adjust the weights of the neural network. Thus, updating the weights of the model according to the comparing can be performed by propagating a difference between each of the one or more training trajectories and the known trajectory through the model. This process can be repeated for each set of training data available. Thus, the trajectory lattice can be dynamically generated based on agent state (e.g., speed, acceleration, yaw rate, and/or another state component). As discussed above, in an embodiment, the trajectory lattice can also be based on environmental context for the agent (e.g., road network, map data, other objects, etc.). Referring back toFIG.16, at1625, the prediction system determines, based on the trajectory lattice, one or more predicted trajectories for the agent. For example, the prediction system can select a trajectory with a highest probability. In an embodiment, the prediction system uses motion data of the agent to determine the one or more predicted trajectories. The prediction system can receive one or more of speed, acceleration, and yaw rate of the agent, and identify, in the trajectory lattice, those trajectories that the agent cannot travel based on the one or more of the speed, acceleration, and yaw rate of the agent. The prediction system can remove those trajectories from the trajectory lattice. The prediction system can select one or more trajectories from the updated trajectory lattice (e.g., based on a probability of each trajectory in the trajectory lattice). Trajectory lattice1704and trajectory lattice1706illustrate different possible trajectories based on the speed of the vehicle. Trajectory lattice1704illustrates possible trajectories for the speed of two meters per second of the agent. Based on that speed, there are many trajectories in many directions that are possible. Trajectory lattice1706illustrates possible trajectories for the speed of ten meters per second. 
As illustrated in trajectory lattice 1706, the agent (e.g., a vehicle) is unable to make certain turns at that speed, thus those trajectories requiring such turns are not included in (e.g., are removed from) trajectory lattice 1706. To identify trajectories that are not possible to execute at certain speeds, the prediction system can store (e.g., for each object type such as a vehicle, bicycle, pedestrian, or other object types) various maneuvers (e.g., turns) that cannot be executed at corresponding speeds. For example, if a U-turn maneuver cannot be executed at the speed of ten meters per second, the prediction system can store that information and access that information to prune the trajectory lattice accordingly. In an embodiment, the prediction system uses road rules data for determining one or more predicted trajectories. The prediction system can receive one or more of road rules data (e.g., data representing traffic rules) and road marking data (e.g., lane markings, cross-walk markings, etc.). The data representing traffic rules can include speed limit data, traffic light data (e.g., green, red, or yellow), and other suitable traffic rules data. The road marking data can include lane markings (e.g., which lanes travel in which directions), cross-walk markings (e.g., for determining where pedestrians are likely to cross), and other suitable road marking data. The prediction system can identify, in the trajectory lattice, those trajectories that the agent cannot travel based on the one or more of the road rules data and the road marking data, and remove those trajectories from the trajectory lattice. For example, if a predicted trajectory exists where an agent (e.g., a vehicle) makes a left turn, but according to the road rules a left turn is not allowed at that location, the prediction system can remove that trajectory from the trajectory lattice. Trajectory lattice 1708 illustrates a trajectory lattice with a number of trajectories removed based on road rules and/or road markings. The illustration shows that there are no trajectories for turning left (e.g., because turning left would require going against the direction of traffic). Referring back to FIG. 16, at 1630, the prediction system causes, based on the one or more predicted trajectories using a planning circuit of a vehicle, generation of one or more driving commands for the vehicle. In an embodiment, the prediction system is located, at least partially, in a vehicle that is using the prediction system to predict how agents (e.g., vehicles, pedestrians, bicyclists, and/or other suitable agents) will move. Thus, the planning circuit of the vehicle can use one or more predicted trajectories for the objects to generate driving commands for the vehicle. Thus, a vehicle can include one or more computer-readable media storing computer-executable instructions and one or more processors configured to execute the computer-executable instructions carrying out process 1600. In an embodiment, the prediction system, at least partially, resides outside of the vehicle (e.g., in a datacenter 204A as shown in FIG. 2). Thus, the prediction system can transmit the predicted trajectories for the objects detected by the vehicle to the vehicle and the vehicle (e.g., using the planning circuit) can generate driving commands based on the received trajectories. Thus, the driving commands can be generated remotely from the vehicle (e.g., at a datacenter 204A as shown in FIG. 2) and are transmitted to the vehicle for execution.
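The lattice pruning described above in connection with trajectory lattices 1704 through 1708 can be summarized with a small sketch. The curvature test standing in for the stored speed/maneuver table, the lateral-acceleration limit, and the drivable(x, y) predicate are illustrative assumptions rather than the patented logic.

```python
import math

def curvatures(traj):
    # Discrete curvature estimate from consecutive point triples of a trajectory.
    out = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(traj, traj[1:], traj[2:]):
        a = math.hypot(x1 - x0, y1 - y0)
        b = math.hypot(x2 - x1, y2 - y1)
        c = math.hypot(x2 - x0, y2 - y0)
        area2 = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
        out.append(0.0 if a * b * c == 0 else 2.0 * area2 / (a * b * c))
    return out

def prune_lattice(trajectories, probabilities, speed, drivable, max_lat_accel=3.0):
    # trajectories: list of [(x, y), ...]; drivable: callable returning True inside the road.
    kept = []
    for traj, prob in zip(trajectories, probabilities):
        # Dynamic feasibility: curvature * speed^2 must stay under a lateral-acceleration limit.
        if any(k * speed ** 2 > max_lat_accel for k in curvatures(traj)):
            continue
        # Road rules / markings stand-in: every point must lie in the drivable area.
        if not all(drivable(x, y) for x, y in traj):
            continue
        kept.append((traj, prob))
    return kept
```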
At 1635, the prediction system causes, using a control circuit of the vehicle, operation of the vehicle based on the one or more driving commands. For example, the planning circuit can transmit the driving commands to the control circuit for execution. The control circuit of the vehicle can interpret and execute the commands to drive the vehicle on a trajectory that avoids the detected objects (e.g., agents) based on the predicted trajectory of those objects. The actions described in relation to trajectory prediction from a trajectory lattice can be stored on a non-transitory computer-readable storage medium as one or more programs for execution by one or more processors (e.g., on a vehicle, at a datacenter, or another suitable location). The one or more programs can include instructions which, when executed by the one or more processors, cause performance of the computer implemented method(s) described above. FIG. 18 illustrates multiple predicted trajectories and the traveled trajectory of an agent (a vehicle in this instance). The traveled trajectory is sometimes referred to as ground truth. Trajectories 1802 and 1806 show two predicted trajectories while trajectory 1804 illustrates the traveled trajectory. As illustrated in FIG. 18, trajectory 1806 is the best trajectory in the set of predicted trajectories because it is the closest to the traveled trajectory (the ground truth). Trajectory Prediction from Multi-Modal Regression Another trajectory prediction technique involves training a classifier (e.g., a neural network) to generate multiple trajectory predictions for an agent (e.g., a vehicle, bicycle, pedestrian, or another suitable object). Specifically, the model regresses coordinates and also applies a classification component to the loss such that probability values associated with each of the regressed trajectories are produced. The data being used in this embodiment is training data that also includes trajectories that the agent has traveled. As part of the training process the prediction system can use that data, as described below. The prediction system receives location data and past trajectory data for objects within a certain distance of the agent. Those objects could have been detected by that agent (e.g., if the agent is a vehicle the objects could have been detected by the sensors of the vehicle). The prediction system learns or determines a set of features from those objects and combines the features in the set with motion data of an agent (e.g., speed, acceleration, yaw rate, etc.). The prediction system generates a plurality of predicted trajectories (e.g., by regressing to the coordinates, and using the classification component in the loss to predict probabilities associated with those trajectories), based on the concatenated data set. The prediction system then uses angles between the predicted trajectories and the ground truth to select a trajectory out of the plurality of predicted trajectories that best matches the ground truth (i.e., the traveled trajectory). The selected trajectory is then used to compute how far off the prediction was from the ground truth. That information is then used to train the neural network. In an embodiment, instead of using angles between the predicted trajectories and the traveled trajectory to train the model, the system can use a different metric. Based on the trained model, the prediction system generates one or more predicted trajectories for the agent and uses that information in the planning and driving algorithms of a vehicle.
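The multi-modal regression just outlined, in which trajectories and their probabilities are produced in parallel, can be sketched as a two-headed output layer. This is a hedged, illustrative formulation, not the patented network: the dimensions (three modes, a 12-point horizon, and a 515-dimensional input assumed to be 512 image features plus speed, acceleration, and yaw rate) are assumptions.

```python
import torch
import torch.nn as nn

class MultiModalHead(nn.Module):
    def __init__(self, in_dim=515, num_modes=3, horizon=12):
        super().__init__()
        self.num_modes, self.horizon = num_modes, horizon
        self.reg = nn.Linear(in_dim, num_modes * horizon * 2)  # x, y per time step, per mode
        self.cls = nn.Linear(in_dim, num_modes)                # one logit per mode

    def forward(self, concat):
        # concat: (batch, in_dim) concatenated feature/state vector
        trajs = self.reg(concat).view(-1, self.num_modes, self.horizon, 2)
        logits = self.cls(concat)          # probabilities follow from softmax over modes
        return trajs, logits

# Usage with a stand-in concatenated data set:
head = MultiModalHead()
trajs, logits = head(torch.randn(1, 515))   # (1, 3, 12, 2) and (1, 3)
```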
FIG. 19 is a block diagram of a process 1900 that can be performed to train a classifier to predict one or more trajectories of an agent. At 1905, the prediction system receives location data and past trajectory data for one or more objects detected by one or more sensors. For example, as discussed above, image 1300 of FIG. 13 can be received as the location data and past trajectory data. In an embodiment, the prediction system can receive the location data and the past trajectory data in a different format than image 1300. The prediction system can receive the past trajectory data for a past time interval (e.g., one second, two seconds, three seconds, or another suitable time interval). When the location data and the past trajectory data are received as part of an image, the image can include the past trajectory data for the one or more objects in a color coded format to indicate a corresponding past trajectory for each object of the one or more objects. For example, each of objects 1302 is shown as having multiple colors with a gradual change of color to show time progression for the past trajectory data. At 1910, the prediction system determines, using one or more processors and based on the location data and the past trajectory data, a set of features for the one or more objects. In the embodiments where the location data and the past trajectory data are received as an image, the prediction system receives the image as input, and outputs as an intermediary result a plurality of features for the image. At 1915, the prediction system combines (e.g., using one or more processors) the set of features with motion data of an agent (e.g., a vehicle, bicycle, pedestrian, or another suitable agent) to form a concatenated data set. For example, the prediction system can add, to the feature set, a vector that includes a speed, acceleration, and yaw rate of the agent. At 1920, the prediction system generates, based on the concatenated data set, a plurality of predicted trajectories. For example, the prediction system can input the concatenated data set into a neural network and receive, from the neural network, data for the predicted trajectories. The data for the predicted trajectories can be stored in a data structure that can include a plurality of fields for each predicted trajectory. The fields can include a coordinate field for storing the coordinates for each trajectory and a probability field for storing a probability for each trajectory. Other fields can be included in the data structure. In addition, the prediction system can retrieve a number of desired predicted trajectories from memory. The predicted trajectories can be retrieved with corresponding probabilities. In an embodiment, both the trajectories and the corresponding probability values can be predicted in parallel by the neural network. Thus, each of the plurality of the fields can include coordinates and also include the probability value for each trajectory. Thus, the loss of the neural network can contain two components: one for the classification that predicts probability values for each trajectory and one for the regression that regresses to the coordinates (i.e., to predict the actual coordinate values that make up the trajectories). At 1925, the prediction system calculates a plurality of angles between each of the plurality of predicted trajectories and the trajectory that the agent has traveled.
For example, for a given predicted trajectory, the angle between that predicted trajectory and the trajectory the agent has traveled (i.e., the ground truth) can be computed by taking the straight line between the center of the agent and the last point in the traveled trajectory (the ground truth) and a straight line between the center of the vehicle and the last point in the predicted trajectory, and computing the angle value between the two lines (e.g., an angle in degrees in the range between zero and one hundred and eighty). In an embodiment, instead of angles, a different metric can be used in the prediction system. The prediction system can calculate a plurality of metrics between each of the plurality of predicted trajectories and a trajectory that the agent has traveled. At 1930, the prediction system determines whether one or more of the plurality of angles is within a threshold. A threshold can be any suitable angle (e.g., seven degrees, eight degrees, nine degrees, or another suitable angle) and can be obtained empirically. The prediction system can compare, for each predicted trajectory, the angle for that trajectory (as described above) with a threshold angle (e.g., seven degrees). In an embodiment, the prediction system can determine whether one or more of the plurality of metrics is within a threshold. At 1935, if the prediction system determines that none of the plurality of angles is within the threshold, the prediction system selects a best trajectory of the plurality of predicted trajectories using a function. In many instances, the scenario where none of the plurality of angles are within a threshold occurs at the beginning of the training routine for the neural network. As the neural network has not been well trained yet, predicted trajectories are generally not very accurate, resulting in large differences (e.g., large angles) between the projected trajectories and the traveled trajectory (i.e., the ground truth). In an embodiment, the function selects a trajectory of the plurality of predicted trajectories randomly. For example, the prediction system can retrieve the number of predicted trajectories and input that number into a random number generator. The prediction system receives the output of the random number generator and selects the corresponding predicted trajectory based on the output. If the prediction system determines a subset of modes with an angle value below the threshold, the prediction system selects a mode out of that subset of modes that minimizes the average of L2 norms. In an embodiment, instead of angles, the prediction system can use a different metric. Various other ways can be used to select a predicted trajectory, sometimes referred to as best mode. For example, each time, a predicted trajectory can be selected based on minimizing an average of a specific metric (e.g., an L2 norm between each predicted trajectory and the traveled trajectory). Taking that approach, the prediction system would encounter the issue of mode collapse. This is because the prediction system (e.g., a neural network) selects one predicted trajectory (e.g., one mode) initially, and then computes the per-agent loss (described below) using that predicted trajectory mode, updating weights corresponding to that trajectory during the backpropagation processes.
At the next training input, the prediction system would again select the same predicted trajectory as the best one because it has now improved that trajectory resulting in that same predicted trajectory providing the best metric (e.g., average of L2 norms). This scenario would lead to the mode collapse issue, where the prediction system would only train the model (e.g., a neural network) for one predicted trajectory. Thus, other possible trajectory training opportunities are lost. Selecting a random trajectory to be trained when no angles are within a threshold enables exploring all trajectories during training. In an embodiment, the function selects a trajectory of the plurality of predicted trajectories based on a plurality of templates. As referred to in this document, a template refers to a template trajectory. The prediction system can generate templates by employing clustering techniques on training set data to obtain a number of templates equal to the number of predicted trajectories selected at the beginning of training. For example, the prediction system can analyze each traveled trajectory within the training set and determine a plurality of clusters of trajectories within that training set. For example, there may be a cluster of trajectories for turning right at a specific angle, a cluster of trajectories for turning left at a specific angle, a cluster of trajectories for moving straight, or another suitable cluster of trajectories. If the prediction system is configured to use templates while training, the angle comparison for each predicted trajectory is with each template trajectory (e.g., based on the angle between the predicted trajectory and the traveled trajectory being above the threshold). Thus, the predicted trajectory is selected based on the index of the template that was selected. That is, if the prediction system includes templates numbered 0, 1, 2, and the prediction system selects template 2 as the best matching template, the prediction system selects mode 2 as the best matching mode to use further in the model. In an embodiment, to prevent mode collapse, the prediction system generates template trajectories based on high-level classes. These template trajectories are referred to as “fixed templates” because these template trajectories are generated for an agent and do not change depending on the different motions or states of the agent. For example, if the number of trajectories selected for training is three, the prediction system can generate templates for left-turn, right-turn, and straight trajectories. If the number of trajectories selected for training is five, the prediction system can generate template trajectories in five different directions. Thus, the prediction system generates the plurality of templates based on possible trajectories for the agent. The prediction system can generate templates based on an initial set of conditions that are based on an agent state (e.g., velocity, acceleration, yaw rate, heading, and/or another suitable condition). These templates are sometimes referred to as dynamic templates because they change depending on the different motions or states of the agent. In this case, the prediction system generates a different set of templates for each condition. Thus, the prediction system generates the plurality of templates based on a state of the agent (e.g., velocity, acceleration, yaw rate, heading, and/or another suitable condition). 
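One way to obtain such template trajectories, offered here as a hedged sketch rather than the patented procedure, is to cluster the traveled trajectories in the training set and use the cluster centers as templates, one per predicted mode. The helper name, the use of k-means, and the requirement that all trajectories share the same length are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_templates(traveled_trajectories, num_modes):
    # traveled_trajectories: array-like of shape (N, T, 2) ground-truth paths,
    # all with the same horizon T.
    flat = np.asarray(traveled_trajectories, dtype=float).reshape(len(traveled_trajectories), -1)
    km = KMeans(n_clusters=num_modes, n_init=10, random_state=0).fit(flat)
    # Each cluster center is a template trajectory (e.g., left turn, straight, right turn).
    return km.cluster_centers_.reshape(num_modes, -1, 2)
```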
The prediction system can use the templates when an angle between a predicted trajectory and the traveled trajectory (e.g., the ground truth) is not within a threshold (e.g., instead of choosing a random predicted trajectory). However, the templates can also be used, for example, during some fixed number of iterations at the start of the training process, after which the prediction system switches to selecting the best predicted trajectory based on a metric (e.g., an L2 norm). Thus, the prediction system, based on determining that one or more of the plurality of angles are within the threshold (e.g., after a certain number of training iterations), selects a trajectory of the plurality of predicted trajectories based on a difference between the trajectory that the agent traveled and a corresponding trajectory for each predicted trajectory of the plurality of predicted trajectories. In an embodiment, the prediction system selects a template at random, and uses the selected template as the traveled trajectory (i.e., the ground truth) for the predicted trajectory with the same index. This results in training the different predicted trajectories to start resembling the templates. In an embodiment, the prediction system stops using the templates and uses the traveled trajectories (i.e., the ground truths) after the predicted trajectories are trained to look more reasonable (sometimes referred to as a “burn-in” phase, when the prediction system does not identify any angles between a predicted trajectory and the traveled trajectory below a threshold). Referring back to FIG. 19, at 1940, the prediction system computes a difference between the trajectory that was selected as the best matching trajectory and the trajectory the agent has traveled. For example, the prediction system can compute the per-agent loss by summing the regression loss for the selected predicted trajectory (i.e., the best mode) and also the classification loss for all of the predicted trajectories (i.e., modes). For example, one way to compute the regression loss is to use Smooth L1 loss and one way to compute the classification loss is to use cross entropy. Thus, the total loss can be a sum of regression loss using Smooth L1 loss and cross entropy. To compute the loss across the full training set, the prediction system sums all the per-agent losses (e.g., the loss for each instance of location data and past trajectory data, for example, image 1300). At 1945, the prediction system adjusts weights of a model (e.g., a neural network) based on the difference. For example, the differences are back-propagated through the model to adjust the weights of the model for better performance during the next iteration. At 1950, the prediction system causes, based on the model using a planning circuit of a vehicle, generation of one or more driving commands for the vehicle. The prediction system can be located, at least partially, in a vehicle that is using the prediction system to predict how agents (e.g., vehicles, pedestrians, bicyclists) will move. Thus, the planning circuit of the vehicle can use one or more predicted trajectories for the objects to generate driving commands for the vehicle. In an embodiment, the planning circuit can use the predicted trajectories and the associated probabilities to generate the driving commands. That is, the prediction system can inform planning of the possible trajectories and also how likely each one is to occur.
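Steps 1925 through 1940 can be drawn together in a short sketch: compute the angle between each predicted trajectory and the ground truth, pick the best mode (falling back to a random mode when no angle is under the threshold), and form the per-agent loss as Smooth L1 regression on the best mode plus cross-entropy over the mode logits. The shapes, the seven-degree threshold, and the helper names are assumptions layered on the description above.

```python
import math
import random
import torch
import torch.nn.functional as F

def trajectory_angle_deg(agent_xy, trajectory, ground_truth):
    # Angle between the lines from the agent's position to the last points of the
    # predicted and traveled trajectories, in degrees (0 to 180).
    ax, ay = agent_xy
    v1 = (trajectory[-1][0] - ax, trajectory[-1][1] - ay)
    v2 = (ground_truth[-1][0] - ax, ground_truth[-1][1] - ay)
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cos_t = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

def per_agent_loss(pred_trajs, mode_logits, gt_traj, agent_xy, threshold_deg=7.0):
    # pred_trajs: (K, T, 2) tensor, mode_logits: (K,) tensor, gt_traj: (T, 2) tensor
    angles = [trajectory_angle_deg(agent_xy, p.tolist(), gt_traj.tolist()) for p in pred_trajs]
    candidates = [k for k, a in enumerate(angles) if a <= threshold_deg]
    if candidates:
        # Among modes within the threshold, pick the one minimizing the average L2 distance.
        dists = [torch.linalg.norm(pred_trajs[k] - gt_traj, dim=-1).mean() for k in candidates]
        best = candidates[int(torch.stack(dists).argmin())]
    else:
        best = random.randrange(len(pred_trajs))   # random fallback avoids mode collapse
    reg = F.smooth_l1_loss(pred_trajs[best], gt_traj)              # regression on the best mode
    cls = F.cross_entropy(mode_logits.unsqueeze(0), torch.tensor([best]))  # classification over modes
    return reg + cls
```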
Thus, a vehicle can include one or more computer-readable media storing computer-executable instructions and one or more processors configured to execute the computer-executable instructions carrying out process 1900. The prediction system can, at least partially, reside outside of the vehicle (e.g., in a datacenter 204A as shown in FIG. 2). Thus, the prediction system can transmit the predicted trajectories for the objects detected by the vehicle to the vehicle and the vehicle (e.g., using the planning circuit) can generate driving commands based on the received trajectories. In this scenario, the driving commands can be generated remotely from the vehicle (e.g., at a datacenter 204A as shown in FIG. 2) and are transmitted to the vehicle for execution. At 1955, the prediction system causes, using a control circuit of the vehicle, operation of the vehicle based on the one or more driving commands. For example, the planning circuit can transmit the driving commands to the control circuit for execution. The control circuit of the vehicle can interpret and execute the commands to drive the vehicle on a trajectory that avoids the detected objects based on the predicted trajectory of those objects. In an embodiment, a trajectory that is within an ε (epsilon) value of the ground truth is not penalized. In one approach, the classification is reformulated as a multi-label problem. For example, y is used to denote an array of 1s and 0s, where each entry is 1 if that trajectory in a trajectory set is within ε of the ground truth and 0 otherwise. The new classification loss function is a modified entropy loss function, MCE, defined as follows in equation (1).

$$\mathrm{MCE}(\hat{y}, y) = \frac{1}{\sum_i y_i} \sum_i y_i \left( -\hat{y}_i + \log \sum_k \exp(\hat{y}_k) \right) \qquad (1)$$

In equation (1), i is used to index the trajectories (classes). The MCE loss averages the log softmax values of the logits for the trajectories that are within an ε value of the ground truth. The softmax function refers to a function that takes as input a vector of K real numbers and normalizes it into a probability distribution including K probabilities proportional to the exponentials of the input numbers. The logit function refers to a type of function that creates a map of probability values. Different distance functions and values of ε can be used, for example, the mean or max L2 distance functions. In another approach, a weighted cross-entropy loss function is used. For example, d is used to denote the vector storing the element-wise L2 distance between each trajectory in the trajectory set and the ground truth. For each entry in d that is smaller than ε, the entry is replaced with 0. The normalization of d is denoted by d_norm. The vector d_norm sums to 1. The array y denotes an array of 1s and 0s, where each entry is 1 if that trajectory is the closest trajectory in the trajectory set to the ground truth and 0 otherwise. The new classification loss function is expressed as follows in equation (2).

$$\sum_i \frac{1}{K} \sum_k d_{\mathrm{norm},ik}\, y_{ik} \log(\hat{y}_{ik}) \qquad (2)$$

In equation (2), K denotes the size of the trajectory set. Equation (2) represents a cross-entropy loss but weighted by the distance. In an embodiment, a penalty is added for trajectories that go off the road. In one approach, the model is used to classify which trajectories are off the road. For example, the expression L(ŷ, y) denotes any of the classification loss functions described above or the basic CoverNet loss function. An array r denotes an array of 1s and 0s where each entry is 1 if that trajectory in the trajectory set is entirely within the drivable area and 0 otherwise.
The off-road penalty is defined as follows in equation (3).

$$\Omega(\hat{y}) = \frac{1}{n} \sum_i \mathrm{bce}\big(\sigma(\hat{y}_i), r_i\big) \qquad (3)$$

The bce term denotes the binary cross-entropy. The new loss function is expressed as follows in equation (4).

$$L(\hat{y}, y) + \lambda\, \Omega(\hat{y}) \qquad (4)$$

In equation (4), λ ∈ [0, ∞]. An advantage of this approach is that the off-road penalty is formulated as a classification problem so the classification loss function and the off-road penalty can be naturally combined without rescaling the units. Additional Embodiments In an embodiment, one or more processors receive location data and past trajectory data for one or more objects detected by one or more sensors. The one or more processors determine, based on the location data and the past trajectory data, a set of features for the one or more objects. The one or more processors combine the set of features with motion data of an agent to form a concatenated data set. Based on the concatenated data set, a trajectory lattice is generated including multiple possible trajectories for the agent. Each trajectory in the trajectory lattice is assigned a probability. Based on the trajectory lattice, one or more predicted trajectories for the agent are determined. Based on the one or more predicted trajectories using a planning circuit of a vehicle, generation of one or more driving commands for the vehicle is caused. Using a control circuit of the vehicle, operation of the vehicle based on the one or more driving commands is caused. In an embodiment, generating the trajectory lattice for the agent includes inputting the concatenated data set into a neural network. From the neural network, data for the trajectory lattice is received. In an embodiment, determining, based on the trajectory lattice, the trajectory for the agent includes receiving one or more of speed, acceleration, and yaw rate of the agent. In the trajectory lattice, those trajectories that the agent cannot travel based on the one or more of the speed, acceleration, and yaw rate of the agent are identified. Those trajectories are removed from the trajectory lattice. In an embodiment, identifying, based on the trajectory lattice, the trajectory for the agent includes receiving one or more of road rules data and road marking data. In the trajectory lattice, those trajectories that the agent cannot travel based on the one or more of the road rules data and the road marking data are identified. Those trajectories are removed from the trajectory lattice. In an embodiment, receiving the past trajectory data includes receiving a trajectory of each object of the one or more objects for a past time interval. In an embodiment, receiving the location data and the past trajectory data includes receiving an image. The image includes the location data for the one or more objects and the past trajectory data for the one or more objects. The past trajectory data is color coded to indicate a corresponding past trajectory for each object of the one or more objects relative to the location data. In an embodiment, determining the set of features for the one or more objects includes inputting the image into a classifier, and receiving from the classifier multiple features for the image. In an embodiment, the one or more processors are located in the vehicle. In an embodiment, the one or more processors are located remotely from the vehicle. In an embodiment, training location data (e.g., position data for vehicles, bicycles, pedestrians, etc.) and training past trajectory data for one or more training objects are received.
Based on the training location data and the training past trajectory data, a set of training features for the one or more training objects is determined. The set of training features is combined with training motion data of a training agent (e.g., speed, acceleration, and yaw rate of a vehicle) to form a training concatenated data set. Based on the training concatenated data set, a training trajectory lattice is determined. The training trajectory lattice includes a training set of predicted trajectories. Each trajectory in the trajectory lattice is assigned a probability. Based on the training trajectory lattice, one or more training trajectories for the training agent are determined. The one or more training trajectories are compared with a known trajectory of the training agent. Weights of a model are updated according to the comparing. In an embodiment, updating the weights of the model according to the comparing includes propagating a difference between each of the one or more training trajectories and the known trajectory through the model. In an embodiment, location data and past trajectory data for one or more objects are received. One or more processors determine, based on the location data and the past trajectory data, a set of features for the one or more objects. The set of features is combined with motion data of an agent to form a concatenated data set. From the concatenated data set, multiple predicted trajectories are determined. Multiple angles are calculated between each of the multiple predicted trajectories and a trajectory that the agent has traveled. Whether one or more of the multiple angles is within a threshold is determined. Based on determining that none of the multiple angles is within the threshold, a best trajectory of the multiple predicted trajectories using a function is selected. A difference between the best trajectory and the trajectory the agent has traveled is computed. Weights of a model based on the difference are adjusted. Based on the model and using a planning circuit of a vehicle, generation of one or more driving commands for the vehicle is caused. A control circuit of the vehicle causes operation of the vehicle based on the one or more driving commands. In an embodiment, generating the multiple predicted trajectories includes inputting the concatenated data set into a neural network. From the neural network, the multiple predicted trajectories are received. In an embodiment, based on determining that one or more of the multiple angles are within the threshold, a trajectory of the multiple predicted trajectories is selected based on a difference between the trajectory that the agent traveled and a corresponding trajectory for each predicted trajectory of the multiple predicted trajectories. In an embodiment, the function selects a trajectory of the multiple predicted trajectories randomly. In an embodiment, the function selects a trajectory of the multiple predicted trajectories based on multiple templates. In an embodiment, a clustering operation is performed on a training set to obtain the multiple templates. In an embodiment, the multiple templates are generated based on possible trajectories for one or more agents. In an embodiment, each template of the multiple templates is generated based on a state of the agent. In an embodiment, receiving the location data and the past trajectory data includes receiving an image. The image includes the location data for the one or more objects and the past trajectory data for the one or more objects.
The past trajectory data is color coded to indicate a corresponding past trajectory for each object of the one or more objects. The actions described in relation to the multi-modal trajectory prediction can be stored on a non-transitory computer-readable storage medium as one or more programs for execution by one or more processors (e.g., on a vehicle, at a datacenter, or another suitable location). The one or more programs can include instructions which, when executed by the one or more processors, cause performance of the computer implemented method(s) described above. In the foregoing description, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. In addition, when we use the term “further comprising,” in the foregoing description or following claims, what follows this phrase can be an additional step or entity, or a sub-step/sub-entity of a previously-recited step or entity.
11858509
DETAILED DESCRIPTION The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. Referring to FIG. 1, an operator offset request for automatic lane following system 10 and method for operation is provided in an automobile vehicle 12. The automobile vehicle 12 may be proceeding in a forward direction 14 on a roadway 16 under automated, “hands-free” driving operation using an automated driving control system. In an initial condition, the automobile vehicle 12 is traveling in the forward direction 14 while tracking an artificially generated or projected first travel-line 18 of the roadway 16 such as a roadway centerline. It is anticipated that the operator becomes aware of a first object 20 such as another vehicle, a wide-load trailer, a pedestrian, an obstruction, a pothole, a construction item such as a cone or sign, or the like and wishes to change a lateral position of the automobile vehicle 12 to avoid the first object 20. When actuated by the operator, if predetermined conditions are met the operator offset request for automatic lane following system 10 directs the automobile vehicle 12 to laterally displace away from the first travel-line 18 by a first lateral offset distance 22 moved by the automobile vehicle 12 in a first displacement path 24 until a new or second travel-line 26 is achieved which allows the automobile vehicle 12 to avoid the first object 20. It is noted the first lateral offset distance 22 moved by the automobile vehicle 12 in the first displacement path 24 is an exemplary displacement distance directed toward an operator left-hand side. It will be apparent an equal but opposite right-hand side displacement distance and motion are also available for the first lateral offset distance as well as a maximum offset distance described in reference to FIG. 5, as well as any selected offset distance between the first travel-line 18 and the maximum offset distance. Referring to FIG. 2 and again to FIG. 1, the operator may actuate the operator offset request for automatic lane following system 10 by one or more manual operations as follows. In a first example, an operator input setting system 27 is actuated when a left hand 28 of the operator contacts a left-side surface 30 of a steering wheel 32 using a pressure or tapping force 34 acting normal to or on the left-side surface 30. A force, pressure or capacitance exerted by the left hand 28 is sensed by a tactile sensor 36 such as a pressure sensor, a touch sensor, a capacitance sensor, or the like which is provided at the left-side surface 30. An initiation signal is generated by the tactile sensor 36 which is forwarded to a controller described in greater detail in reference to FIG. 9 to initiate the operator offset request. Referring to FIG. 3 and again to FIGS. 1 and 2, if in the judgment of the operator the first lateral offset distance 22 may not avoid the first object 20, or if a second object 38 larger than the first object 20 is identified in the path of the automobile vehicle 12, the operator offset request for automatic lane following system 10 provides for a maximum second lateral offset distance 40 greater than the first lateral offset distance 22. If the second lateral offset distance 40 is selected, the automobile vehicle 12 moves in a second displacement path 42 until a new or third travel-line 44 outward of the second travel-line 26 is achieved which allows the automobile vehicle 12 to avoid the second object 38 while remaining within boundaries of the roadway 16.
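A lateral offset maneuver of this kind can be pictured as a smooth shift of the projected travel-line. The following sketch is purely illustrative and is not the system's actual path generation: the cosine blend, the 30-meter transition length, and the path representation are assumptions.

```python
import math

def offset_path(travel_line, offset_m, blend_length_m=30.0):
    # travel_line: list of (s, x, y, heading) samples along the current travel-line,
    # with s the distance traveled along the line from the vehicle position.
    shifted = []
    for s, x, y, heading in travel_line:
        frac = min(1.0, s / blend_length_m)
        lateral = offset_m * 0.5 * (1.0 - math.cos(math.pi * frac))  # smooth 0 -> offset_m
        # Apply the lateral shift perpendicular to the local heading (positive = left).
        shifted.append((x - lateral * math.sin(heading),
                        y + lateral * math.cos(heading)))
    return shifted
```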
Referring to FIG. 4 and again to FIGS. 2 and 3, to select the maximum second lateral offset distance 40 the operator taps or presses twice on the left-side surface 30 of the steering wheel 32 as follows. The operator contacts the left-side surface 30 of the steering wheel 32 using the pressure or tapping force 34 of the operator's left hand, designated 28a, acting normal to or on the left-side surface 30. The initiation signal identified above is generated by the tactile sensor 36 which is forwarded to the controller described in greater detail in reference to FIG. 7. The operator then repeats the contact with the left-side surface 30 of the steering wheel 32 using a second pressure or second tapping force 46 of the operator's left hand, designated 28b, which may act normal to or on the left-side surface 30. The time interval between the first contact and the second contact with the steering wheel 32 can vary and can range from approximately 0.1 ms up to approximately 1 second, which is a predetermined time interval range. The second contact generates a second initiation signal by the tactile sensor 36 which is also forwarded to the controller described in greater detail in reference to FIGS. 7 and 9. The second contact occurring within the predetermined time interval range of the first contact initiates the operator maximum offset request signal. Referring to FIG. 5 and again to FIGS. 1 through 4, the operator offset request for automatic lane following system 10 provides for the vehicle travel path to be returned to the first travel-line 18 from either the third travel-line 44 shown or from the second travel-line 26 shown in reference to FIG. 1. In the example shown in FIG. 5 return travel is by a third lateral offset distance 48 which is opposite to the maximum second lateral offset distance 40. Return travel is via a third displacement path 50 which is opposite to the second displacement path 42. Referring to FIG. 6 and again to FIGS. 1 through 5, the following procedure may be used to return the vehicle travel path to the first travel-line 18 from either the third travel-line 44 shown or from the second travel-line 26 shown in reference to FIG. 1. The operator taps the steering wheel 32 using the operator's left hand 28 and a right hand 52 of the operator at approximately the same time, or within a predetermined time interval. The tapping force 34 is thereby applied to the left-side surface 30 contacting the tactile sensor 36, and the right hand 52 contacts a right-side surface 54 of the steering wheel 32 using a tapping force 56 which may be equal to the tapping force 34. The right-hand contact is sensed by a second tactile sensor 58 of the steering wheel 32, similar to the tapping force sensed by the tactile sensor 36. The steering wheel 32 may also include a light bar 60 which illuminates to visually indicate a variety of operator information, including during operation of the operator offset request for automatic lane following system 10. Referring to FIG. 7 and again to FIGS. 1 through 6, the operator offset request for automatic lane following system 10 includes a method 62 to interface with an automobile vehicle operator and receive operator selected offset instructions to automatically follow the roadway 16. The method includes entering operator inputs 64 into multiple activation zones 66. The activation zones 66 include at least a first touch-sensing zone 68 defined as a forward-facing side of the steering wheel 32 furthest from the operator and closest to a front of the automobile vehicle 12.
The activation zones66also include at least a second touch-sensing zone70defined as the left-side surface30which includes a rear or operator-facing side of the steering wheel32ranging clockwise from a 6 o'clock position up to the light bar60on the steering wheel32facing opposite to the first touch-sensing zone68and furthest from the front of the automobile vehicle12. The activation zones66also include at least a third touch-sensing zone72defined as the right-side surface54which includes a rear or operator-facing side of the steering wheel32ranging counterclockwise from a 6 o'clock position up to the light bar60on the steering wheel32and furthest from the front of the automobile vehicle12. According to several aspects, the first touch-sensing zone68, the second touch-sensing zone70and the third touch-sensing zone72may include a touch or tactile sensor such as the tactile sensor36and the second tactile sensor58previously described in reference toFIGS.2and6. Signals generated by any of the tactile sensors of the first touch-sensing zone68, the second touch-sensing zone70and the third touch-sensing zone72are forwarded to a determination block74for performance of an operator offset determination step which provides any one of four optional functions including a use maximum offset setting76, a use operator controlled offset ramping setting78, a use vehicle current offset setting80and a reset to default setting82. Output from the determination block74as selected by the operator in the operator offset determination block74is forwarded to a mission planner84. An adjust lane offset modify signal86is generated by the mission planner84according to the selection made by the operator in the determination block74. The adjust lane offset modify signal86is forwarded to a unified lateral controller88which generates a lateral control signal90appropriate to perform one of the first displacement path24, the second displacement path42or the third displacement path50maneuvers. The lateral control signal90varies to generate a lane following torque command92to complete the transition of the automobile vehicle12to one of the second travel-line26, the third travel-line44or to return to the first travel-line18. Referring toFIG.8and again toFIGS.1through7, a graph94identifies an exemplary travel path of the automobile vehicle12including the third displacement path50blended at a merge location96with the original projected first travel-line18of the roadway16defining a blue line data path. The first travel-line18is extended using a target path98generated to blend into an extending map path100available for example using map data or global positioning system (GPS) data. Referring toFIG.9and again toFIGS.1through8, a controller102operating the operator offset request for automatic lane following system10includes a position and heading control unit104which communicates with a curvature control unit106. The curvature control unit106communicates with a steering angle and torque control unit108. The position and heading control unit104receives a vehicle position signal110, a vehicle heading112, and a vehicle path curvature114, and incorporates these with predetermined safety constraints116and a vehicle speed and path signal118to generate a curvature command signal120which is communicated to the curvature control unit106. The curvature control unit106receives the curvature command signal120as well as a measured vehicle curvature signal122. 
The curvature control unit106incorporates these signals with the predetermined safety constraints116and the vehicle speed and path signal118to generate a steering angle command signal124. The steering angle and torque control unit108receives the steering angle command signal124as well as a steering angle and rate signal126and an operator applied torque signal128. The steering angle and torque control unit108incorporates these signals with the vehicle speed and path signal118and a safety and feel constraints signal130to generate a steering torque command signal132. Referring toFIG.10and again toFIGS.1through9, according to further aspects the operator offset request for automatic lane following system10can be initiated by actuation of an operator input setting system134which includes at least one and according to several aspects multiple switches selectively depressed by the vehicle operator in lieu of tactile sensors. The switches of the operator input setting system134may include an actuation switch136, a first directional selection switch138actuated for example to select a left-hand vehicle position change, and a second directional selection switch140actuated for example to select a right-hand vehicle position change. According to several aspects, the operator may also initiate the operator input setting system134by pressing the actuation switch136followed by manual rotation of the steering wheel32to direct the automobile vehicle12in an operator selected direction. According to several aspects, the operator input setting system134may be located on an operator facing surface of the steering wheel32or may be positioned on a dashboard surface of the automobile vehicle. Upon receipt of an operator's input command, one of multiple command interpretations144are conducted. This is followed by system election of one of multiple control modes146. One of multiple execution modes148is then performed. In an exemplary operation, the operator initiates the operator input setting system134by pressing the actuation switch136a single time. The command interpretation144of the initial pressing of the actuation switch136is an allowance150for the operator to set the off-set distance. In the control mode146for this command an allowance signal152of operator control is generated which may be limited to apply control torque to override the request if a lane crossing is deemed to be imminent. The operator then manually rotates the steering wheel32in a selected direction of offset driving, for example in a counterclockwise direction153shown. When the operator selected offset distance is achieved, the steering wheel32is returned to the default center position and the operator again presses the actuation switch136a single time. The command interpretation144of this action is generation of an achievement signal154signifying the operator's selected offset position for a left-hand offset distance is achieved. The result in the control mode146for this command is generation of a shift signal156to change the current vehicle path or position to the selected path of travel. One of the execution modes148is then performed for example to identify the automobile vehicle12has laterally displaced away from the first travel-line18by the first lateral offset distance22in the first displacement path24until the new or second travel-line26is now achieved. If the operator wishes to cancel offset driving and return to the first travel-line18, the operator depresses the actuation switch136twice. 
The command interpretation144of double-pressing the actuation switch136is the operator is requesting a return offset158to the default first travel-line18travel path. The election made in the control mode146for this command is an application command signal160to apply assist torque. One of the execution modes148is then performed for example to direct the automobile vehicle12to laterally displace away from the second travel-line26by a lateral offset distance to move the automobile vehicle12in the exemplary third displacement path50until the first travel-line18travel path is achieved. Referring toFIG.11and again toFIGS.1through10, according to other aspects, in addition to the operator input setting system27and the operator input setting system134, according to further aspects the operator offset request for automatic lane following system10can be initiated by actuation of an operator input setting system162which may use displacement of a turn-signal arm164to generate signals indicating the operator's selection of an offset distance. In an exemplary operation of the turn-signal arm164, in an input operation166the operator initiates the operator input setting system134by a tapping input168displacing the turn-signal arm164for example in a downward direction170. In an input processing step172a processing controller area network (CAN) message174is forwarded to the controller102, defining an on-board computer having hardware such as a printed circuit board encoded with software directing the automobile vehicle12how to operate. A command interpretation176of the initial pressing of the actuation switch136is an offset command178to set an offset distance, for example a vehicle left-hand offset distance. A situation awareness180is requested, for example a signal182indicating no side threat on the left side of the automobile vehicle12is received. If the situation awareness180indicates the automobile vehicle12can move to the left, a notification of an offset left active condition186being present is illuminated. One of multiple execution modes184similar to the execution modes148is then performed for example to identify the automobile vehicle12is laterally displacing away from the first travel-line18in the first displacement path24until the new or second travel-line26is reached. If the operator wishes to cancel offset driving and return to the first travel-line18, the operator depresses the turn-signal arm164twice in an upward direction188opposite to the downward direction170. The command interpretation of double-pressing the turn-signal arm164twice is a return offset command for return to the default first travel-line18travel path. The election made in the control mode146for this command is an application command to apply assist torque. One of the execution modes184is then performed for example to direct the automobile vehicle12to laterally displace away from the second travel-line26by a lateral offset distance to move the automobile vehicle12in the third displacement path50until the first travel-line18travel path is achieved. The operator input setting systems27,134,162of the operator offset request for automatic lane following system10temporarily save and hold an operator identified or selected offset distance while automated lane centering features are controlling. The operator input setting systems also adjust the offset distance in response to operator demands while automatic lane centering features are controlling. 
The operator input setting systems also propagate the selected offset distance to the controller of the control system and further communicate the status of lane centering with offset to the operator through human machine interface (HMI) communications. An operator offset request for automatic lane following system10and method for operation of the present disclosure offers several advantages. These include provision of an intuitive interface allowing a vehicle operator to set and reset a vehicle offset for automated driving applications. An algorithm processes the operator inputs and allows an asymptotically infinite number of offsets within operational constraints. The present system and method provides an interface with the operator through steering wheel touch, manual switches, turn-signal arm and other mechanisms to receive the operator's requested offset for automated driving. The present system and method interprets, executes and communicates the intentional offset and communicates the status of the operator requested offset functionality through human-machine-interface notifications. The description of the present disclosure is merely exemplary in nature and variations that do not depart from the gist of the present disclosure are intended to be within the scope of the present disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the present disclosure.
11858510
DETAILED DESCRIPTION In a case where a touring assist function is implemented by an existing technique while a vehicle is passing through a construction area in which traffic cones or construction signboards are placed on a lane dividing line or while the vehicle is traveling on a lane separated from an oncoming lane by a lane dividing line on which poles are placed rather than by a center divider, the touring assist function causes the vehicle to travel along a travel course determined on the basis of the lane dividing line. This can cause a driver to feel scared when the vehicle passes by the poles, traffic cones, and construction signboards. It is desirable to provide a vehicle drive assist apparatus that helps prevent a driver from feeling scared while the driver is driving the vehicle. Some example embodiments of the technology will now be described in detail with reference to the accompanying drawings. Note that the following description is directed to illustrative examples of the technology and not to be construed as limiting to the technology. Factors including, without limitation, numerical values, shapes, materials, components, positions of the components, and how the components are coupled to each other are illustrative only and not to be construed as limiting to the technology. Further, elements in the following example embodiments that are not recited in a most-generic independent claim of the technology are optional and may be provided on an as-needed basis. The drawings are schematic and are not intended to be drawn to scale. Throughout the present specification and the drawings, elements having substantially the same function and configuration are denoted with the same numerals to avoid any redundant description. With reference to FIG. 1, an own vehicle 1 may include a drive assist apparatus 2. The own vehicle 1 may be an automobile, for example. The drive assist apparatus 2 includes a stereo camera 3, a stereo image recognizer 4, and a processor 5, for example. In one embodiment, the stereo camera 3 may serve as an “external environment recognizer”. In one embodiment, the stereo image recognizer 4 may serve as a “traveling environment recognizer”. In one embodiment, the processor 5 may serve as a “processor”. The own vehicle 1 may include a vehicle speed sensor 11 that detects an own vehicle speed, a yaw rate sensor 12 that detects a yaw rate, a main switch 13 that performs on-off operations of drive assist control functions, a steering angle sensor 14 that is disposed facing a steering shaft coupled to a steering wheel to detect a steering angle, and an accelerator position sensor 15 that detects the stepping quantity of an accelerator pedal (i.e., an accelerator position) inputted by a driver. The stereo camera 3 may include a pair of cameras (e.g., a right camera and a left camera) including a stereo optical system, such as a solid state imaging device. Examples of the solid state imaging device may include a charge coupled device (CCD) and a complementary metal oxide semiconductor (CMOS). The left and right cameras may be installed on a front portion of the vehicle compartment ceiling at a predetermined distance. These cameras may capture images of a target object present outside the own vehicle 1 from different points of view to generate stereo images, and output the data on the stereo images to the stereo image recognizer 4.
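Such a stereo pair supports distance measurement by triangulation. As a hedged aside (the recognizer's own matching procedure is described below), the standard pinhole-stereo relation converts a horizontal disparity between the reference and comparative images into a distance, given the focal length and the camera baseline; the numeric parameters here are illustrative only.

```python
def disparity_to_distance(disparity_px, focal_length_px, baseline_m):
    # Standard stereo relation: Z = f * B / d (valid for d > 0).
    if disparity_px <= 0:
        return float("inf")
    return focal_length_px * baseline_m / disparity_px

# Example with assumed parameters: 1400-pixel focal length, 0.35 m baseline.
print(disparity_to_distance(24.5, 1400.0, 0.35))   # 20.0 m
```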
Hereinafter, one of the stereo images (e.g., a right image) may be referred to as a reference image, and the other of the stereo images (e.g., a left image) may be referred to as a comparative image. The stereo image recognizer4may divide the reference image into small sections each including 4×4 pixels, and detect respective small sections of the comparative image by comparing the luminance or the color pattern of each small section of the reference image with the luminance or the color pattern of each small section of the comparative image. In this way, the stereo image recognizer4may obtain a distance distribution over the entire reference image. Further, the stereo image recognizer4may detect a luminance difference between each adjacent pixels in the reference image, and extract pixels having a luminance difference greater than a threshold as an edge. The stereo image recognizer4may assign distance data to the extracted pixels (edge). In this way, the stereo image recognizer4may generate a distance image in which the edges each including distance data are distributed. On the basis of the generated distance image, the stereo image recognizer4may recognize lane dividing lines LL and LR (seeFIG.2), road edges, sidewalls, and three-dimensional (3D) objects that are present in front of the own vehicle1. The stereo image recognizer4may assign different IDs to the recognized data items and monitor these data items in sequential frames on an ID basis. For example, the stereo image recognizer4may store the data on road edges, sidewalls, and static 3D objects recognized from the three-dimensional (3D) image data obtained by the stereo camera3into a later-described two-dimensional (2D) grid map constructed in a predetermined region of an own-vehicle coordinate system. Note that the grid map may be constructed in a region having a width of 6 meters along the lateral width of a front portion of the own vehicle1and a length of 40 meters from the front of the own vehicle1, for example. Herein, the “lane dividing lines LL and LR” (seeFIG.2) may be used as a generic term that includes, for example, a single line or a multiple line extending on a road to define a travel lane on which the own vehicle1is traveling (hereinafter referred to as an own-vehicle travel lane). For example, the lane dividing line LL or LR may be a double line that includes a lane dividing line and a line-of-sight guide line lying inside the lane dividing line. These lines may be of any type, such as solid lines or broken lines, and may be of any color such as white or yellow lighter than the color of the road surface or any color deeper than the color of the road surface. If a double line is actually recognized on the road in the recognition of the lane dividing line LL or Lr according to the example embodiment, the double line may be approximated to a left or right single line in a straight or curved form before being recognized. As illustrated inFIG.3, the stereo image recognizer4may recognize the lane dividing line LL or LR by detecting a dividing-line start point P on each search line Jn in a dividing-line search region AL or AR on the basis of a change in luminance on the search line Jn. The dividing-line search regions AL and AR may be set on the image on the basis of the previous processes. The search lines Jn may be set along a horizontal direction (i.e., along the width of the own vehicle1). 
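The block matching and edge extraction described above can be illustrated with a brief sketch. The following Python code is a simplified illustration rather than the recognizer's actual implementation: the sum-of-absolute-differences matching cost, the luminance threshold of 20, and the focal length and camera baseline used to convert disparity to distance are assumptions introduced only for this example.

```python
import numpy as np

def disparity_map(reference, comparative, block=4, max_disp=64):
    """Estimate one coarse disparity per 4x4 block of the reference image by
    comparing its luminance pattern with horizontally shifted blocks of the
    comparative image (sum of absolute differences as the matching cost)."""
    h, w = reference.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref_blk = reference[y:y + block, x:x + block].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cmp_blk = comparative[y:y + block, x - d:x - d + block].astype(np.int32)
                cost = int(np.abs(ref_blk - cmp_blk).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
    return disp

def edge_distance_image(reference, disp, block=4, luminance_threshold=20,
                        focal_px=800.0, baseline_m=0.35):
    """Mark pixels whose horizontal luminance difference exceeds the threshold
    as edges and attach a distance (triangulated from the block disparity) to
    each edge pixel, yielding a distance image of the kind described above."""
    grad = np.abs(np.diff(reference.astype(np.int32), axis=1))
    edges = grad > luminance_threshold
    distance = np.full(edges.shape, np.nan)
    for y, x in zip(*np.nonzero(edges)):
        d = disp[min(y // block, disp.shape[0] - 1),
                 min(x // block, disp.shape[1] - 1)]
        if d > 0:
            distance[y, x] = focal_px * baseline_m / d  # metres (pinhole model)
    return distance
```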
For example, the stereo image recognizer4may detect the dividing-line start points P, which are edge points of the lane dividing lines, in the left and right dividing-line search regions AL and AR set in the reference image by detecting a change in luminance value between pixels along each search line Jn from inward to outward in the width direction of the own vehicle1. As described above, the stereo image recognizer4according to the example embodiment may serve as a front environment recognizer that recognizes the environment in front of the own vehicle1, a moving 3D object detector, a static 3D object detector, an edge detector (an edge searching unit), an approximate line calculator, a dividing line searching unit, a dividing line calculator, and a detection region setting unit. The information on the traveling environment in front of the own vehicle1recognized by the stereo image recognizer4may be sent to the processor5. Additionally, traveling information on the own vehicle1, such as the vehicle speed detected by the vehicle speed sensor11or the yaw rate detected by the yaw rate sensor12, and driver's operation input information, such as an operational signal outputted from the main switch13, the steering angle detected by the steering angle sensor14, or the accelerator position detected by the accelerator position sensor15, may be transmitted to the processor5. When the driver provides an instruction to execute an adaptive cruise control (ACC) function, which is one of the touring assist functions, by operating the main switch13, for example, the processor5may read the traveling direction of a preceding vehicle recognized by the stereo image recognizer4and determine whether a preceding vehicle to follow is traveling on the own-vehicle travel lane. In a case where no preceding vehicle to follow is detected as a result of the determination, constant-speed traveling control may be executed to keep the vehicle speed of the own vehicle1at the set vehicle speed through switching control of a throttle valve16(engine output control). In contrast, in a case where a preceding vehicle to follow is detected and where the preceding vehicle is traveling at the set vehicle speed or less, following traveling control may be executed to cause the own vehicle1to travel following the preceding vehicle while the inter-vehicular distance between the own vehicle1and the preceding vehicle is converged to a target inter-vehicular distance. During the following traveling control, the processor5may converge the inter-vehicular distance between the own vehicle1and the preceding vehicle to the target inter-vehicular distance through the switching control of the throttle valve16. In a case where the preceding vehicle rapidly decelerates and where it is not determined that the own vehicle1is sufficiently decelerated by the switching control of the throttle valve16alone, the processor5may converge the inter-vehicular distance to the target inter-vehicular distance by pressure control of the liquid outputted from an active booster17(automatic brake intervention control) together with the switching control of the throttle valve16. 
When the driver provides an instruction to execute a lane-keep assist function, which is one of the touring assist functions, by operating the main switch13, for example, the processor5may set warning determination lines on the basis of the left and right lane dividing lines defining the own-vehicle travel lane, and estimate an own-vehicle travel course on the basis of the vehicle speed and the yaw rate of the own vehicle1, for example. For instance, when determining that the own-vehicle travel course runs across either one of the left and right warning determination lines within a predetermined distance (e.g., 10 to 16 meters) set in front of the own vehicle1, the processor5may determine that the own vehicle1is likely to deviate from the current own-vehicle travel lane, and issue a warning against the deviation from the lane. When the driver provides an instruction to execute an active lane keep centering (ALKC) function, which is one of the touring assist functions, by operating the main switch13, the processor5may set a target travel course in the middle between the left and right lane dividing lines LL and LR defining the own-vehicle travel lane, for example. Thereafter, the processor5may perform traveling control along the target travel course by controlling a steering mechanism of the own vehicle1. Now, the process performed by the stereo image recognizer4to detect and recognize the lane dividing lines LL and LR may be described with reference to the flowchart illustrated inFIG.4and the chart illustrated inFIG.5. In Step S1, the stereo image recognizer4may read the left and right dividing-line search regions AL and AR set in a previous frame. In Step S2, the stereo image recognizer4may detect the edges on the search lines Jn on the lane dividing lines LL and LR, for example (seeFIG.3). For instance, the stereo image recognizer4may detect the dividing-line start points P in the left and right dividing-line search regions AL and AR from inward to outward in the width direction of the own vehicle1with respect to an image center line of the reference image or the own-vehicle traveling direction estimated on the basis of the steering angle, for example. For more detail, when searching each search line Jn for the edge from inward to outward in the width direction of the own vehicle1, the stereo image recognizer4may detect a potential edge point PS and recognize the potential edge point PS as the dividing-line start point P. As illustrated inFIG.5, the potential edge point PS may be a first positive edge point where the luminance of a pixel provided relatively outward in the width direction of the own vehicle1is greater than the luminance of an adjacent pixel provided relatively inward in the width direction of the own vehicle1and where the derivative of the luminance, which indicates the amount of change in the luminance, takes a positive value equal to or greater than a predetermined threshold (luminance threshold). The stereo image recognizer4may recognize the lane dividing lines LL and LR through the steps described above. The lane dividing lines LL and LR may be white lines or yellow lines, for example. When detecting a point where the derivative of the luminance is less than the predetermined threshold, the stereo image recognizer4may remove the point from the candidate points to be recognized as the potential edge point PS and remove the point from the candidate points to be recognized as the dividing-line start point P of the lane dividing line LL or LR. 
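A compact sketch of the dividing-line start-point search on a single search line Jn may help visualize Step S2. The array-index interface and the luminance threshold value of 15 are assumptions for illustration; the actual search regions and thresholds are those set on the basis of the previous frame as described above.

```python
def find_dividing_line_start_point(luminance, start, end, luminance_threshold=15):
    """Scan one horizontal search line Jn from inward (start index) to outward
    (end index) in the vehicle-width direction and return the first positive
    edge point: the first pixel whose luminance exceeds that of its inner
    neighbour by at least the threshold (the potential edge point PS, taken as
    the dividing-line start point P).  Points below the threshold are dropped
    from the candidates, as in the description above."""
    step = 1 if end >= start else -1
    for x in range(start + step, end + step, step):
        derivative = int(luminance[x]) - int(luminance[x - step])
        if derivative >= luminance_threshold:
            return x
    return None
```

For the right dividing-line search region AR, for instance, the scan would start near the image center line and proceed outward toward the right edge of the region; for AL it would proceed in the opposite direction.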
The stereo image recognizer4may determine that the lane dividing line LL or LR has not been recognized in a case where the distance Lh between the potential edge point PS, which is the edge start point where the derivative of the luminance takes a positive value, and the edge end point where the derivative of the luminance takes a negative value is equal or less than a predetermined distance (a predetermined line-width threshold). The predetermined distance may be 7 centimeters, for example. Thereafter, the stereo image recognizer4may combine the potential edge points PS or the dividing-line start points P into a group, and calculate the lane dividing line on the basis of the selected point group (Step S3). For example, the stereo image recognizer4may perform Hough transformation of the selected point group to calculate the lane dividing line LL or LR in Step S3, and recognize the lane dividing line LL or LR as an approximate line Ll or Lr in a linear form (Step S4). On the basis of the calculated lane dividing lines LL and LR (approximate lines Ll and Lr), the stereo image recognizer4may set the dividing-line search regions AL and AR used in the next frame (Step S5), and detect the lane dividing lines LL and LR as described above. In a case where an own-vehicle travel lane R on which the own vehicle1is traveling is under construction or adjacent to an oncoming lane without a center divider therebetween, the own vehicle1may pass through an area in which static (fixed) 3D objects, such as traffic cones21or poles22, are placed on the lane dividing line LL or LR, as illustrated inFIGS.6and7, in some cases. In other cases, the traffic cones21may be placed protruding from the lane dividing line LR toward the own-vehicle travel lane R, as illustrated inFIG.8. Note thatFIGS.6to8illustrate examples in which the static 3D objects are placed on the right lane dividing line LR. The stereo camera3may obtain 3D image data of the traffic cones21and the poles22and output the image data of these static 3D objects to the stereo image recognizer4. As illustrated inFIG.9, for example, the image data may indicate that the static 3D object (e.g., the traffic cone21in this example) recognized as an object has a height of 30 to 40 centimeters. As illustrated inFIGS.10to12, the stereo image recognizer4may generate a detection object model on the basis of the image data. For example, the stereo image recognizer4may generate a rectangular frame (also referred to as a bounding box or window) W1surrounding the shape of the traffic cone21, a rectangular frame W2surrounding the shape of the pole22, or a rectangular frame W3surrounding the shape of a construction signboard23. Thereafter, the stereo image recognizer4may detect the position, distance, width, and height (H) of the static 3D object recognized by tagging or classifying the detected object as a static 3D object and labeling or localizing the coordinate data of the static 3D object to the own vehicle1. For example, the processor5may construct a 2D grid map GM in a predetermined region of an own-vehicle coordinate system extending along the road surface in front of the own vehicle1on the basis of the image data received from the stereo image recognizer4. The predetermined region may have a width of 6 meters in the width direction of the own vehicle1and a length of 40 meters from the front of the own vehicle1, for example. 
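The line-width plausibility check and the Hough-transform line fit of Steps S3 and S4 can be sketched as follows. This is an illustrative reconstruction only; the bin resolutions (5 cm in rho, 1 degree in theta) and the coordinate conventions are assumptions, and the recognizer's real accumulator and grouping logic are not disclosed here.

```python
import numpy as np

def plausible_line_width(edge_start_x_m, edge_end_x_m, line_width_threshold_m=0.07):
    """Keep a candidate only if the distance Lh between the positive edge
    (edge start point) and the negative edge (edge end point) exceeds the
    line-width threshold (7 cm here), mirroring the rejection rule above."""
    return abs(edge_end_x_m - edge_start_x_m) > line_width_threshold_m

def hough_line_fit(points, rho_res=0.05, theta_res=np.deg2rad(1.0)):
    """Fit a single straight approximate line to the grouped dividing-line
    start points with a coarse Hough transform: every point votes for the
    (rho, theta) bins of all lines rho = x*cos(theta) + y*sin(theta) passing
    through it, and the most-voted bin is returned."""
    pts = np.asarray(points, dtype=float)
    thetas = np.arange(0.0, np.pi, theta_res)
    rho_max = float(np.hypot(np.abs(pts[:, 0]).max(), np.abs(pts[:, 1]).max())) + rho_res
    rhos = np.arange(-rho_max, rho_max + rho_res, rho_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=int)
    for x, y in pts:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.clip(np.round((r + rho_max) / rho_res).astype(int), 0, len(rhos) - 1)
        acc[idx, np.arange(len(thetas))] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return rhos[i], thetas[j]   # parameters of the approximate line Ll or Lr
```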
Thereafter, the processor5may store vote grid data of the detected static 3D object into corresponding grid areas of the grid map GM on the basis of the information on the detected static 3D object received from the stereo image recognizer4(refer toFIGS.13to15). The grid areas may be divided sections of the grid map GM and each have 10 centimeter sides, for example. Further, the processor5may constantly monitor whether a static 3D object has been continuously detected on the lane dividing line LL or LR in the grid map GM for a predetermined time or longer and whether vote casting of the data on the detected 3D object has been performed for any of the 10-centimeter grid areas of the grid map GM for the predetermined time or longer. In the example illustrated inFIG.13, the traffic cones21may be placed on the right lane dividing line LR at an interval in the traveling direction of the own vehicle1. In this example, all the traffic cones21are placed on the lane dividing line LR without protruding from the inner side of the lane dividing line LR toward the middle of the own-vehicle travel lane R (i.e., to the own vehicle1). In the example illustrated inFIG.14, the poles22may be placed on the right lane dividing line LR at an interval in the traveling direction of the own vehicle1. All the poles22may be placed on the lane dividing line LR without protruding from the inner side of the lane dividing line LR toward the middle of the own-vehicle travel lane R (i.e., to the own vehicle1). In the example illustrated inFIG.15, the traffic cones21may be placed on the right lane dividing line LR at an interval in the traveling direction of the own vehicle1. Some of the traffic cones21are placed protruding from the inner side of the lane dividing line LR toward the middle of the own-vehicle travel lane R (i.e., to the own vehicle1). With reference to the flowchart illustrated inFIG.16, exemplary control of the own vehicle1will now be described that is performed when the own vehicle1travels through an area in which static 3D objects are placed on the lane dividing line LL or LR while the ALKC function is executed by the processor5of the drive assist apparatus2in the own vehicle1. While the ALKC function is executed, the processor5may first construct the grid map GM in a predetermined region lying in front of the own vehicle1on the basis of the image data received from the stereo image recognizer4. The predetermined region may have a width of 6 meters in the width direction of the own vehicle1and a length of 40 meters from the front of the own vehicle1, for example (Step S11). Alternatively, the grid map GM may be constructed by the stereo image recognizer4, and the grid map GM may be read from the stereo image recognizer4by the processor5. Thereafter, the processor5may determine whether static 3D objects, such as the traffic cones21, the poles22, or the construction signboards23, have been detected on the lane dividing line LL or LR on the basis of the data on the static 3D objects detected by the stereo image recognizer4(Step S12). If the static objects, such as the traffic cones21, the poles22, and the construction signboards23, have not been detected on the lane dividing line LL or LR (Step S12: NO), the processor5may cause the process to exit the routine and return to Step S11. 
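The vote casting into the grid map GM can be pictured with a minimal sketch, assuming a grid of 10-centimeter areas over a region 6 meters wide and 40 meters long ahead of the vehicle, as in the description. The frame-count threshold standing in for the predetermined-time check is an assumption made only for this example.

```python
import numpy as np

class GridMap:
    """2D grid in the own-vehicle coordinate system: x forward over 40 m,
    y lateral over a 6 m width, divided into 0.10 m grid areas."""
    def __init__(self, length_m=40.0, width_m=6.0, cell_m=0.10):
        self.cell = cell_m
        self.width_m = width_m
        self.votes = np.zeros((int(length_m / cell_m), int(width_m / cell_m)), dtype=int)

    def cast_vote(self, x_m, y_m):
        """Plot vote grid data for a static 3D object detected at (x_m, y_m)."""
        row = int(x_m / self.cell)
        col = int((y_m + self.width_m / 2.0) / self.cell)
        if 0 <= row < self.votes.shape[0] and 0 <= col < self.votes.shape[1]:
            self.votes[row, col] += 1

    def continuously_voted(self, min_frames=10):
        """Simplified stand-in for the predetermined-time check: True once any
        grid area has accumulated votes from at least `min_frames` frames."""
        return bool((self.votes >= min_frames).any())
```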
In contrast, if the static objects, such as the traffic cones21, the poles22, and the construction signboards23, have been detected on the lane dividing line LL or LR (Step S12: YES), the processor5may cast a vote for any of the grid areas of the grid map GM corresponding to the detected static object (i.e., may plot vote grid data in the corresponding grid areas) on the basis of the data on the detected static 3D object (Step S13). For example, as illustrated inFIGS.13to15, the processor5may cast a vote for (plot the vote grid data in) the grid areas of the grid map GM corresponding to the position of the static object on the basis of the data on the static 3D objects, such as the traffic cones21or the poles22. Thereafter, the processor5may determine whether the data on the static 3D object detected by the stereo image recognizer4has been continuously inputted for a predetermined time or longer, and whether vote casting of the static object has been continuously performed for the grid areas of the grid map GM for the predetermined time or longer (Step S14). If the vote casting of the static object has not been continuously performed for the grid areas of the grid map GM for the predetermined time or longer (Step S14: NO), the processor5may cause the process to exit the routine and return to Step S11. If the vote casting of the static object has been continuously performed for the grid areas of the grid map GM for the predetermined time or longer (Step S14: YES), the processor5may identify or select the voted grid closest to the middle of the own-vehicle travel lane R in the width direction of the own vehicle1, from the voted grids (Step S15). For example, the processor5may identify or select the voted grid closest to the own vehicle1traveling on the own-vehicle travel lane R when viewed in the width direction of the own vehicle1. Thereafter, the processor5may determine whether the identified voted grid is positioned closer to the middle of the own-vehicle travel lane R than the approximate line Ll or Lr recognized by the stereo image recognizer4is when viewed in the width direction of the own vehicle1(i.e., in the lateral direction) (Step S16). That is, the processor5may determine whether the identified voted grid protrudes toward the own-vehicle travel lane R from the approximate line Ll or Lr recognized as the lane dividing line LL or LR by the stereo image recognizer4. If the identified voted grid is not positioned closer to the middle of the own-vehicle travel lane R than the approximate line Ll or Lr is (Step S16: NO), the processor5may determine a first predetermined correction amount L on the basis of the vehicle speed of the own vehicle1and the distance from the approximate line Ll or Lr to the own vehicle1(Step S17). In the example illustrated inFIG.17, for example, the traffic cones21serving as the static 3D objects are placed on the right lane dividing line LR without protruding from the right lane dividing line LR toward the own-vehicle travel lane R. In this example, the processor5may determine the first correction amount L by reading the first correction amount L from a correction map set on the basis of the vehicle speed of the own vehicle1and the distance from the lane dividing line LL or LR to the own vehicle1. In Step S18, the processor5may correct the lateral position of the approximate line Ll or Lr by adding the first correction amount L to the approximate line Ll or Lr.
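The selection of the voted grid closest to the lane middle (Step S15) and the reading of the first correction amount L from a correction map (Step S17) could look as follows, reusing the GridMap sketch given earlier. The lateral coordinate convention (positive to the left of the vehicle center) and the speed/distance table values are placeholders chosen for the example, not values from this description.

```python
import numpy as np

def innermost_voted_cell_y(grid, objects_on="right"):
    """Return the lateral position (m, measured to the left of the vehicle
    centre) of the voted grid closest to the middle of the own-vehicle travel
    lane.  For objects on the right dividing line that is the largest (least
    negative) y among the voted grid areas; for the left line, the smallest."""
    rows, cols = np.nonzero(grid.votes)
    if cols.size == 0:
        return None
    y = (cols + 0.5) * grid.cell - grid.width_m / 2.0
    return float(y.max() if objects_on == "right" else y.min())

def first_correction_amount(vehicle_speed_kmh, distance_to_line_m):
    """Read the first predetermined correction amount L from a correction map
    indexed by own-vehicle speed and by the distance from the approximate line
    to the own vehicle.  The table values are placeholders, not patent values."""
    speed_limits_kmh = [30.0, 60.0, 90.0, float("inf")]
    base_l_m = [0.20, 0.30, 0.40, 0.50]
    for limit, l in zip(speed_limits_kmh, base_l_m):
        if vehicle_speed_kmh <= limit:
            # reduce the correction when the line is already far from the vehicle
            return max(0.0, l - 0.05 * max(0.0, distance_to_line_m - 1.5))
    return base_l_m[-1]
```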
For example, the processor5may change the approximate line Ll or Lr (e.g., the approximate line Lr of the right lane dividing line LR on which the traffic cones21are placed in the example illustrated inFIG.17) to a virtual approximate line (e.g., a virtual approximate line Lrc in the example illustrated inFIG.17) by shifting the approximate line Ll or Lr toward the middle of the own-vehicle travel lane R in the lateral direction by the first correction amount L. In Step S19, the processor5may determine whether conditions for ALKC are satisfied. In this step, the processor5may determine whether the conditions are satisfied in terms of the vehicle speed of the own vehicle1, the distance between the approximate line Ll or Lr and the corrected approximate line (e.g., the distance between the approximate line Ll and the virtual approximate line Lrc in the example illustrated inFIG.17), the distance from a sidewall to the own vehicle1, and the presence or absence of a road edge or a road shoulder. If the conditions for ALKC are satisfied (Step S19: YES), the processor5may change the target travel course (Step S20) and cause the process to exit the routine and return to Step S11. In the example illustrated inFIG.18, the processor5may change a target travel course C1set in the middle between the left and right approximate lines Ll and Lr before the correction to a target travel course C2set in the middle between the left and right approximate lines Ll and Lr (Lrc) after the correction to execute the ALKC control. When the distance between the left approximate line Ll and the virtual approximate line Lrc is shortened as described above, the target travel course C2may be newly determined in the middle between the approximate line Ll and the virtual approximate line Lrc by shifting the target travel course C1originally set in the lateral direction by a predetermined distance ΔW, as illustrated inFIG.18. The processor5may then perform the ALKC control of the own vehicle1along the target travel course C2obtained through the correction. That is, the processor5may change the target travel course C1set in the middle between the approximate lines Ll and Lr before the correction to the target travel course C2set in the middle of the approximate lines Ll and Lrc after the correction and execute the ALKC control. In contrast, if the identified voted grid is positioned closer to the middle of the own-vehicle travel lane R than the approximate line Ll or Lr is when viewed in the width direction of the own vehicle1(Step S16: YES), the processor5may calculate the distance from the approximate line Ll or Lr on which the static objects are placed to the identified voted grid (Step S21). In the examples illustrated inFIGS.15and19, for example, some of the traffic cones21serving as the static 3D objects are placed protruding from the right lane dividing line LR toward the own-vehicle travel lane R. In this example, the processor5may calculate a detection distance α, which is the amount of protrusion of the static 3D object or the traffic cone21protruding the most toward the own-vehicle travel lane R from the right approximate line Lr of the right lane dividing line LR. In Step S22, the processor5may determine a second correction amount L+α by adding the calculated detection distance (the amount of protrusion) α to the first predetermined correction amount L set on the basis of the vehicle speed of the own vehicle1and the distance from the approximate line Ll or Lr to the own vehicle1, for example.
In Step S23, the processor5may correct the lateral position of the approximate line Ll or Lr by adding the second correction amount L+α. For example, the processor5may change the approximate line Ll or Lr (e.g., the right approximate line Lr in the example illustrated inFIG.19) to a virtual approximate line (e.g., a virtual approximate line Lrc+α in the example illustrated inFIG.19) by shifting the approximate line Ll or Lr toward the middle of the own-vehicle travel lane R in the lateral direction by the second correction amount L+α. The processor5may then determine in Step S19whether the conditions for ALKC are satisfied. If the conditions for ALKC are satisfied (Step S19: YES), the processor5may change the target travel course in Step S20and cause the process to exit the routine and return to Step S11. In the example illustrated inFIG.18, the processor5may change the target travel course C1set in the middle between the approximate lines Ll and Lr before the correction to a target travel course C2+α set in the middle between the approximate lines Ll and Lr after the correction to execute the ALKC control. When the distance between the approximate lines Ll and Lr is shortened as described above, the target travel course C2+α may be determined by shifting the target travel course C1originally set in the lateral direction by a predetermined distance ΔW+α. The processor5may perform the ALKC control of the own vehicle1along the target travel course C2. If the conditions for ALKC are not satisfied (Step S19: NO), the processor5may cancel the ALKC control (Step S24). Although the traffic cones21are placed on the right lane dividing line LR in the examples described above, the same control may be executed even when the poles22, the construction signboards23, or the like are detected. In a case where these static 3D objects are placed on the left lane dividing line LL, the target travel course may be shifted to the right. As described above, even when the own vehicle1passes through the area in which the static 3D objects, such as the traffic cones21, the poles22, and the construction signboards23, are placed on or near the lane dividing line LL or LR of the own-vehicle travel lane R while the ALKC control, which is one of the touring assist functions of the own vehicle1, is executed, the processor5of the drive assist apparatus2in the own vehicle1controls the lateral position of the own vehicle1to cause the own vehicle1to travel distant from the static 3D objects. This helps prevent the driver from feeling scared while the driver is driving the own vehicle1. Further, even in a case where the static 3D objects are placed protruding from the lane dividing line LL or LR toward the own-vehicle travel lane R on which the own vehicle1is traveling, the processor5controls the own vehicle1so that the own vehicle1travels along the target travel course C2+α set on the basis of the detection distance α, which is the amount of protrusion. This helps prevent the driver from feeling scared. Note that the processor5may recognize the traffic cones21, the poles22, and the construction signboards23that are placed within a predetermined region covering a region outside the lane dividing line LL or LR as the static 3D objects. For example, the predetermined region may include a region of another lane or a region about 50 centimeters outside the lane dividing line LL or LR. 
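Both correction branches, the first correction amount L when no voted grid protrudes past the approximate line and the second correction amount L+α when it does, reduce to the same lateral-shift computation. The sketch below uses a lateral coordinate measured to the left of the vehicle center; this convention and the example values are assumptions made only for illustration.

```python
def corrected_course(left_line_y, right_line_y, correction_l, protrusion_alpha=0.0,
                     objects_on="right"):
    """Lateral positions are measured to the left of the vehicle centre, so the
    right dividing line has the smaller y.  The approximate line carrying the
    static objects is shifted toward the lane middle by L (plus the protrusion
    amount alpha when the innermost voted grid protrudes past the line); the
    target travel course is then re-set midway between the remaining line and
    the shifted virtual line."""
    old_course = (left_line_y + right_line_y) / 2.0          # target course C1
    shift = correction_l + protrusion_alpha                  # L or L + alpha
    if objects_on == "right":
        virtual_right = right_line_y + shift                 # Lrc or Lrc + alpha
        new_course = (left_line_y + virtual_right) / 2.0     # C2 or C2 + alpha
    else:
        virtual_left = left_line_y - shift
        new_course = (virtual_left + right_line_y) / 2.0
    delta_w = new_course - old_course                        # lateral shift (Delta W)
    return new_course, delta_w
```

For example, with the left line at +1.7 m, the right line at -1.7 m, L = 0.3 m and α = 0.2 m, the virtual right line moves to -1.2 m and the target course shifts 0.25 m to the left.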
If the conditions for ALKC are not satisfied in the area in which the static 3D objects are placed on or near the lane dividing line LL or LR, the processor5may cancel the ALKC control. This helps prevent the own vehicle1from coming too close to a sidewall, a road edge, a road shoulder, or the like. During the ALKC control, the target travel course is corrected or changed on the basis of the detection of the static 3D objects, such as the traffic cones21, the poles22, and the construction signboards23, as described above. Additionally, in a case where static 3D objects, such as sidewalls, road edges, and road shoulders, that are located far from the own vehicle1are detectable in advance from map data during the ALKC control, the own vehicle1may be controlled to travel distant from the static 3D objects. For example, in a case where the data on the static 3D objects, such as the traffic cones21, the pole22, the construction signboard23, sidewalls, road edges, and road shoulders, are detectable in advance from the map data, the virtual approximate line (e.g., the virtual approximate line Lrc or Lrc+α) may be set by performing a predetermined correction of the lateral position of the approximate line of the lane dividing line LL or LR on the basis of the data on the static 3D objects without performing the exemplary control illustrated in the flowchart ofFIG.16in order to change the target travel course. The stereo image recognizer4and the processor5in the drive assist apparatus2of the own vehicle1may each include a processor including a memory such as a central processing unit (CPU), a read-only memory (ROM), or a random-access memory (RAM). Some or all of the circuits of the processor may be implemented by software. For example, various programs corresponding to various functions stored in the ROM may be read and implemented by the CPU. Alternatively, some or all of the functions of the processor may be implemented by logic circuitry or analog circuitry. Additionally, various programs may be implemented by electronic circuitry such as FPGA. At least one of the stereo image recognizer4and the processor5illustrated inFIG.1is implementable by circuitry including at least one semiconductor integrated circuit such as at least one processor (e.g., a central processing unit (CPU)), at least one application specific integrated circuit (ASIC), and/or at least one field programmable gate array (FPGA). At least one processor is configurable, by reading instructions from at least one machine readable non-transitory tangible medium, to perform all or a part of functions of the stereo image recognizer4and the processor5. Such a medium may take many forms, including, but not limited to, any type of magnetic medium such as a hard disk, any type of optical medium such as a CD and a DVD, any type of semiconductor memory (i.e., semiconductor circuit) such as a volatile memory and a non-volatile memory. The volatile memory may include a DRAM and a SRAM, and the nonvolatile memory may include a ROM and a NVRAM. The ASIC is an integrated circuit (IC) customized to perform, and the FPGA is an integrated circuit designed to be configured after manufacturing in order to perform, all or a part of the functions of the stereo image recognizer4and the processor5illustrated inFIG.1. The technology described above is not limited to the foregoing example embodiments, and various modifications may be made in the implementation stage without departing from the gist of the technology. 
Further, the foregoing example embodiments each include various stages of the technology, and various technologies may be extracted by appropriately combining the features of the technology disclosed herein. For example, in a case where the above-described concerns may be addressed and the above-described effects may be obtained even if some features are deleted from all the features disclosed herein, the remaining features may be extracted as a technology. | 34,666 |
11858511 | DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS FIG.1schematically shows a control system100for a motor vehicle (not shown), for outputting a controlled variable u, with the aid of which a directly controlled variable y of a motor vehicle is adjustable via suitable control operations, in order to adapt directly controlled variable y to a reference variable w of the control system. To implement these control operations, the control system is preferably connected to an electrical system of the motor vehicle, using, preferably, at least one bus, preferably, the CAN bus (not shown), so that by actively intervening in on-board systems, such as, in particular, a steering system, brake system, power train and warning systems, directly controlled variable y may be adapted to a reference variable w of the control system. The control system includes a controller110, which is configured to output a first output variable u1on the basis of directly controlled variable y of the motor vehicle, and on the basis of reference variable w of the control system. Controller110of control system100includes, for example, a conventional control algorithm, for example, a PID-type controller. Control system100further includes a predictive model120, which may be trained to output a second output variable u2that reflects a deviation of a driving behavior of a driver of the motor vehicle from first output variable u1of the controller. According to the specific embodiment shown, controlled variable u of control system100encompasses an addition of first output variable u1and second output variable u2. In order to adapt control system100to the driving behavior of an individual driver, then, with the aid of predictive model120, the difference of the driving behavior from current controller110is modeled, and control system100is adapted to the driving behavior of an individual driver, by adding second output variable u2of predictive model120, which reflects the deviation of the driving behavior of a driver of the motor vehicle from first output variable u1of controller110, to first output variable u1of controller110. Control system100is, for example, a driving assistance system, which may be used in a motor vehicle, in order to assist and/or relieve the stress on the driver in certain driving situations, for example, for regulating the distance from a reference object, in particular, a ranging assistance system or a parking assistance system or an assistance system for integrating a vehicle driving at least partially autonomously into a flow of traffic. To control spacing, a distance of the motor vehicle from the reference object is normally adapted to a desired setpoint value, that is, to the reference variable of the control system, using suitable control operations, such as acceleration and/or braking and/or steering actions. By adjusting the controlled variable to the driving behavior of an individual driver, the control operations may be adjusted to the driving behavior, as well. This advantageously increases the acceptance of such systems. In one further preferred specific embodiment of the present invention, the directly controlled variable of the motor vehicle reflects a distance of the motor vehicle from a reference object in a surrounding area of the motor vehicle. The reference object in the surrounding area of the motor vehicle is, for example, a third motor vehicle, in particular, one driving ahead, a pedestrian, an animal or another road user. 
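A minimal sketch of the combined output u = u1 + u2 is given below, assuming a PID-type controller for u1 and an already trained predictive model exposing a predict() method for u2; the gains and the model interface are illustrative assumptions, not part of this description.

```python
class PersonalizedDistanceController:
    """Combined control output u = u1 + u2: a conventional PID-type controller
    produces u1 from the reference variable w and the directly controlled
    variable y, and a trained predictive model adds the learned deviation u2
    of the individual driver from that controller output."""
    def __init__(self, predictive_model, kp=0.8, ki=0.05, kd=0.2, dt=0.02):
        self.model = predictive_model
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def output(self, w, y, operating_data=None):
        error = w - y
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u1 = self.kp * error + self.ki * self.integral + self.kd * derivative
        u2 = self.model.predict(w, y, operating_data)   # learned driver deviation
        return u1 + u2
```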
Alternatively, the reference object may also be a stationary object in the surrounding area, for example, a guardrail, a tree, a pole, a building, or the like. In the same way, a road marking, such as a lane boundary, broken white line, or the like, may also be understood as a reference object, as well. In order to measure the distance of the motor vehicle from the reference object, the motor vehicle preferably includes surround sensors (not shown), such as radar sensors, lidar sensors, laser scanners, video sensors and ultrasonic sensors. If the motor vehicle is equipped with a navigation system, then data of this system may also be accessed. In one further preferred specific embodiment of the present invention, controller110includes a conventional type of controller, in particular, a PID-type controller, and/or predictive model120includes a Gaussian process model or a neural network. In a further preferred specific embodiment of the present invention, predictive model120may be trained to output second output variable u2as a function of at least one input variable; an input variable including one of the following variables: reference variable w of the control system, directly controlled variable y of the motor vehicle, a variable that represents operating data of the motor vehicle and/or surrounding-area data of the motor vehicle. Reference variable w of control system100is the desired setpoint value, to which directly controlled variable y is intended to be adapted. Operating data of the motor vehicle include, for example, speed, acceleration, steering angle, inclination. Surrounding-area data of the motor vehicle include, for example, information about the road condition, weather, grade of the road, course of the road, etc. By utilizing the above-mentioned variables as input variables for predictive model120, second output variable u2may be outputted advantageously as a function of these variables. These variables are advantageously measured by suitable sensors, such as surround sensors, and/or provided to the control system by suitable devices for transmitting data. FIG.2schematically shows steps of a first training phase of a computer-implemented method200for training a predictive model120for a control system100for a motor vehicle, according to the specific embodiments of the present invention; the first training phase including the following steps: in a deactivated state of control system100, ascertaining220a deviation of a driving behavior of a driver of the motor vehicle from first output variable u1of controller110of control system100; and training230predictive model120, using the ascertained deviation of the driving behavior. A deactivated state of control system100is understood to mean that control system100is not used for controlling a driving assistance function, but that the driver of the motor vehicle controls this. In further preferred specific embodiments of the present invention, the first training phase of method200further includes the following steps: ascertaining210athe driving behavior of the driver as a function of directly controlled variable y of the motor vehicle; and computing210bfirst output variable u1of controller110. In light of computed, first output variable u1of controller110and the ascertained driving behavior with a deactivated control system100, the deviation of the driving behavior from first output variable u1of the controller may be ascertained. 
Predictive model120is advantageously trained, using the ascertained deviation of the driving behavior as a function of directly controlled variable y of the motor vehicle. In further preferred specific embodiments of the present invention, the ascertaining210aof the driving behavior includes the ascertaining of at least one variable, which represents an accelerator pedal action and/or a braking action and/or a steering action. In further preferred specific embodiments of the present invention, the training of predictive model120takes place as a function of at least one further variable, which represents operating data of the motor vehicle and/or surrounding-area data of the motor vehicle. Operating data of the motor vehicle include, for example, speed, acceleration, steering angle, inclination. Surrounding-area data of the motor vehicle include, for example, information about the road condition, weather, grade of the road, course of the road, etc. In one further preferred specific embodiment of the present invention, a second training phase of the method includes: optimizing the predictive model as a function of at least one further variable, which is associated with a reference object in a surrounding area of the motor vehicle. The reference object is, for example, a third vehicle, in particular, one driving ahead. By optimizing predictive model120with regard to the reference object, predictive model120may be optimized advantageously with regard to a future position of the reference object. In one further preferred specific embodiment (FIG.3) of the present invention, the optimizing (240) of predictive model120further includes: ascertaining (240a) a state of the motor vehicle at a time t, including at least one variable, which is associated with the motor vehicle; ascertaining (240b) a state of the reference object at time t, including at least one variable, which is associated with the reference object; and ascertaining (240c) a distribution over future states and identifying (240d) at least one model parameter, which minimizes the expected value of an error in the distribution over the future states. The model parameter characterizes an association between input variables and output variables of predictive model120. In this manner, the formation of a prediction error that accumulates in the long term may be advantageously prevented. In particular, an error that accumulates long-term may be formed, if predictive model120is not able to reflect the deviation of the driving behavior accurately. FIG.5shows a schematic overall view of the second training phase for optimizing predictive model120as a function of at least one further variable, which is associated with a reference object in a surrounding area of the motor vehicle. The variable, which is associated with a reference object in a surrounding area of the motor vehicle, is given by a further, second predictive model130, which is suitable for predicting a state of the reference object. A further, third predictive model140combines controller110and predictive model120and, thus, is suitable for predicting the state of the motor vehicle. $x_t^{\mathrm{own}}$ represents the state of the motor vehicle at time t. $x_t^{\mathrm{own}}$ advantageously includes all of the variables, which are made available to predictive model120and controller110. $x_t^{\mathrm{lead}}$ represents the state, in particular, information about the position and/or speed, of the reference object, for example, a third vehicle driving ahead, at time t.
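The first training phase can be sketched as follows, assuming logged samples of the reference variable w, the directly controlled variable y and the driver's actuation recorded while the control system is deactivated, a controller exposing a compute(w, y) method, and a Gaussian process regressor as the predictive model; all of these interfaces are assumptions made only for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def train_predictive_model(log, controller):
    """First training phase: with the control system deactivated, compute for
    every logged sample the deviation between the driver's actual actuation and
    the first output variable u1 the controller would have produced, then fit a
    Gaussian process to that deviation as a function of the controlled variable
    and further operating / surrounding-area data."""
    features, deviations = [], []
    for sample in log:   # each sample: dict with keys "w", "y", "driver_u", "extra"
        u1 = controller.compute(sample["w"], sample["y"])
        deviations.append(sample["driver_u"] - u1)
        # "extra" must have the same length in every sample (e.g. speed, yaw rate)
        features.append([sample["w"], sample["y"], *sample.get("extra", [])])
    model = GaussianProcessRegressor()
    model.fit(np.asarray(features), np.asarray(deviations))
    return model
```

In practice the fitted regressor would be wrapped so that it exposes whatever prediction interface the runtime controller expects.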
The distance from this reference object at time t is also supplied to predictive model120and controller110. If at least one of the predictive models120,130,140or controller110is a stochastic model, then a distribution over future states may be derived from it; the distribution being given by $p(x_{t+1}^{\mathrm{own}}, x_{t+1}^{\mathrm{lead}}, x_{t+2}^{\mathrm{own}}, x_{t+2}^{\mathrm{lead}}, \ldots \mid x_t^{\mathrm{own}}, x_t^{\mathrm{lead}}, \theta)$. An error in the future states at time $t+\delta$ is given by $L(x_{t+\delta}^{\mathrm{own}}, x_{t+\delta}^{\mathrm{lead}})$. An error measures, for example, a difference from the reference variable and/or an exceedance and/or undershooting of maximum or minimum allowable differences. A model parameter, which minimizes the expected value of the error, solves the following optimization problem $\theta = \arg\min_{\theta} \mathbb{E}\!\left[\sum_{\delta=1}^{T_{\max}} L(x_{t+\delta}^{\mathrm{own}}, x_{t+\delta}^{\mathrm{lead}}) \,\middle|\, p(x_{t+1}^{\mathrm{own}}, x_{t+1}^{\mathrm{lead}}, x_{t+2}^{\mathrm{own}}, x_{t+2}^{\mathrm{lead}}, \ldots \mid x_t^{\mathrm{own}}, x_t^{\mathrm{lead}}, \theta)\right]$, where $T_{\max}$ describes the maximum prediction horizon. The identified model parameter minimizes the error accumulated up to time step $T_{\max}$. Predictive model120is advantageously optimized on this basis. In one further preferred specific embodiment of the present invention, a third training phase of method200includes: in the activated state of the control system, testing250the predictive model in comparison with an action of the driver. A schematic depiction of steps of the third training phase of computer-implemented method200is shown inFIG.4. In one further preferred specific embodiment of the present invention, the first and/or the second training phase are repeated, and/or further steps, in particular, deactivation260aof control system100and/or outputting260bof a warning, are executed as a function of the testing250of predictive model120. Further preferred specific embodiments of the present invention relate to a computer program, which is configured to execute the steps of the method200according to the specific embodiments. Further preferred specific embodiments of the present invention relate to a machine-readable storage medium, in which the computer program according to the specific embodiments is stored. Further preferred specific embodiments of the present invention relate to a control unit300, which is configured to execute the steps of a method200according to the specific embodiments of the present invention. Control unit300includes a computing device310and at least one storage device320, in which control system100is stored. In addition, control unit300includes an input330for receiving information about variables of the control system, such as a reference variable and directly controlled variable, and additional variables, which represent the operating data of the motor vehicle and/or surrounding-area data of the motor vehicle. These variables are advantageously measured by suitable sensors, such as surround sensors, and/or provided to the control system by suitable devices for transmitting data. Furthermore, control unit300includes an output340for controlling actuators of on-board systems of the motor vehicle, in particular, a steering system, brake system, the power train, and warning systems.
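One way to evaluate the optimization objective above is a Monte Carlo rollout, sketched below. The rollout functions standing in for the predictive models of the own vehicle and of the reference object, the horizon length and the sample count are assumptions; any standard optimizer (for example scipy.optimize.minimize) could then search for the model parameter θ that minimizes the returned value.

```python
import numpy as np

def expected_accumulated_error(theta, x_own, x_lead, rollout_own, rollout_lead,
                               error_fn, horizon=50, samples=100, rng=None):
    """Monte Carlo estimate of the objective minimized in the second training
    phase: the expected value, under the distribution over future states, of
    the error L(x_own, x_lead) accumulated over the prediction horizon T_max."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(samples):
        own, lead = x_own, x_lead
        for _ in range(horizon):
            own = rollout_own(own, lead, theta, rng)    # controller + predictive model
            lead = rollout_lead(lead, rng)              # model of the reference object
            total += error_fn(own, lead)
    return total / samples
```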
Further preferred specific embodiments of the present invention relate to use of a control system100according to the specific embodiments, and/or of a predictive model120that is trained by a method200according to the specific embodiments, and/or of a method according to the specific embodiments, and/or of a computer program according to the specific embodiments, and/or of a machine-readable storage medium according to the specific embodiments, and/or of a control unit300according to the specific embodiments, for adapting a control system100for a motor vehicle to an individual driving behavior of a driver. Further preferred specific embodiments of the present invention relate to use of a control system100according to the specific embodiments of the present invention, and/or of a predictive model120that is trained by a method200according to the specific embodiments of the present invention, and/or of a method200according to the specific embodiments of the present invention, and/or of a computer program according to the specific embodiments of the present invention, and/or of a machine-readable storage medium according to the specific embodiments of the present invention, and/or of a control unit300according to the specific embodiments of the present invention, in a driving assistance system of a motor vehicle, in particular, for adaptive cruise control (ACC). | 14,865 |
11858512 | DETAILED DESCRIPTION A description will hereinafter be made on a controller according to the present invention with reference to the drawings. Hereinafter, a description will be made on the controller used for a two-wheeled motorcycle. However, the controller according to the present invention may be used for a straddle-type vehicle other than the two-wheeled motorcycle (for example, a three-wheeled motorcycle, an all-terrain vehicle, a bicycle, or the like). The straddle-type vehicle means a vehicle that a driver straddles. In addition, a description will hereinafter be made on a case where an engine is mounted as a drive source capable of outputting drive power for driving a wheel of the motorcycle. However, as the drive source of the motorcycle, a drive source other than the engine (for example, a motor) may be mounted, or multiple drive sources may be mounted. Furthermore, a description will hereinafter be made on a case where the motorcycle is a rear-wheel drive vehicle. However, the motorcycle may be a front-wheel-drive vehicle, and a reference braking force may be generated on a front wheel. A configuration, operation, and the like, which will be described below, merely constitute one example. The controller and the control method according to the present invention are not limited to a case with such a configuration, such operation, and the like. The same or similar description will appropriately be simplified or will not be made below. In the drawings, the same or similar members or portions will not be denoted by a reference sign or will be denoted by the same reference sign. In addition, a detailed structure will appropriately be illustrated in a simplified manner or will not be illustrated. <Configuration of Motorcycle> A description will be made on a configuration of a motorcycle100on which a controller60according to an embodiment of the present invention is mounted with reference toFIG.1toFIG.3. FIG.1is a schematic view of the configuration of the motorcycle100on which the controller60is mounted.FIG.2is a schematic diagram of a configuration of a brake system10.FIG.3is a block diagram of an exemplary functional configuration of the controller60. As illustrated inFIG.1, the motorcycle100includes: a trunk1; a handlebar2that is held by the trunk1in a freely turnable manner; a front wheel3that is held by the trunk1in the freely turnable manner with the handlebar2; a rear wheel4that is held by the trunk1in a freely rotatable manner; an engine5; a transmission mechanism6; and the brake system10. In this embodiment, the controller (ECU)60is provided in a hydraulic pressure control unit50of the brake system10, which will be described later. As illustrated inFIG.1andFIG.2, the motorcycle100further includes: an inter-vehicular distance sensor41, an input device42, a front-wheel rotational frequency sensor43, a rear-wheel rotational frequency sensor44, a torque sensor45, a crank angle sensor46, a gear position sensor47, a master-cylinder pressure sensor48, and a wheel-cylinder pressure sensor49. The engine5corresponds to an example of a drive source for the motorcycle100, and can output drive power for driving a wheel (more specifically, the rear wheel4as a drive wheel). For example, the engine5is provided with: one or multiple cylinders in each of which a combustion chamber is formed; a fuel injector that injects fuel into the combustion chamber; and an ignition plug. 
When the fuel is injected from the fuel injector, air-fuel mixture containing air and the fuel is produced in the combustion chamber, and the air-fuel mixture is then ignited by the ignition plug and burned. Consequently, a piston provided in the cylinder reciprocates to cause a crankshaft to rotate. In addition, a throttle valve is provided in an intake pipe of the engine5, and an intake air amount for the combustion chamber varies according to a throttle opening amount as an opening degree of the throttle valve. The crankshaft of the engine5is connected to an input shaft of the transmission mechanism6, and an output shaft of the transmission mechanism6is connected to the rear wheel4. Thus, the power output from the engine5is transmitted to the transmission mechanism6, is changed by the transmission mechanism6, and is then transmitted to the rear wheel4. In detail, the crankshaft of the engine5and the input shaft of the transmission mechanism6are connected via a clutch that connects/disconnects the power transmission. When the clutch is operated, a gear stage of the transmission mechanism6is switched according to a shift lever operation by the driver in a disengaged state of the clutch. As illustrated inFIG.1andFIG.2, the brake system10includes: a first brake operation section11; a front-wheel brake mechanism12that brakes the front wheel3in an interlocking manner with at least the first brake operation section11; a second brake operation section13; and a rear-wheel brake mechanism14that brakes the rear wheel4in an interlocking manner with at least the second brake operation section13. The brake system10also includes the hydraulic pressure control unit50, and a part of the front-wheel brake mechanism12and a part of the rear-wheel brake mechanism14are included in the hydraulic pressure control unit50. The hydraulic pressure control unit50is a unit that has a function of controlling a braking force to be generated on the front wheel3by the front-wheel brake mechanism12and a braking force to be generated on the rear wheel4by the rear-wheel brake mechanism14. The first brake operation section11is provided on the handlebar2and is operated by the driver's hand. The first brake operation section11is a brake lever, for example. The second brake operation section13is provided in a lower portion of the trunk1and is operated by the driver's foot. The second brake operation section13is a brake pedal, for example. Each of the front-wheel brake mechanism12and the rear-wheel brake mechanism14includes: a master cylinder21in which a piston (not illustrated) is installed; a reservoir22that is attached to the master cylinder21; a brake caliper23that is held by the trunk1and has a brake pad (not illustrated); a wheel cylinder24that is provided in the brake caliper23; a primary channel25through which a brake fluid in the master cylinder21flows into the wheel cylinder24; a secondary channel26through which the brake fluid in the wheel cylinder24is released; and a supply channel27through which the brake fluid in the master cylinder21is supplied to the secondary channel26. An inlet valve (EV)31is provided in the primary channel25. The secondary channel26bypasses a portion of the primary channel25between the wheel cylinder24side and the master cylinder21side from the inlet valve31. The secondary channel26is sequentially provided with an outlet valve (AV)32, an accumulator33, and a pump34from an upstream side. 
Between an end of the primary channel25on the master cylinder21side and a portion of the primary channel25to which a downstream end of the secondary channel26is connected, a first valve (USV)35is provided. The supply channel27communicates between the master cylinder21and a portion of the secondary channel26on a suction side of the pump34. A second valve (HSV)36is provided in the supply channel27. The inlet valve31is an electromagnetic valve that is opened in an unenergized state and closed in an energized state, for example. The outlet valve32is an electromagnetic valve that is closed in an unenergized state and opened in an energized state, for example. The first valve35is an electromagnetic valve that is opened in an unenergized state and is closed in an energized state, for example. The second valve36is an electromagnetic valve that is closed in an unenergized state and is opened in an energized state, for example. The hydraulic pressure control unit50includes: components such as the inlet valves31, the outlet valves32, the accumulators33, the pumps34, the first valves35, and the second valves36used to control a brake hydraulic pressure; a base body51in which those components are provided and channels constituting the primary channels25, the secondary channels26, and the supply channels27are formed; and the controller60. The base body51may be formed of one member or may be formed of multiple members. In the case where the base body51is formed of the multiple members, the components may separately be provided in the different members. The controller60controls operation of each of the components in the hydraulic pressure control unit50. As a result, the braking force to be generated on the front wheel3by the front-wheel brake mechanism12and the braking force to be generated on the rear wheel4by the rear-wheel brake mechanism14are controlled. For example, in a normal time (that is, when none of adaptive cruise control and anti-lock brake control, which will be described later, is executed), the controller60opens the inlet valves31, closes the outlet valves32, opens the first valves35, and closes the second valves36. When the first brake operation section11is operated in such a state, in the front-wheel brake mechanism12, the piston (not illustrated) in the master cylinder21is pressed to increase a hydraulic pressure of the brake fluid in the wheel cylinder24, the brake pad (not illustrated) of the brake caliper23is then pressed against a rotor3aof the front wheel3, and the braking force is thereby generated on the front wheel3. Meanwhile, when the second brake operation section13is operated, in the rear-wheel brake mechanism14, the piston (not illustrated) in the master cylinder21is pressed to increase the hydraulic pressure of the brake fluid in the wheel cylinder24, the brake pad (not illustrated) of the brake caliper23is then pressed against a rotor4aof the rear wheel4, and the braking force is thereby generated on the rear wheel4. The inter-vehicular distance sensor41detects a distance from the motorcycle100to a preceding vehicle. The inter-vehicular distance sensor41may detect another physical quantity that can substantially be converted to the distance from the motorcycle100to the preceding vehicle. 
Here, the preceding vehicle means a vehicle ahead of the motorcycle100and may include, in addition to the nearest vehicle from the motorcycle100on the same lane as a travel lane of the motorcycle100, a vehicle ahead of several vehicles in front of the motorcycle100, a vehicle traveling on an adjacent lane to the travel lane of the motorcycle100, and the like. For example, in the case where the multiple vehicles exist ahead of the motorcycle100, based on a track, which is estimated as a travel track of the motorcycle100, and behavior of each of the multiple vehicles, the inter-vehicular distance sensor41selects the preceding vehicle as a detection target of the distance from the motorcycle100. In this case, the adaptive cruise control, which will be described later, is executed by using a detection result of the distance from the motorcycle100to the thus-selected preceding vehicle. As the inter-vehicular distance sensor41, for example, a camera that captures an image in front of the motorcycle100and a radar that can detect a distance from the motorcycle100to a target in front are used. In such a case, for example, the preceding vehicle is recognized by using the image captured by the camera. Then, by using the recognition result of the preceding vehicle and a detection result by the radar, the distance from the motorcycle100to the preceding vehicle can be detected. The inter-vehicular distance sensor41is provided in a front portion of the trunk1, for example. Note that the configuration of the inter-vehicular distance sensor41is not limited to the above example, and a stereo camera may be used as the inter-vehicular distance sensor41, for example. The input device42accepts a travel mode selection operation by the driver, and outputs information indicative of the travel mode selected by the driver. As will be described later, in the motorcycle100, the controller60can execute the adaptive cruise control. The adaptive cruise control is control in which the motorcycle100is made to travel according to the distance from the motorcycle100to the preceding vehicle, motion of the motorcycle100, and the driver's instruction. By using the input device42, the driver can select, as one of the travel modes, a travel mode in which the adaptive cruise control is executed. For example, as the input device42, a lever, a button, a touch screen, or the like is used. The input device42is provided on the handlebar2, for example. The front-wheel rotational frequency sensor43detects a rotational frequency of the front wheel3and outputs a detection result. The front-wheel rotational frequency sensor43may detect another physical quantity that can substantially be converted to the rotational frequency of the front wheel3. The front-wheel rotational frequency sensor43is provided on the front wheel3. The rear-wheel rotational frequency sensor44detects a rotational frequency of the rear wheel4and outputs a detection result. The rear-wheel rotational frequency sensor44may detect another physical quantity that can substantially be converted to the rotational frequency of the rear wheel4. The rear-wheel rotational frequency sensor44is provided on the rear wheel4. The torque sensor45detects torque acting on the rear wheel4and outputs a detection result. The torque sensor45may detect another physical quantity that can substantially be converted to the torque acting on the rear wheel4. The torque sensor45is provided on the rear wheel4. The crank angle sensor46detects a crank angle of the engine5and outputs a detection result. 
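The selection of the preceding vehicle from several vehicles ahead could, in its simplest form, look like the sketch below. The straight-track estimate of the own-vehicle travel track and the lane half-width are assumptions made only for illustration; the sensor's actual selection logic also takes the behavior of each candidate into account, as noted above.

```python
import math

def select_preceding_vehicle(candidates, own_track_heading_rad, lane_half_width_m=1.75):
    """Pick the detection target of the inter-vehicular distance: among the
    vehicles ahead, keep those whose lateral offset from the track estimated as
    the own-vehicle travel track stays within the lane half-width, and return
    the nearest of them.  Each candidate is (longitudinal_m, lateral_m)."""
    selected, best_range = None, math.inf
    for lon, lat in candidates:
        lateral_offset = lat - lon * math.tan(own_track_heading_rad)  # straight-track estimate
        if abs(lateral_offset) <= lane_half_width_m and 0.0 < lon < best_range:
            best_range, selected = lon, (lon, lat)
    return selected
```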
The crank angle sensor46may detect another physical quantity that can substantially be converted to the crank angle of the engine5. The crank angle sensor46is provided in the engine5. The gear position sensor47detects at which gear stage the gear stage of the transmission mechanism6is set, and outputs a detection result. The gear position sensor47is provided in the transmission mechanism6. The master-cylinder pressure sensor48detects the hydraulic pressure of the brake fluid in the master cylinder21, and outputs a detection result. The master-cylinder pressure sensor48may detect another physical quantity that can substantially be converted to the hydraulic pressure of the brake fluid in the master cylinder21. The master-cylinder pressure sensor48is provided in each of the front-wheel brake mechanism12and the rear-wheel brake mechanism14. The wheel-cylinder pressure sensor49detects the hydraulic pressure of the brake fluid in the wheel cylinder24, and outputs a detection result. The wheel-cylinder pressure sensor49may detect another physical quantity that can substantially be converted to the hydraulic pressure of the brake fluid in the wheel cylinder24. The wheel-cylinder pressure sensor49is provided in each of the front-wheel brake mechanism12and the rear-wheel brake mechanism14. The controller60controls travel of the motorcycle100. For example, the controller60is partially or entirely constructed of a microcomputer, a microprocessor unit, or the like. Alternatively, the controller60may partially or entirely be constructed of a member in which firmware or the like can be updated, or may partially or entirely be a program module or the like that is executed by a command from a CPU or the like, for example. The controller60may be provided as one unit or may be divided into multiple units, for example. As illustrated inFIG.3, the controller60includes an acquisition section61and a control section62, for example. The acquisition section61acquires information that is output from each of the devices mounted on the motorcycle100, and outputs the acquired information to the control section62. For example, the acquisition section61acquires the information output from the inter-vehicular distance sensor41, the input device42, the front-wheel rotational frequency sensor43, the rear-wheel rotational frequency sensor44, the torque sensor45, the crank angle sensor46, the gear position sensor47, the master-cylinder pressure sensor48, and the wheel-cylinder pressure sensor49. The control section62controls operation of each of the devices mounted on the motorcycle100, so as to control the drive power and the braking force exerted on the motorcycle100. Here, by controlling the operation of each of the devices mounted on the motorcycle100, the control section62can execute the adaptive cruise control in which the motorcycle100is made to travel according to the distance from the motorcycle100to the preceding vehicle, the motion of the motorcycle100, and the driver's instruction. More specifically, in the case where the driver selects the travel mode in which the adaptive cruise control is executed, the control section62executes the adaptive cruise control. Note that, in the case where the driver performs an accelerator operation or a brake operation during the adaptive cruise control, the control section62cancels the adaptive cruise control. In the adaptive cruise control, the distance from the motorcycle100to the preceding vehicle is controlled to approximate a reference distance. 
As the distance from the motorcycle100to the preceding vehicle, the reference distance is set to a value with which the driver's safety can be secured. In the case where no preceding vehicle is recognized, a speed of the motorcycle100is controlled at a set speed, which is set in advance. In addition, in the adaptive cruise control, each of the acceleration and the deceleration of the motorcycle100is controlled to be equal to or lower than an upper limit value of such extent that does not worsen the driver's comfort. More specifically, during the adaptive cruise control, the control section62calculates a target value of the acceleration (hereinafter referred to as target acceleration) or a target value of the deceleration (hereinafter referred to as target deceleration) on the basis of a comparison result between the distance from the motorcycle100to the preceding vehicle and the reference distance and on the basis of a relative speed between the motorcycle100and the preceding vehicle. Then, based on a calculation result, the control section62controls the drive power and the braking force exerted on the motorcycle100. For example, in the case where the distance from the motorcycle100to the preceding vehicle is longer than the reference distance, the control section62calculates the target acceleration that corresponds to a difference between the distance from the motorcycle100to the preceding vehicle and the reference distance. On the other hand, in the case where the distance from the motorcycle100to the preceding vehicle is shorter than the reference distance, the control section62calculates the target deceleration that corresponds to the difference between the distance from the motorcycle100to the preceding vehicle and the reference distance. The control section62includes a drive control section62aand a brake control section62b, for example. The drive control section62acontrols the drive power that is transmitted to the rear wheel4as the drive wheel during the adaptive cruise control. More specifically, during the adaptive cruise control, the drive control section62aoutputs a command to an engine control unit (not illustrated), which outputs a signal to control operation of each of the components of the engine5(the throttle valve, the fuel injector, the ignition plug, and the like). In this way, the drive control section62acontrols operation of the engine5. As a result, during the adaptive cruise control, the drive power, which is output from the engine5and transmitted to the rear wheel4, is controlled. In the normal time, the operation of the engine5is controlled by the engine control unit such that the drive power is transmitted to the rear wheel4in response to the driver's accelerator operation. Meanwhile, during the adaptive cruise control, the drive control section62acontrols the operation of the engine5such that the drive power is transmitted to the rear wheel4without relying on the driver's accelerator operation. More specifically, during the adaptive cruise control, the drive control section62acontrols the operation of the engine5such that the acceleration of the motorcycle100becomes the target acceleration, which is calculated on the basis of the distance from the motorcycle100to the preceding vehicle and the relative speed between the motorcycle100and the preceding vehicle. In this way, the drive control section62acontrols the drive power transmitted to the rear wheel4. 
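The relationship between the inter-vehicular distance, the reference distance and the target acceleration or deceleration can be illustrated with a minimal sketch. The proportional gains, the time-headway based reference distance and the comfort limit below are illustrative assumptions and are not taken from the embodiment; the control section 62 may derive the target values by any suitable means.

```python
# Minimal sketch of a target acceleration / deceleration calculation of the
# kind described above. Gains, the time-headway based reference distance and
# the comfort limit are illustrative assumptions only.

def target_acceleration(distance_to_preceding_m, relative_speed_mps,
                        own_speed_mps, time_headway_s=2.0, standstill_gap_m=5.0,
                        k_dist=0.15, k_rel=0.6, comfort_limit_mps2=2.0):
    """Return a positive target acceleration or a negative target deceleration.

    relative_speed_mps is the preceding-vehicle speed minus the own speed, so a
    positive value means the gap is opening.
    """
    # Reference distance chosen such that the driver's safety can be secured.
    reference_distance_m = standstill_gap_m + time_headway_s * own_speed_mps

    # Difference between the measured inter-vehicular distance and the reference.
    distance_error_m = distance_to_preceding_m - reference_distance_m

    # A positive distance error (gap too large) or a positive relative speed
    # yields a target acceleration; otherwise a target deceleration results.
    target = k_dist * distance_error_m + k_rel * relative_speed_mps

    # Acceleration and deceleration are capped so as not to worsen comfort.
    return max(-comfort_limit_mps2, min(comfort_limit_mps2, target))


if __name__ == "__main__":
    # Gap shorter than the reference and closing: a deceleration is requested.
    print(target_acceleration(distance_to_preceding_m=25.0,
                              relative_speed_mps=-2.0, own_speed_mps=20.0))
```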
The brake control section62bcontrols the operation of each of the components of the hydraulic pressure control unit50in the brake system10, so as to control the braking force generated on each of the wheels of the motorcycle100. In the normal time, as described above, the brake control section62bcontrols the operation of each of the components of the hydraulic pressure control unit50such that the braking force is generated on each of the wheels in response to the driver's brake operation. Meanwhile, during the adaptive cruise control, the brake control section62bcontrols the operation of each of the components such that the braking force is generated on each of the wheels without relying on the driver's brake operation. More specifically, during the adaptive cruise control, the brake control section62bcontrols the operation of each of the components of the hydraulic pressure control unit50such that the deceleration of the motorcycle100becomes the target deceleration, which is calculated on the basis of the distance from the motorcycle100to the preceding vehicle and the relative speed between the motorcycle100and the preceding vehicle. In this way, the brake control section62bcontrols the braking force generated on each of the wheels. For example, during the adaptive cruise control, the brake control section62bbrings the motorcycle100into a state where the inlet valves31are opened, the outlet valves32are closed, the first valves35are closed, and the second valves36are opened, and drives the pumps34in such a state, so as to increase the hydraulic pressure of the brake fluid in each of the wheel cylinders24and generate the braking force on each of the wheels. In addition, the brake control section62bregulates the hydraulic pressure of the brake fluid in each of the wheel cylinders24by controlling an opening amount of the first valve35, for example. In this way, the brake control section62bcan control the braking force generated on each of the wheels. Here, during the adaptive cruise control, the brake control section62bseparately controls operation of each of the front-wheel brake mechanism12and the rear-wheel brake mechanism14, so as to separately control the hydraulic pressure of the brake fluid in the wheel cylinder24of each of the front-wheel brake mechanism12and the rear-wheel brake mechanism14. In this way, the brake control section62bcan control braking force distribution between the front and rear wheels (that is, distribution of the braking force generated on the front wheel3and the braking force generated on the rear wheel4). More specifically, the brake control section62bcontrols the braking force distribution between the front and rear wheels such that a total of target values of the braking forces generated on the wheels becomes a requested braking force (that is, the braking force that is requested at the time of braking during the adaptive cruise control) corresponding to the target deceleration. The requested braking force is specifically the required braking force to bring the deceleration of the motorcycle100to the target deceleration, which is calculated on the basis of the distance from the motorcycle100to the preceding vehicle and the relative speed between the motorcycle100and the preceding vehicle. Note that, in the case where at least one of the wheels is locked or possibly locked, the brake control section62bmay execute the anti-lock brake control. 
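The valve and pump states used by the brake control section 62b for active pressure build-up and pressure regulation during the adaptive cruise control can be summarised in a short sketch. The dictionary layout and the helper function are illustrative assumptions and not the actual interface of the brake control section 62b; only the valve states themselves restate the description above.

```python
# Valve / pump states of the hydraulic pressure control unit 50 for active
# braking during the adaptive cruise control, as described above. The data
# structure and helper below are illustrative assumptions.

ACC_PRESSURE_BUILD = {
    "inlet_valve_31": "open",
    "outlet_valve_32": "closed",
    "first_valve_35": "closed",
    "second_valve_36": "open",
    "pump_34": "driven",          # the pump raises the wheel-cylinder pressure
}

def regulate_pressure(first_valve_opening_amount: float) -> dict:
    """Sketch of pressure regulation: the braking force is adjusted by
    controlling the opening amount of the first valve 35 (0.0 = closed,
    1.0 = fully open) while the pump keeps supplying brake fluid."""
    state = dict(ACC_PRESSURE_BUILD)
    state["first_valve_35"] = f"opening {first_valve_opening_amount:.0%}"
    return state

if __name__ == "__main__":
    print(regulate_pressure(0.25))
```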
The anti-lock brake control is control for regulating the braking force of the wheel, which is locked or possibly locked, to such a magnitude that locking of the wheel can be avoided. For example, during the anti-lock brake control, the brake control section62bbrings the motorcycle100into a state where the inlet valves31are closed, the outlet valves32are opened, the first valves35are opened, and the second valves36are closed, and drives the pumps34in such a state, so as to reduce the hydraulic pressure of the brake fluid in each of the wheel cylinders24and reduce the braking force generated on each of the wheels. In addition, the brake control section62bcloses both of the inlet valves31and the outlet valves32from the above state, for example. In this way, the brake control section62bcan keep the hydraulic pressure of the brake fluid in each of the wheel cylinders24and thus can keep the braking force generated on the each of wheels. Furthermore, the brake control section62bopens the inlet valves31and closes the outlet valves32from the above state, for example. In this way, the brake control section62bcan increase the hydraulic pressure of the brake fluid in each of the wheel cylinders24and thus can increase the braking force generated on each of the wheels. As described above, in the controller60, the control section62can execute the adaptive cruise control. Here, during the adaptive cruise control, when a state where the braking force is generated on each of the wheels of the motorcycle100(hereinafter also referred to as a decelerated state) is switched to a state where the rear wheel4is driven using the drive power output from the engine5of the motorcycle100(hereinafter also referred to as an accelerated state), the control section62controls the braking force generated on each of the wheels such that the reference braking force is generated on the rear wheel4at a time point at which the rear wheel4starts being driven due to the transmission of the drive power output from the engine5to the rear wheel4. In this way, the driver's comfort can be secured during the adaptive cruise control for the motorcycle100. A detailed description will be made below on such processing that is related to switching from the decelerated state to the accelerated state during the adaptive cruise control and is executed by the controller60. Note that the above decelerated state may include a state where the braking force is generated on each of the wheels due to action of engine brake, in addition to the state where the braking force is generated on each of the wheels by controlling the operation of each of the components of the hydraulic pressure control unit50in the brake system10. The description has been made above on the example in which the drive control section62acontrols the operation of the engine5via the engine control unit. However, the drive control section62amay output a signal for controlling the operation of each of the components of the engine5, so as to directly control the operation of each of the components of the engine5. In such a case, the drive control section62acontrols the operation of the engine5in the normal time in a similar manner to the operation of the engine5during the adaptive cruise control. <Operation of Controller> A description will be made on operation of the controller60according to the embodiment of the present invention with reference toFIG.4. FIG.4is a flowchart of an exemplary processing procedure that is executed by the controller60. 
More specifically, a control flow illustrated inFIG.4corresponds to a processing procedure that is related to switching from the decelerated state to the accelerated state during the adaptive cruise control and is executed by the control section62of the controller60, and is repeatedly executed during the adaptive cruise control. In addition, step S510and step S590inFIG.4respectively correspond to initiation and termination of the control flow illustrated inFIG.4. When the control flow illustrated inFIG.4is initiated, in step S511, the control section62determines whether the wheels of the motorcycle100are braked. If it is determined that the wheels of the motorcycle100are braked (step S511/YES), the processing proceeds to step S513. On the other hand, if it is determined that the wheels of the motorcycle100are not braked (step S511/NO), the determination processing in step S511is repeated. If it is determined YES in step S511, in step S513, the control section62determines whether the engine5starts outputting the drive power. If it is determined that the engine5starts outputting the drive power (step S513/YES), the processing proceeds to step S515. On the other hand, if it is determined that the engine5does not start outputting the drive power (step S513/NO), the processing returns to the determination processing in step S511. For example, the control section62determines that the engine5starts outputting the drive power in the case where a request to start accelerating the motorcycle100is generated and where a command that causes the engine5to start outputting the drive power is output to the engine control unit. Note that the control section62may determine whether the engine5starts outputting the drive power by a different method from the above. For example, the control section62may determine whether the engine5starts outputting the drive power on the basis of a temporal change in a parameter such as an engine speed, a fuel injection amount, or the like. If it is determined YES in step S513, in step S515, the brake control section62bmakes the brake system10start applying the reference braking force to the rear wheel4. For example, the reference braking force is set to the braking force of such extent that can maintain acceleration performance of the motorcycle100to desired performance while alleviating a shock generated at the time point at which the rear wheel4starts being driven. In step S515, more specifically, the brake control section62bstops the application of the braking force to the front wheel3, and then starts the application of the reference braking force to the rear wheel4. However, for example, the braking force may temporarily be generated on the front wheel3after a time point at which it is determined that the engine5starts outputting the drive power. As described above, at the time point, at which it is determined that the engine5starts outputting the drive power, onward, the brake control section62bcontinuously generates the reference braking force on the rear wheel4. As will be described below, the state where the reference braking force is generated on the rear wheel4continues to the time point at which the rear wheel4starts being driven due to the transmission of the drive power output from the engine5to the rear wheel4. Thus, the reference braking force is generated on the rear wheel4at the time point at which the rear wheel4starts being driven. 
More specifically, the drive power output from the engine5is transmitted to the rear wheel4via a power transmission system including the transmission mechanism6. When the drive power is transmitted to the rear wheel4at the time of starting driving of the rear wheel4, the shock occurs due to backlash of a gear in the power transmission system or the like, for example. Here, as described above, at the time point at which the rear wheel4starts being driven, the controller60causes the generation of the reference braking force on the rear wheel4. Thus, it is possible to alleviate the shock that occurs due to the transmission of the drive power at the time point at which the rear wheel4starts being driven. Preferably, from a perspective of alleviating the shock, which occurs at the time point of starting driving of the rear wheel4, the brake control section62bappropriately sets the above reference braking force. For example, from a perspective of appropriately alleviating the shock, which occurs at the time point of starting driving of the rear wheel4, the brake control section62bpreferably controls the reference braking force to such a magnitude that corresponds to a gear ratio of the transmission mechanism6. In addition, for example, from a perspective of further appropriately alleviating the shock, which occurs at the time point of starting driving of the rear wheel4, the brake control section62bpreferably controls the reference braking force to such a magnitude that corresponds to the drive power output from the engine5. Here, the brake control section62bmay control the reference braking force on the basis of the multiple parameters (for example, both of the gear ratio of the transmission mechanism6and the drive power output from the engine5). Next, in step S517, the control section62determines whether the rear wheel4starts being driven due to the transmission of the drive power output from the engine5to the rear wheel4. If it is determined that the rear wheel4starts being driven (step S517/YES), the processing proceeds to step S519. On the other hand, if it is determined that the rear wheel4does not start being driven (step S517/NO), the determination processing in step S517is repeated. For example, the control section62determines that the rear wheel4starts being driven in the case where the torque acting on the rear wheel4starts being increased. Such a determination can be made by using the detection result by the torque sensor45. Alternatively, the control section62determines that the rear wheel4starts being driven in the case where rotational acceleration of the rear wheel4starts being increased. Such a determination can be made by using the detection result by the rear-wheel rotational frequency sensor44. If it is determined YES in step S517, in step S519, the brake control section62bmakes the brake system10stop applying the reference braking force to the rear wheel4. As described above, the brake control section62bstops the generation of the reference braking force on the rear wheel4in the case where it is determined that the rear wheel4starts being driven. Next, the control flow illustrated inFIG.4is terminated. 
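The processing procedure of FIG. 4 can also be summarised as a short control-loop sketch. The scripted VehicleSignals stand-in and the gains of the reference-braking-force calculation are illustrative assumptions; only the ordering of the determinations follows steps S511 to S519 described above.

```python
# Sketch of the processing procedure of FIG. 4 (steps S511 to S519) for the
# switch from the decelerated state to the accelerated state. The scripted
# simulation and the reference-force gains are illustrative assumptions.

class VehicleSignals:
    """Scripted stand-in for the sensor-based determinations in the text."""
    def __init__(self):
        self.step = 0
    def advance(self):
        self.step += 1
    def wheels_braked(self):                 # step S511
        return self.step >= 1
    def engine_outputs_drive_power(self):    # step S513
        return self.step >= 3
    def rear_wheel_driven(self):             # step S517: torque (sensor 45) or
        return self.step >= 6                # rear-wheel rotational acceleration rises

def reference_braking_force(gear_ratio, engine_drive_power,
                            k_gear=10.0, k_power=0.5):
    # Larger for a higher gear ratio and a larger engine drive power, per the
    # preferred control described above (the gains are assumptions).
    return k_gear * gear_ratio + k_power * engine_drive_power

def run_transition(signals, gear_ratio=2.5, engine_power_w=8000.0):
    rear_brake_force = 0.0
    for _ in range(20):
        signals.advance()
        if not signals.wheels_braked():
            continue                                     # S511: NO -> repeat
        if rear_brake_force == 0.0 and signals.engine_outputs_drive_power():
            # S515: stop braking the front wheel and apply the reference
            # braking force to the rear wheel, keeping it applied.
            rear_brake_force = reference_braking_force(gear_ratio, engine_power_w)
            print("reference braking force applied:", rear_brake_force, "N")
        if rear_brake_force > 0.0 and signals.rear_wheel_driven():
            rear_brake_force = 0.0                       # S519: stop applying
            print("rear wheel driven -> reference braking force released")
            break

run_transition(VehicleSignals())
```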
As described above, in the control flow illustrated inFIG.4, at the time of switching from the decelerated state to the accelerated state during the adaptive cruise control, the control section62controls the braking force generated on each of the wheels such that the reference braking force is generated on the rear wheel4at the time point at which the rear wheel4starts being driven due to the transmission of the drive power output from the engine5to the rear wheel4. The above description has been made on the example in which the reference braking force is generated on the rear wheel4at the time point at which it is determined that the engine5starts outputting the drive power. However, the time point at which the reference braking force is generated on the rear wheel4(that is, the time point at which the reference braking force starts being applied to the rear wheel4) is not limited to the above example. More specifically, the brake control section62bmay cause the generation of the reference braking force on the rear wheel4at a time point corresponding to the time point at which the engine5starts outputting the drive power. For example, the brake control section62bmay cause the generation of the reference braking force on the rear wheel4at a time point later than the time point, at which the engine5starts outputting the drive power, by first reference duration. For example, the first reference duration is set to shorter duration than average duration that is assumed as duration from the time point at which the engine5starts outputting the drive power to the time point at which the rear wheel4starts being driven. Note that, as the time point at which the engine5starts outputting the drive power, for example, the time point at which the engine5starts outputting the drive power may be used, or a time point at which duration corresponding to a delay in communication between the devices, responsiveness of each of the components in the engine5, or the like is added to the above time point may be used. In addition, the brake control section62bmay estimate the time point at which the rear wheel4starts being driven, and may cause the generation of the reference braking force on the rear wheel4at a time point corresponding to the estimated time point. For example, the brake control section62bmay cause the generation of the reference braking force on the rear wheel4at a time point prior to the time point, which is estimated as the time point at which the rear wheel4starts being driven, by second reference duration. For example, the second reference duration is set to such duration that the reliable generation of the reference braking force on the rear wheel4is maintained at the time point at which the rear wheel4starts being driven. Note that the brake control section62bcan estimate the time point at which the rear wheel4starts being driven by estimating the duration that takes until the rear wheel4starts being driven on the basis of the crank angle of the engine5, for example. Such estimation can be made by using the detection result by the crank angle sensor46. The above description has been made on the example in which the generation of the reference braking force on the rear wheel4is stopped in the case where it is determined that the rear wheel4starts being driven. 
However, the time point at which the generation of the reference braking force on the rear wheel4is stopped (that is, the time point at which the reference braking force stops being applied to the rear wheel4) is not limited to the above example. More specifically, the brake control section62bmay estimate the time point at which the rear wheel4starts being driven, and may stop the generation of the reference braking force on the rear wheel4at a time point corresponding to the estimated time point. For example, the brake control section62bmay stop the generation of the reference braking force on the rear wheel4at a time point later than the time point, which is estimated as the time point at which the rear wheel4starts being driven, by third reference duration. For example, the third reference duration is set to such duration that the state where the reference braking force is generated on the rear wheel4can promptly be canceled after the time point at which the rear wheel4starts being driven elapses. <Effects of Controller> A description will be made on effects of the controller60according to the embodiment of the present invention. During the adaptive cruise control, when the state where the braking force is generated on at least one of the wheels of the motorcycle100is switched to the state where the rear wheel4as the drive wheel is driven using the drive power output from the engine5as the drive source of the motorcycle100, the control section62of the controller60controls the braking force generated on the at least one of the wheels such that the reference braking force is generated on the rear wheel4at the time point at which the rear wheel4starts being driven due to the transmission of the drive power output from the engine5to the rear wheel4. In this way, it is possible to alleviate the shock that occurs due to the transmission of the drive power at the time point of starting driving of the rear wheel4. Thus, the driver's comfort can be secured during the adaptive cruise control for the motorcycle100. Preferably, in the controller60, the control section62controls the reference braking force on the basis of the gear ratio of the transmission mechanism6in the motorcycle100. Here, at the time point at which the rear wheel4starts being driven, a magnitude of the drive power transmitted to the rear wheel4varies according to the gear ratio of the transmission mechanism6. More specifically, in the case where the constant drive power is output from the engine5, the magnitude of the drive power is increased with the higher gear ratio. Accordingly, by controlling the reference braking force on the basis of the gear ratio of the transmission mechanism6in the motorcycle100, the reference braking force can appropriately be controlled according to the magnitude of the drive power that is transmitted to the rear wheel4at the time point at which the rear wheel4starts being driven. Thus, it is possible to further appropriately alleviate the shock that occurs at the time point at which the rear wheel4starts being driven. Preferably, in the controller60, the control section62controls the reference braking force on the basis of the drive power output from the engine5. Here, at the time point at which the rear wheel4starts being driven, the magnitude of the drive power transmitted to the rear wheel4varies according to the drive power output from the engine5. 
Accordingly, by controlling the reference braking force on the basis of the drive power output from the engine5, the reference braking force can appropriately be controlled according to the magnitude of the drive power that is transmitted to the rear wheel4at the time point at which the rear wheel4starts being driven. Thus, it is possible to further appropriately alleviate the shock that occurs at the time point at which the rear wheel4starts being driven. Preferably, in the controller60, the control section62causes the continuous generation of the reference braking force on the rear wheel4at the time point, at which it is determined that the engine5starts outputting the drive power, onward, and then stops the generation of the reference braking force on the rear wheel4in the case where it is determined that the rear wheel4starts being driven. In this way, it is possible to improve the reliable generation of the reference braking force on the rear wheel4at the time point at which the rear wheel4starts being driven. Thus, it is possible to further appropriately alleviate the shock that occurs due to the transmission of the drive power at the time point at which the rear wheel4starts being driven. Preferably, in the controller60, the control section62causes the generation of the reference braking force on the rear wheel4at the time point corresponding to the time point at which the engine5starts outputting the drive power. In this way, while the reliable generation of the reference braking force on the rear wheel4at the time point at which the rear wheel4starts being driven is appropriately maintained, deterioration of the acceleration performance of the motorcycle100, which is caused by the generation of the reference braking force before the time point at which the rear wheel4starts being driven, can be suppressed. Thus, it is possible to appropriately secure the acceleration performance of the motorcycle100while alleviating the shock that occurs due to the transmission of the drive power at the time point at which the rear wheel4starts being driven. Preferably, in the controller60, the control section62estimates the time point at which the rear wheel4starts being driven, and causes the generation of the reference braking force on the rear wheel4at the time point corresponding to the estimated time point. In this way, while the reliable generation of the reference braking force on the rear wheel4at the time point at which the rear wheel4starts being driven is appropriately maintained, the deterioration of the acceleration performance of the motorcycle, which is caused by the generation of the reference braking force before the time point at which the rear wheel4starts being driven, can be suppressed. Thus, it is possible to appropriately secure the acceleration performance of the motorcycle100while alleviating the shock that occurs due to the transmission of the drive power at the time point at which the rear wheel4starts being driven. Preferably, in the controller60, the control section62estimates the time point at which the rear wheel4starts being driven, and stops the generation of the reference braking force on the rear wheel4at the time point corresponding to the estimated time point. In this way, it is possible to promptly cancel the state where the reference braking force is generated on the rear wheel4after the time point at which the rear wheel4starts being driven elapses. Therefore, it is possible to further appropriately secure the acceleration performance of the motorcycle100. 
The present invention is not limited to each of the embodiments that have been described. For example, all or parts of the embodiments may be combined, or only a part of each of the embodiments may be implemented.

REFERENCE SIGNS LIST

1: Trunk, 2: Handlebar, 3: Front wheel, 3a: Rotor, 4: Rear wheel, 4a: Rotor, 5: Engine, 6: Transmission mechanism, 10: Brake system, 11: First brake operation section, 12: Front-wheel brake mechanism, 13: Second brake operation section, 14: Rear-wheel brake mechanism, 21: Master cylinder, 22: Reservoir, 23: Brake caliper, 24: Wheel cylinder, 25: Primary channel, 26: Secondary channel, 27: Supply channel, 31: Inlet valve, 32: Outlet valve, 33: Accumulator, 34: Pump, 35: First valve, 36: Second valve, 41: Inter-vehicular distance sensor, 42: Input device, 43: Front-wheel rotational frequency sensor, 44: Rear-wheel rotational frequency sensor, 45: Torque sensor, 46: Crank angle sensor, 47: Gear position sensor, 48: Master-cylinder pressure sensor, 49: Wheel-cylinder pressure sensor, 50: Hydraulic pressure control unit, 51: Base body, 60: Controller, 61: Acquisition section, 62: Control section, 62a: Drive control section, 62b: Brake control section, 100: Motorcycle
11858513 | DETAILED DESCRIPTION A host vehicle1comprising a controller2in accordance with an embodiment of the present invention will now be described with reference to the accompanying figures. The controller2is configured to determine a target operational speed band of the host vehicle1in order to improve operating efficiency and/or to reduce journey time. The host vehicle1is a road vehicle, such as an automobile. It will be understood that the controller2may be implemented in other vehicle types, such as a utility vehicle, a sports utility vehicle (SUV), an off-road vehicle, etc. As described herein, the controller2is configured to implement a dynamic programming algorithm for controlling the target operational speed of the host vehicle1as it travels along a route R (illustrated inFIG.3). The control algorithm may be implemented as part of an autonomous control function, for example comprising one or more of the following: Adaptive Cruise Control (ACC), Intelligent Cruise Control (ICC), Green Light Optimized Speed Advisory (GLOSA), and Traffic Jam Assist (TJA). Alternatively, or in addition, the algorithm may be used to coach a driver of the host vehicle1to follow a target operational speed or to remain with a target operational speed band, for example indicating when to lift off an accelerator pedal or when to change gear. The host vehicle1in the present embodiment comprises a Plug-in Hybrid Electric Vehicle (PHEV) having a parallel hybrid system. The host vehicle1comprises an internal combustion engine (ICE)3, a Belt Integrated Starter Generator (BISG)4and an Electric Rear Axle Drive (ERAD)5. A traction battery6is provided for supplying electrical energy to the ERAD5. The traction battery6is a high voltage (HV) battery in the present embodiment. The host vehicle1has a front axle7and a rear axle8. The ICE3and the BISG4are configured selectively to output a traction torque to the front axle7to drive first and second wheels W1, W2. The ERAD5is configured to output a traction torque to the rear axle8to drive third and fourth wheels W3, W4. The ICE3is permanently connected to the BISG4. The ICE3comprises a crankshaft9which is mechanically connected to a torque converter (not shown) which in turn is connected to a multi-speed transmission11. A disconnect clutch10is provided for selectively disconnecting the crankshaft9from the transmission11. As described herein, a torque demand Twh,drvis generated by an autonomous or semi-autonomous vehicle control system. The parallel hybrid system is operable in a plurality of hybrid powertrain modes to deliver the torque demand Twh,drv. The hybrid powertrain modes comprise selectively operating one or more of the ICE3, the BISG4and the ERAD5to deliver the torque demand Twh,drv. The ERAD5may output a positive traction torque Twh,eradto propel the host vehicle1; or a negative traction torque Twh,eradto regenerate energy for recharging the traction battery6. The BISG4may output a positive traction torque Twh,misgto provide a torque assist for the ICE3; or may output a negative traction torque to perform torque charging of the traction battery6. When referring to power, torque, and speed signals, the subscript wh is used herein to indicate the wheel frame of reference; and the subscript wh is omitted to denote an actuator frame of reference. It will be understood that the controller2may be implemented in other drivetrain configurations, for example the ERAD5may be omitted. 
Alternatively, the controller2could be used in an Electric Vehicle (EV) which does not include an internal combustion engine. Controller Architecture & Data Processing Functions As illustrated inFIG.2, the controller2comprises processing means in the form of a processor12. The processor12is connected memory means in the form of a system memory13. A set of computational instructions is stored on the system memory13and, when executed, the computational instructions cause the processor12to perform the method(s) described herein. The processor12is configured to receive a first electrical input signal SIN1from a transceiver14. The transceiver14is configured to communicate with one or more target vehicle15-n(where the suffix n differentiates between different target vehicles) proximal to the host vehicle1or along the route R of the host vehicle1; this form of communication is referred to herein as Vehicle-2-Vehicle (V2V) communication. Alternatively, or in addition, the transceiver14is configured to communicate with infrastructure, such as one or more traffic control signals18-n(where the suffix n differentiates between different traffic control signals); this form of communication is referred to herein as Vehicle-2-Infrastructure (V2I) communication. The V2V and V2I communication are collectively referred to as V2X communication. The processor12is configured to receive a second electrical input signal SIN2from at least one vehicle sensor16provided on-board the host vehicle1. The at least one vehicle sensor16in the present embodiment comprises a forward-looking radar16provided on the host vehicle1. The processor12is configured to receive a third electrical input signal SIN3from a navigation system17to determine a geospatial location of the host vehicle1. The processor12may implement a route planning function to determine the route R, for example to plan the route from a current position of the host vehicle1to a user-specified destination. The processor12may access geographic map data stored on the system memory13to implement the route planning function. The geographic map data may, for example, comprise a road network. Alternatively, the route planning may be performed by a separate control unit, for example integrated into the navigation system17. The target vehicle15-nmay hinder or impede progress of the host vehicle1depending on where the host vehicle1encounters the target vehicle15-n. The host vehicle1may be hindered if the target vehicle15-nis encountered on a section of the current route R which is unfavourable for performing an overtaking manoeuvre, but may continue substantially unhindered if the target vehicle15-nis encountered on a section of the current route R which is favourable for performing an overtaking manoeuvre, for example a section of road or highway having multiple lanes. The location where the host vehicle1encounters the target vehicle15-nis a function of time and the relative speed of the host vehicle1and the target vehicle15-n. The traffic control signals18-nmay hinder or impede progress of the host vehicle1depending on the time when the host vehicle1arrives at the traffic control signals18-n. The host vehicle1may be hindered by the traffic control signals18-nif the host vehicle1arrives at the traffic control signals during a red phase (i.e. when traffic is prohibited from proceeding). The host vehicle1may continue substantially unhindered if the host vehicle1arrives at the traffic control signals18-nduring a green phase (i.e. when traffic is allowed to proceed). 
Thus, the target vehicle 15-n and the traffic control signals 18-n are referred to herein as time-dependent obstacles. Other time-dependent obstacles include, for example, a pedestrian crossing or a level-crossing.

Vehicle Modeling

The dynamic programming algorithm uses a backward-facing quasi-static longitudinal vehicle model for the optimization of the vehicle speed trajectory and the powertrain state. This model is now described in more detail.

Vehicle Longitudinal Dynamics

In quasi-static simulations, the input variables are the vehicle speed V_veh, the vehicle acceleration a_veh, and the road gradient angle θ_road. The input variables are assumed to be constant for a short discretization step Δt. The tractive force F_drv required to drive the vehicle for a given profile is calculated by Newton's 2nd law, expressed as:

F_drv = m_veh·a_veh + F_r + F_a + F_g   (1)

where m_veh denotes the inertia mass of the vehicle including all rotational inertias. The rolling friction force F_r, the aerodynamic drag force F_a, and the gravitational force induced by the road gradient F_g are expressed in the following equations:

F_r = c_r·m_veh·g·cos(θ_road)   (2)
F_a = 0.5·ρ_a·A_f·c_d·V_veh²   (3)
F_g = m_veh·g·sin(θ_road)   (4)

where c_r is the rolling friction coefficient, g is the gravitational acceleration, ρ_a is the density of air, A_f is the vehicle's frontal area, and c_d is the aerodynamic drag coefficient. The combined wheel torque of the vehicle is then calculated by the following equation:

T_wh,drv = F_drv·r_wh   (5)

where r_wh is the wheel radius. Assuming no wheel slip and that the rotational speed of all wheels is equal, the wheel speed is given by:

ω_wh = V_veh/r_wh   (6)

The tractive torque is distributed across the front and rear axles 7, 8 according to the control input u1 ∈ [0,1] as follows:

T_wh,drv,rr = T_wh,drv·u1   (7)
T_wh,drv,fr = T_wh,drv·(1 − u1)   (8)

where T_wh,drv,rr and T_wh,drv,fr denote the tractive torque at the rear axle 8 and the front axle 7, respectively.

Rear Axle Model

The torque T_erad at the output shaft of the ERAD 5 is calculated from the corresponding torque at the wheel, T_wh,drv,rr, after considering all lumped driveline losses η_gb,erad and the transmission ratio v_gb,erad:

T_erad = T_wh,drv,rr·η_gb,erad/v_gb,erad,  for T_wh,drv,rr < 0
T_erad = T_wh,drv,rr/(η_gb,erad·v_gb,erad),  for T_wh,drv,rr ≥ 0   (9)

Similarly, the rotational speed ω_erad at the output shaft of the ERAD 5 is expressed as:

ω_erad = ω_wh·v_gb,erad   (10)

The electrical power of the ERAD 5, including the lumped power losses of the motor and inverter, is formulated in look-up maps of the form:

P_ele,erad = f_loss,erad(T_erad, ω_erad, V_batt)   (11)

where V_batt is the voltage of the traction battery 6.

Front Axle Model

Similarly to the rear axle, and assuming that the torque converter is in a locked-up state, the torque converter input torque T_crnk is given by:

T_crnk = T_wh,drv,fr·η_gb,fr/v_gb,fr(κ_gr),  for T_wh,drv,fr < 0
T_crnk = T_wh,drv,fr/(η_gb,fr·v_gb,fr(κ_gr)),  for T_wh,drv,fr ≥ 0   (12)

where η_gb,fr is the efficiency of the front axle transmission and driveline including the losses of the torque converter. The gear ratio v_gb,fr is a function of the gear κ_gr, which is determined by a gear shifting strategy:

κ_gr = f_gr(ω_wh, T_wh,drv,fr)   (13)

Alternatively, the gear κ_gr could be optimized as part of the powertrain control.
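Under the definitions of Equations (1) to (10), the backward quasi-static calculation from a speed, acceleration and gradient sample to the axle torques can be sketched as follows. The numerical parameter values are illustrative assumptions only; the efficiency handling in erad_torque follows Equation (9).

```python
import math

# Backward quasi-static longitudinal model, Equations (1) to (10).
# All parameter values below are illustrative assumptions.
M_VEH = 2200.0      # vehicle inertia mass incl. rotational inertias [kg]
C_R = 0.01          # rolling friction coefficient
RHO_A = 1.2         # density of air [kg/m^3]
A_F = 2.5           # frontal area [m^2]
C_D = 0.30          # aerodynamic drag coefficient
R_WH = 0.35         # wheel radius [m]
G = 9.81
ETA_GB_ERAD = 0.95  # lumped rear driveline efficiency
V_GB_ERAD = 9.0     # rear axle (ERAD) transmission ratio

def tractive_force(v_veh, a_veh, theta_road):
    f_r = C_R * M_VEH * G * math.cos(theta_road)          # Eq. (2)
    f_a = 0.5 * RHO_A * A_F * C_D * v_veh ** 2            # Eq. (3)
    f_g = M_VEH * G * math.sin(theta_road)                # Eq. (4)
    return M_VEH * a_veh + f_r + f_a + f_g                # Eq. (1)

def axle_torques(v_veh, a_veh, theta_road, u1):
    f_drv = tractive_force(v_veh, a_veh, theta_road)
    t_wh = f_drv * R_WH                                    # Eq. (5)
    omega_wh = v_veh / R_WH                                # Eq. (6)
    t_rr = t_wh * u1                                       # Eq. (7), rear axle 8
    t_fr = t_wh * (1.0 - u1)                               # Eq. (8), front axle 7
    return t_rr, t_fr, omega_wh

def erad_torque(t_wh_rr, omega_wh):
    """ERAD shaft torque and speed per Equations (9) and (10): driveline losses
    increase the demanded torque in traction and reduce it in regeneration."""
    if t_wh_rr >= 0.0:
        t_erad = t_wh_rr / (ETA_GB_ERAD * V_GB_ERAD)
    else:
        t_erad = t_wh_rr * ETA_GB_ERAD / V_GB_ERAD
    return t_erad, omega_wh * V_GB_ERAD

if __name__ == "__main__":
    t_rr, t_fr, w = axle_torques(v_veh=25.0, a_veh=0.5, theta_road=0.02, u1=0.4)
    print(erad_torque(t_rr, w), t_fr)
```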
The input rotational speed ω_crnk of the torque converter is given by:

ω_crnk = ω_wh·v_gb,fr   (14)

Given a transmission ratio v_bisg of the BISG 4, the engine torque T_eng is expressed as:

T_eng = T_crnk − v_bisg·T_bisg   (15)

where the BISG torque T_bisg is expressed as a function of the optimization variable u2 ∈ [−2,1]:

T_bisg = u2·T_crnk   (16)

The instantaneous fuel flow ṁ_f of the ICE 3 can be obtained from a steady-state map, which is expressed as:

ṁ_f = f_f(T_eng, ω_crnk)   (17)

A fully warm engine is assumed, allowing the dependency on ICE coolant/oil temperatures to be dropped. Similarly to the ERAD 5, the electrical power of the BISG 4 is formulated in look-up maps of the form:

P_ele,bisg = f_ele,bisg(T_bisg, ω_crnk, V_batt)   (18)

Traction Battery Model

The dynamics of the traction battery 6 are considered and modelled as an equivalent circuit consisting of n_cell battery cells connected in series. Each cell circuit consists of a resistance and a voltage source. Assuming the same charge and temperature T_hv,batt across the battery pack, the total resistance and open-circuit voltage source are given by the following equations:

R_hv,batt = f_R(SOC, T_hv,batt)   (19)
V_oc,batt = f_V(SOC, T_hv,batt)   (20)

The rate of change of SOC is expressed by:

d(SOC)/dt = (V_oc,batt − √(V_oc,batt² − 4·P_batt·R_hv,batt)) / (2·R_hv,batt·Q_hv,batt)   (21)

where Q_hv,batt is the HV battery capacity and P_batt is the HV battery net power. The net traction battery power is expressed in terms of the required sum of electrical consumers:

P_batt = P_ele,erad + P_ele,bisg + P_dcdc   (22)

where P_dcdc represents all auxiliary power requests, i.e. DC-DC converter and air-conditioning loads.

Vehicle Model Constraints and Validation

The model accounts for system constraints so as to disregard any infeasible solutions. The constraints considered in this work include the following:
- ICE steady-state torque limits
- ERAD and BISG torque limits
- Driveline torque limits
- Traction battery power and SOC limits

One or more of these constraints may be used when determining acceleration limits of the host vehicle 1.

Predictive Control Algorithm

Optimal Control Theory & Dynamic Programming

A brief summary of the relevant control theory and mathematical formulation of dynamic programming now follows.

Optimal Control Problem Formulation

The optimal control problem can be described by first defining a discrete dynamic system x_k+1 with n states x_k, m inputs u_k, and l exogenous inputs ω_k. This can be stated as follows: find an admissible control policy π = {μ_0(x), μ_1(x), . . . , μ_N−1(x)} for k = 0, 1, . . . , N−1 such that the cost function (Equation 23) is minimized and the constraints (Equations 25 to 29) are satisfied.

min_{u_k ∈ U_k} { g_N(x_N) + Σ_{k=0}^{N−1} g_k(x_k, u_k, ω_k) }   (23)
x_k+1 = f_k(x_k, u_k, ω_k)   (24)
x_k ∈ X_k ⊆ R^n   (25)
x_0 = x_IC   (26)
x_N ∈ T ⊆ R^n   (27)
u_k ∈ U_k ⊆ R^m   (28)
ω_k ∈ W_k ⊆ R^l   (29)
∀ k = 0, 1, . . . , N−1   (30)

The function g_N(x_N) is the terminal cost term and the term g_k(x_k, u_k, ω_k) is the stage cost, i.e. the cost associated with applying the control action u_k at a discrete time (or distance) k to the discrete-time dynamic system (Equation 24). The notation for the functions f_k, g_k indicates that both the cost term and the dynamic system can be time-varying. The initial condition is set to x_IC and the state at the last iteration is constrained within the set T. The state variables, control inputs and exogenous inputs are constrained to the time-variant sets X_k, U_k, and W_k, respectively.

Dynamic Programming

The dynamic programming algorithm is an optimization method which identifies a global optimal solution given a problem formulation and constraints.
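Equations (19) to (22) can be combined into a small state-update sketch. The constant open-circuit voltage, internal resistance, capacity and Euler time step below replace the look-up maps f_R and f_V and are illustrative assumptions, as is the sign convention that a positive net battery power discharges the traction battery 6.

```python
import math

# Sketch of the traction battery model, Equations (19) to (22). Constant
# parameters replace the look-up maps and are illustrative assumptions. The
# sign convention (positive P_batt discharges the battery) is also an assumption.

V_OC = 360.0         # open-circuit voltage of the pack [V]
R_HV = 0.12          # total pack resistance [ohm]
Q_HV = 120.0 * 3600  # capacity [As] (120 Ah)

def soc_derivative(p_batt_w):
    """Rate of change of SOC for a requested net battery power, Eq. (21)."""
    discriminant = V_OC ** 2 - 4.0 * p_batt_w * R_HV
    if discriminant < 0.0:
        raise ValueError("requested power exceeds the battery capability")
    i_batt = (V_OC - math.sqrt(discriminant)) / (2.0 * R_HV)   # pack current [A]
    return -i_batt / Q_HV          # SOC falls when the pack delivers power

def soc_step(soc, p_ele_erad, p_ele_bisg, p_dcdc, dt_s=1.0):
    """Euler update of SOC; the net power is the sum of consumers, Eq. (22)."""
    p_batt = p_ele_erad + p_ele_bisg + p_dcdc
    return soc + soc_derivative(p_batt) * dt_s

if __name__ == "__main__":
    # One second of electric driving at 15 kW plus 500 W of auxiliaries.
    print(soc_step(soc=0.80, p_ele_erad=15e3, p_ele_bisg=0.0, p_dcdc=500.0))
```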
The dynamic programming algorithm is based on what is referred as Bellman's “Principle of Optimality” to simplify a complex problem by breaking it down to smaller chunks recursively, without sacrificing optimality 25 (Bellman, R., “Dynamic programming,” (Courier Corporation, 2013)). Typically, the dynamic programming algorithm is used to determine the optimal controller which is not causal as to produce a benchmark for any other causal controller. On the assumption that all the future disturbances and reference inputs are known at the onset of computation, the controller2could be used in real-time control applications. For the optimal control problem to be solved numerically, the time (or distance), the state space, and the control space need to be discretized. At index k, the state space is discretized to the set Xk={xk1, xk2, . . . , xkp}, where the superscript denotes the grid point at a given index k, with p indicating the number of grid points at x index. Similarly, the control space set is defined as Uk={uk1, . . . , ukp}. The dynamic programming algorithm proceeds backwards in time (or distance) from N−1 to 0 to evaluate the optimal cost-to-go function Jk(xi) at every grid point in the discretized distance (or time) space:1. End cost calculation step: JN(xi)={gN(xi),forxi∈T∞,else(31)2. Intermediate calculation step: Jk(xi)=minuk∈Uk{gk(xi·uk)+Jk+1(fk(xi,uk,ωk))}(32) The control policy π={u0(x), μ1(x), . . . , μN-1(x)} is optimal if it consists of the optimal control signal at each node which minimizes the right side of this equation. Multilinear interpolation is used to evaluate the cost to go function when the control policy falls between grid points. The communication between the host vehicle1and the surrounding infrastructure and target vehicles15-nis illustrated inFIG.3. The controller2in the preset embodiment is scalable depending on the level of information available and is operable when V2V and/or V2I communication is unavailable. The controller2communicates with infrastructure, such as the traffic control signals15-non the route R. The controller2also communicates with one or more target vehicle15to assess the traffic surrounding the host vehicle1, for example the traffic ahead of the host vehicle1on the route R. The controller2may optionally communicate with a remote server, for example over a wireless communication network, to extend the horizon. The communication with the remote server may identify a road incident on the route R and/or provide real-time traffic information on the route R. The acceleration of the host vehicle1is controlled in response to road attributes, such as road curvature, changes in altitude, altitude, intersections and traffic control signals; and/or changes in driving conditions, such as speed limits and traffic/congestion. The controller2may also be configured to take account of additional factors. For example, the selection of a low cruising speed may improve operating efficiency but result in an unacceptable increase in the journey time. The journey time to energy usage trade-off, is typically nonlinear and will depend on the specific driving scenario. A schematic representation of the architecture of the controller2is shown inFIG.4. The functions are categorized as: (i) predictive control algorithms; (ii) supporting functions that combine information from various sources; and (iii) vehicle on-board controllers which supply the necessary information. 
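Before these functions are described in turn, the backward value iteration of Equations (31) and (32) can be illustrated for a small, distance-discretized speed grid. The toy stage cost, the grid sizes, the terminal set and the linear interpolation of the cost-to-go are illustrative assumptions; the sketch shows only the mechanics of the recursion, not the full vehicle model.

```python
import numpy as np

# Minimal backward dynamic-programming recursion over a distance grid,
# following Equations (31) and (32). Stage cost, grids and terminal set are
# toy illustrative assumptions.

DS = 50.0                                    # distance step [m]
N = 10                                       # number of steps (500 m horizon)
V_GRID = np.linspace(5.0, 30.0, 26)          # speed grid [m/s]
A_GRID = np.linspace(-1.0, 1.0, 9)           # acceleration inputs [m/s^2]
INF = 1e12

def stage_cost(v, a, w_time=1.0, w_acc=0.5):
    # Toy cost: time to cover the step plus a squared-acceleration penalty.
    return w_time * DS / max(v, 0.1) + w_acc * a ** 2

def next_speed(v, a):
    # Kinematics over a distance step: v_next^2 = v^2 + 2*a*DS.
    return np.sqrt(max(v ** 2 + 2.0 * a * DS, 0.01))

# 1. End-cost calculation step, Eq. (31): zero inside the terminal set T,
#    infinite outside (here T admits every grid speed).
J = np.zeros_like(V_GRID)

# 2. Intermediate calculation steps, Eq. (32), proceeding backwards from N-1 to 0.
policy = np.zeros((N, V_GRID.size))
for k in range(N - 1, -1, -1):
    J_new = np.full_like(V_GRID, INF)
    for i, v in enumerate(V_GRID):
        for a in A_GRID:
            v_next = next_speed(v, a)
            if not (V_GRID[0] <= v_next <= V_GRID[-1]):
                continue                          # speed constraint violated
            # 1-D linear interpolation of the cost-to-go between grid points.
            cost = stage_cost(v, a) + np.interp(v_next, V_GRID, J)
            if cost < J_new[i]:
                J_new[i], policy[k, i] = cost, a
    J = J_new

print("optimal first acceleration from 20 m/s:",
      policy[0, np.argmin(np.abs(V_GRID - 20.0))])
```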
The predictive control algorithms comprise a vehicle speed control unit20and a hybrid powertrain control unit21. The supporting functions comprise a route-based predictive optimizer22, a static speed constraints calculator23, an energy recuperation estimator24, a target vehicle speed trajectory predictor25, an auxiliary load estimator26and a road-load estimator27. The vehicle on-board controllers comprise a route preview calculator28, such as an eHorizon module available from Continental AG; a powertrain control module29and a V2I communication module30. The route-based predictive optimizer22is used to pre-emptively determine a State of Charge (SOC) of the traction battery6throughout a journey. The route-based predictive optimizer22may, for example, takes into consideration trip information such as one or more of the following: road speed limits, historical aggregated vehicle speed, road gradient, road type and the available energy from the traction battery6. The road-load estimator27is provided to estimate the traction torque requirement. The auxiliary load estimator26may, for example, predict DC-DC converter and HVAC demand. The energy recuperation estimator24estimates energy recuperation. The hybrid powertrain control unit21is configured to constantly adapts the level of deceleration of the host vehicle1in dependence on the determined SOC and the future potential for energy recuperation. Static Velocity Limit Determination With reference toFIG.5, the static speed constraints calculator23receives the following signals from the route preview calculator28in the form of a receding horizon:A road gradient array θroad,vec=[dθroad,horθroad,hor] comprising the road grade angle vector θroad,horat the corresponding distance vector dθroad,hor.A road curvature array [dφroad,horφroad,hor] comprising the road curvature angle vector θroad,horat the corresponding distance vector φroad,hor.A speed limit array [θvlim,horVlim,hor] comprising the road speed limit vector Vlim, horat the corresponding distance vector dvlim,hor. The static speed constraints calculator23comprises a maximum speed road curvature module31, a maximum speed limit arbitration module32, a longitudinal acceleration look-up module33, a lateral acceleration look-up module34and a speed constraint smoothing module35. The speed constraint smoothing module35receives the outputs from the route preview calculator28, the speed limit arbitration module32, the longitudinal acceleration look-up module33and the lateral acceleration look-up module34and generates maximum and minimum speed constraints Vlim,max,hor, Vlim,min,hor. The speed constraint smoothing module35outputs the following arrays:A minimum speed array [dVlim,min,horVlim,min,hor] comprising the minimum speed constraint Vlim,min,horat the corresponding distance vector dVlim,min,hor.A maximum speed array [dV lim,max,horVlim,max,hor] comprising the maximum speed constraint Vlim,max,horat the corresponding distance vector dVlim,max,hor. The maximum speed road curvature module31calculates a speed limit due to lateral acceleration exerted on the host vehicle1when travelling around a bend in a road (the curvature of the bend being determined with reference to geographical map data). The maximum speed constraint is identified as the smaller of the speed determined by the maximum speed limit arbitration module32and the maximum speed road curvature module31. 
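The arbitration between the legal speed limit and the curvature-based limit can be illustrated with a short sketch. The lateral-acceleration target stands in for the look-up table of the lateral acceleration look-up module 34 and is an illustrative assumption, as is the fraction used for the minimum constraint; the smoothing performed by module 35 is omitted.

```python
import math

# Sketch of the maximum-speed arbitration in the static speed constraints
# calculator 23. The lateral-acceleration target and the minimum-speed
# fraction are illustrative assumptions.

A_LAT_TARGET = 2.5   # permissible lateral acceleration [m/s^2]

def curvature_speed_limit(curvature_per_m):
    """Maximum speed for a bend of given curvature (1/radius) such that the
    lateral acceleration v^2 * curvature stays at the target."""
    if abs(curvature_per_m) < 1e-6:
        return float("inf")                  # straight road: no curve limit
    return math.sqrt(A_LAT_TARGET / abs(curvature_per_m))

def max_speed_constraint(legal_limit_mps, curvature_per_m):
    # The maximum speed constraint is the smaller of the legal limit and the
    # curvature-based limit.
    return min(legal_limit_mps, curvature_speed_limit(curvature_per_m))

def min_speed_constraint(v_max_mps, current_speed_mps, fraction=0.6):
    # Derived as a percentage of the maximum constraint, but never above the
    # actual vehicle speed so that the optimization is not forced into an
    # infeasible region.
    return min(fraction * v_max_mps, current_speed_mps)

if __name__ == "__main__":
    # 90 km/h legal limit approaching a 150 m radius bend.
    v_max = max_speed_constraint(90.0 / 3.6, curvature_per_m=1.0 / 150.0)
    print(v_max, min_speed_constraint(v_max, current_speed_mps=22.0))
```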
Subsequently, the speed constraint smoothing module35smooths the maximum speed limit according to lateral and longitudinal acceleration target look-up tables defined by the longitudinal and lateral acceleration look-up modules33,34. The smoothing module may also consider the road gradient. The minimum speed constraint may be derived as a percentage of the equivalent maximum speed constraint, but also considering the actual vehicle speed (so that optimization is not constrained into an infeasible region). Alternatively, or in addition, functional road class or road type can also be considered in the determination of the lowest speed. As shown inFIG.4, the minimum speed array [dVlim,min,horVlim,min,hor] and the maximum speed array [dVlim,max,horVlim,max,hor] are output to the vehicle speed control unit20. Leading Vehicle Velocity Trajectory Prediction The target vehicle speed trajectory predictor25is configured to predict the speed of any target vehicles15-nin the vicinity of the host vehicle1, particularly any target vehicles15-nwhich may hinder or obstruct the motion of the host vehicle1along the route R. The current state of the target vehicles15-nis transmitted to the host vehicle1as part of the V2V communication. Typical information available from the V2V communication includes the current speed, acceleration and location of the target vehicle15-n. A rule-based model for predicting the speed of a target vehicle15-nis illustrated inFIG.6. The prediction is conducted for each target vehicle15-n, one target vehicle15-nat a time, within an optimization horizon of the host vehicle1along the route R. The optimization horizon may consist of a sub-section of the route R, the optimization horizon continually changing as the host vehicle1progresses along the route R (providing a rolling horizon). The optimization horizon may, for example, comprise a sub-section of the route R having a length greater than or equal to 250 m, 500 m, 750, or 1,000 m. Alternatively, the optimization horizon may consist of the entire route R. The algorithm starts with the target vehicle15-nthe furthest away from the host vehicle1and progresses with target vehicle15-ncloser to the host vehicle1. The prediction of the speed of each target vehicle15-nassumes that an initial rate of acceleration or deceleration continues for a predetermined period of time. A model for predicting the speed of a first target vehicle15-1is shown inFIG.6. The model is initiated (BLOCK100). In the case of an initial deceleration, the prediction assumes the first target vehicle15-1will continue to decelerate at a constant rate of deceleration for a predetermined period of time (BLOCK105). The predetermined period of time for deceleration of the first target vehicle15-1is calibratable. A more accurate prediction may be achieved by assuming that the deceleration will end after a few seconds, rather than assuming that the vehicle will continue to decelerate until it comes to a standstill. After an initial period of deceleration, the first target vehicle15-1may be assumed to start accelerating again. If the first target vehicle15-1is determined to have stopped, an assumption is made that it will remain stationary for a predetermined period of time and will then accelerate at a predetermined acceleration. The period of time that the first target vehicle15-1remains stationary is calibratable, as too is the acceleration. 
In the case of an initial acceleration, the prediction assumes that the first target vehicle15-1will continue to accelerate for a predetermined period of time (BLOCK110). The predetermined period of time for acceleration of the first target vehicle15-1is calibratable. The controller2may change the predicted movement of the host vehicle1in response to a change in the environment. For example, the prediction model would transition from assuming the continued acceleration of the first target vehicle15-1if the first target vehicle15-1is identified as following a second target vehicle15-2, i.e. a preceding vehicle (BLOCK115). This change may, for example, be implemented upon determining that the first target vehicle15-1is within a predefined distance of the second target vehicle15-2. The prediction model assumes that the first target vehicle15-1will subsequently attempt to keep a certain headway between the first target vehicle15-1and the second target vehicle15-2. The model may define a target headway distance between the first target vehicle15-1and the second target vehicle15-2. Another possibility is that the initial acceleration or deceleration results in the speed of the first target vehicle15-1being substantially equal to a determined speed limit (either a legal speed limit or a speed limit determined by road curvature) in which case the prediction would assume that the first target vehicle15-1would proceed at the speed limit (BLOCK120). If the first target vehicle15-1gets close to the second target vehicle15-2, the predicted speed of the first target vehicle15-1is reduced. Note that there is no possible transition from the first target vehicle15-1following the second target vehicle15-2to following the speed limits. This is because the prediction would never predict any of the target vehicles15-nas exceeding the speed limit (at least for longer than momentarily e.g. when speed limit is decreasing), nor would it try to predict any overtaking. Consequently, a large gap between the first target vehicle15-1and the second target vehicle15-2cannot develop and thus this transition is not necessary. The prediction model could be modified also to take into account infrastructure, for example traffic control signals. It will be understood that the prediction model may be updated cyclically when new information is available regarding the position and/or movements of the target vehicles15-n. The prediction model predicts the speed and movement of each of the identified target vehicles15-n. Other techniques for modelling the speed and movement of the target vehicles15-nmay be used. Optimization Algorithm Formulation Problem Formulation and Decomposition The determination of an appropriate vehicle speed trajectory and the control of the vehicle propulsion system is dependent on a plurality of states/inputs and time-variant, nonlinear system dynamics. The controller2utilises the following states and inputs: State:x=[t VvehSOC κgr] (33) Control input:u=[avehu1u2u3] (34) Ex. input: ωk=[Edcdc,estθroad,vecTiceTHVcFrl] (35) where t, Edcdc,est, Tice, THV, and cFrldenote time, an estimate of auxiliary energy consumption, ICE coolant temperature, HV battery temperature and the road-load force coefficients (at zero road gradient), respectively within the optimization horizon. The front axle transmission input u3is defined as: u3={1,gearupshift0,gearhold-1,geardownshift(36) A level of approximation is appropriate. 
One option would be to linearize the system (as defined in Equation 24) and make various approximations to the constraints (as defined in Equations 25 to 29). This option is not implemented in the present embodiment. An alternative would be to sacrifice a level of optimality, using an approximate Nonlinear Model Predictive Controller (NMPC) algorithm. As outlined above, the dynamic programming algorithm has been implemented in the present embodiment. In order to reduce the computational burden of the dynamic programming algorithm, the optimization problem is decomposed into two stages, as represented by the vehicle speed control unit20and the hybrid powertrain control unit21inFIG.4. The vehicle speed control unit20and the hybrid powertrain control unit21will now be described in more detail. Vehicle Speed Optimization The vehicle speed control unit20receives the following: static speed constraints from the function Static Speed Constraints Calculation; time-varying speed constraints from the function Surrounding Vehicle Speed Trajectory Predictor; time-repeatable speed constraints from the V2I communication channel, including the green and red phasing of the traffic control signals; the current vehicle speed (as initial condition) and SOC; and the array θroad,vecand the road-load force coefficients cFrl. Setting aside the time-variant constraints derived from the V2X communication, the dynamic programming problem cost can be defined as follows: gk(xk, uk, ωk)=Wtime·t+Wacc·aveh²+Wroad·Froad (37) with State: xk=Vveh (38) Control input: uk=aveh (39) Exogenous input: ωk=[θroad,vec, cFrl] (40) where Wtime, Wacc, and Wroadare the cost weights for the time term t, the squared acceleration term aveh², and the road-load force term Froad=Fr+Fa+Fg, respectively. A distance-based grid is adopted. The weights Wtime, Wacc, Wroadreflect the relative importance of each term. The acceleration term is used to avoid aggressive (de)accelerations. Wacccould be a function of SOC and predicted recuperation energy, i.e. Wacc=f(SOC), to adapt the level of acceleration or deceleration according to the powertrain state. The time term is used to discourage input mode transitions which add significant time to the journey. Finally, the road-load term discourages excessively high vehicle cruising speeds, as aerodynamic losses increase steeply with vehicle speed. The time during a mode transition is calculated starting from the following equations: ½·aveh,Ds·tDs²+vstart,Ds·tDs=Ds (41) and vstart,Ds=vend,Ds−aveh,Ds·tDs (42), where aveh,Dsis the acceleration used during distance step Ds, tDsis the time spent to cover distance step Ds, vstart,Dsis the vehicle speed at the beginning of the distance step, and vend,Dsis the vehicle speed at the end of the distance step. The vehicle speed at the end of the distance step vend,Dsand the acceleration during the distance step aveh,Dsare known based on the model inputs. The distance step Dsis also known based on the optimization problem definition. The vehicle speed at the beginning of the distance step vstart,Ds(the model output state in forward dynamic programming) and the time to cover the distance step tDscan be calculated. The time spent to cover the distance step tDsis required for time-keeping and cost calculation. The vehicle speed at the beginning of the distance step vstart,Dsis obtained directly from Equation (42); and the time to cover the distance step tDscan be obtained by substituting Equation (42) into Equation (41).
This results in the following second order polynomial equation: −½·aveh,Ds·tDs²+vend,Ds·tDs−Ds=0 (43) The solution of this quadratic equation is given by: tDs=(vend,Ds±sqrt(vend,Ds²−2·aveh,Ds·Ds))/aveh,Ds (44) From the two resulting roots of the equation, the smaller positive non-complex root is selected. The V2X speed optimization constraints, such as the target vehicles15-n, the traffic control signals18-nand other traffic objects, are time-dependent obstacles. A comparison of the static vehicle speed constraint against the speed constraint due to a traffic control signal is represented in a two-dimensional (2D) optimization grid40shown inFIG.7A; and a corresponding three-dimensional (3D) optimization grid41shown inFIG.7B. The two-dimensional (2D) optimization grid40consists of a two-dimensional speed against distance map. The three-dimensional (3D) optimization grid41consists of a three-dimensional speed, distance and time map. In the illustrated example, a traffic control signal18-nis located at a position dk=400 m, ahead of a current vehicle position d0=0 m. The traffic control signals18-nimpose a speed constraint during a red phase when the progress of the host vehicle1would be impeded. The traffic control signals18-ndo not impose a speed constraint during a green phase when the progress of the host vehicle1would be at least substantially unhindered. The operating state of the traffic control signals18-nis represented in the 3D optimization grid41by a square wave42having a non-zero value during the red phase and a zero value during the green phase. In the illustrated arrangement, the red phase of the traffic control signals18-nhas a duration of 20 to 40 seconds. A static speed constraint due to road topology/speed limits is represented by a first continuous line43in the 2D optimization grid40, and by a continuous surface44within the 3D optimization grid41. The host vehicle1is travelling with an initial speed Vveh,0=100 km/h. The grid points X0, Xk−1, Xk, and XNrepresent analysis planes at distances d0=0 m, dk−1, dk, and dN=500 m, respectively. In the 2D optimization grid40, potential first and second trajectories45,46from point A to point B are shown. The first and second speed trajectories45,46are also shown in the 3-D optimization grid41. The first and second speed trajectories45,46end at points B1, B2respectively in the 3-D optimization grid41. The first trajectory45is valid since it results in the host vehicle1traversing the location of the traffic control signals18-nduring a green phase. The second trajectory46is invalid since it results in the host vehicle1traversing the location of the traffic control signals18-nduring a red phase. Thus, only the first trajectory45(extending from A to B1) is feasible with regards to traffic control signal constraints. If the analysis is performed within a 2-D plane, the time-varying speed constraints would only be considered during the transition from Xk−1to Xkin a forward dynamic programming optimization. Considering the time-varying speed constraint at dk, the already calculated optimized trajectories, from any grid point of X0to Xk−1in the 2-D optimization grid40, may no longer be optimal, or may be infeasible during the transition from Xk−1to Xk. To overcome this problem, the optimization space could be increased also to include time as an optimization state. However, this exponentially increases the possible transitions from grid plane X0to Xkand significantly increases computational burden.
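For reference, the root selection in Equation (44) can be written out directly. The following Python sketch is only an illustration of the kinematic relations above (SI units assumed); the constant-speed guard for a near-zero acceleration is an added convenience, not part of the disclosure.

import math

def distance_step_time(v_end, a, ds):
    """Time t_Ds to cover distance step ds, given the speed v_end at the end of
    the step and the constant acceleration a over the step (Equation 44).
    Returns (t_ds, v_start), selecting the smaller positive real root."""
    if abs(a) < 1e-9:
        return ds / v_end, v_end            # constant-speed limiting case
    disc = v_end ** 2 - 2.0 * a * ds
    if disc < 0.0:
        raise ValueError("no real root: transition is kinematically infeasible")
    roots = [(v_end - math.sqrt(disc)) / a, (v_end + math.sqrt(disc)) / a]
    t = min(r for r in roots if r > 0.0)    # smaller positive non-complex root
    v_start = v_end - a * t                 # Equation (42)
    return t, v_start

# Example: a 25 m distance step ending at 15 m/s after accelerating at 1 m/s^2
# gives t_ds of about 1.77 s and a start speed of about 13.2 m/s.
t_ds, v_start = distance_step_time(v_end=15.0, a=1.0, ds=25.0)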
An alternative problem formulation may utilise an approximation that avoids the need to add time as an optimization state. A separate cost function is added to the dynamic programming algorithm to penalize control actions that are likely to have undesired time trajectories. A level of optimality is potentially sacrificed using this approach, but it is believed that any such loss is acceptable, for example compared to uncertainty arising from traffic flow predictions. Conversely, the proposed solution further discourages frequent fluctuations in vehicle speed that may otherwise have been selected. By way of example, cost considerations can be added for traffic control signals18-nand target vehicles15-n. The dynamic programming algorithm is calculated forwards, i.e. from the current time at the beginning of algorithm execution, to determine whether or not time-variant constraints, such as the traffic control signal18-n, will be violated (i.e. whether or not one or more traffic control signal18-non the route R will impede progress of the host vehicle1). The operation of the controller2will now be described in relation to a scenario illustrated inFIG.8in which the host vehicle1is approaching a first traffic control signal18-1. A two-dimensional optimization grid50is generated consisting of a two dimensional speed against distance map. A first traffic control signal18-nis identified at a first location k on the route R. First and second acceleration limits for the host vehicle1are calculated to arrive at the first location K during a time period corresponding to a green phase of the first traffic control signal18-1(represented by a double-headed arrow l inFIG.8). The first acceleration limit aOVcorresponds to the host vehicle1arriving at the first traffic control signal18-1concurrent with the beginning of the green phase l, i.e. as the first traffic control signal18-1turns green (aTLl,green). The first acceleration limit aTLcorresponds to a constant acceleration or deceleration that would cause the host vehicle1to arrive at the first location K at a first arrival time corresponding to a time when the first traffic control signal18-1enters a first green phase. The second acceleration limit corresponds to the host vehicle1arriving at the first traffic control signal18-1contemporaneous with the end of the green phase l, i.e. as the first traffic control signal18-1turns red (aTLl,red). The second acceleration limit aTLcorresponds to a constant acceleration or deceleration that would cause the host vehicle1to arrive at the first location K at a second arrival time corresponding to a time when the first traffic control signal18-1exits the first green phase. The first and second acceleration limits aTLare calculated for the host vehicle1in respect of each grid point in the 2-D optimization grid50between the current position of the host vehicle1and the traffic control signal18-n. In the example illustrated inFIG.8, the first and second acceleration limits aTLare calculated at the grid points x04(corresponding to point A) and grid point x143; the first and second speed trajectories51,52for each these grid points is illustrated. The first and second acceleration limits define upper and lower speed trajectories51,52for the host vehicle1. The upper and lower speed trajectories51,52define a target operational speed band53. A cost is applied for any acceleration transitions that violate the first and second acceleration limits (i.e. 
a cost is applied if the actual acceleration of the host vehicle1is outside the range defined by the first and second acceleration limits). Consequently, the traffic control signal cost could prompt the dynamic programming algorithm to favour lower speed trajectories, for example when a higher speed trajectory will result in the host vehicle1arriving at the first traffic control signal18-1during a red phase which would necessitate the host vehicle1stopping. It will be appreciated that the operation of the first traffic control signal18-1is cyclical and the green and red phases alternate, thereby providing a plurality of opportunities for the host vehicle1to pass the first traffic control signal18-1during a green phase. By way of example, first and second green phases l1, l2are shown in the two-dimensional optimization grid50. The first and second acceleration limits aTLmay be calculated for a plurality of red phases and/or green phases. The calculation of the acceleration limit aTLwill now be described by way of example. The speed, distance and time are known for a grid point A. At a grid point C corresponding to the traffic control signal18-1transitioning to the green phase, the distance and time are known, but the speed of the host vehicle1is not known. By way of example, at the grid point A the first speed (Sp1) of the host vehicle1is 100 kph (27.8 m/s), the first distance (d1) is zero (0) metres and the first time (time1) is zero (0). At the grid point C, the second speed (Sp2) of the host vehicle1is unknown, the second distance (d2) is 400 metres and the second time (time2) is 20 seconds. The following kinematic equations can be used to determine the acceleration limit aTLand the second speed (Sp2): Sp2=SP1+alim×(time2−time1) d2−d1=(time2−time1)*(Sp2+Sp1)/2 One of the equations is solved for one unknown (either the acceleration limit aTLor the second speed (Sp2)) and the result substituted in the other equation. In the arrangement illustrated inFIG.8, a lower static optimisation limit is set (20 kph in the present example), but this is not essential. As the host vehicle1moves forward along the route R, the cost function is evaluated for each possible acceleration action. Each action is associated with a time interval (see equation 44). By adding the time interval for a given speed trajectory, the processor12determines whether each speed trajectory will result in the host vehicle1passing through the traffic control signal18-1during a green phase or a red phase. If the speed trajectory remains within the operational speed band53defined by the upper and lower speed trajectories51,52, the host vehicle1will pass through the traffic control signal18-1during a green phase. It will be understood that there may be a single traffic control signals18-non the route R, or there may be a plurality of traffic control signals18-non the route R. The following control strategy is used to determine the cost related to the one or more traffic control signal18-nlocated within the optimization horizon on the route R:1. For each traffic control signal18-nwithin the optimization horizon and its each green phase compute aTLl,greenand aTLl,red.2. For each transition that does not fall within a green phase, compute violation to closest limit aviol=|aveh-aTLlim|.3. Add cost to these violating transitions gTL=aviolWTL1arem,TLWTL2(1+MTLWTL3). 
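The worked example above (100 kph at grid point A, the signal 400 m ahead turning green after 20 seconds) can be checked with a few lines of Python. This is purely an illustration of the two kinematic equations; the function and variable names are chosen here for readability and are not taken from the patent.

def acceleration_limit_to_signal(sp1, d1, t1, d2, t2):
    """Solve the two kinematic equations above for the constant acceleration
    a_TL and the arrival speed sp2 at the traffic control signal:
        sp2 = sp1 + a_TL * (t2 - t1)
        d2 - d1 = (t2 - t1) * (sp1 + sp2) / 2
    """
    dt = t2 - t1
    sp2 = 2.0 * (d2 - d1) / dt - sp1   # from the distance equation
    a_tl = (sp2 - sp1) / dt            # substituted back into the speed equation
    return a_tl, sp2

# Grid point A: 100 km/h (27.8 m/s) at d1 = 0 m, t1 = 0 s.
# Grid point C: signal at d2 = 400 m entering its green phase at t2 = 20 s.
a_tl, sp2 = acceleration_limit_to_signal(sp1=27.8, d1=0.0, t1=0.0, d2=400.0, t2=20.0)
# a_tl is approximately -0.78 m/s^2 and sp2 approximately 12.2 m/s, i.e. the host
# vehicle must decelerate gently to arrive just as the green phase begins.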
The cost function gTLincreases the associated cost as the violation aviolincreases, the cost decreases as the distance remaining to the traffic control signal drem,TLincreases and as the number of traffic control signals between the currently considered traffic control signal and the host vehicle1MTLincreases. The weightings of these different considerations can be tuned with the following coefficients: WTL1∈(0, ∞), WTL2∈[0,∞) and WTL3∈[0,∞). At least in certain embodiments, the cost related to each traffic control signal18-ndecreases as the distance between the host vehicle1and the traffic control signal18-nincreases. A similar cost function can be applied with regards a target vehicle15-n, for example to calculate a target speed trajectory band for the host vehicle that avoids approaching a target vehicle15-nin front of the host vehicle1with a large speed difference. The progress of the target vehicle15-nalong the route R is predicted, for example utilising the model described herein with reference toFIG.6. The application of a suitable cost generates a speed profile that results in the host vehicle1gradually reducing speed to maintain a target headway between the host vehicle and the target vehicle15-n. At least in certain embodiments, a gradual approach to the target vehicle15-nmay be more energy efficient as it allows increased coasting and mitigates the need to use friction brakes. This can be done by first making a prediction of the speed of the target vehicle(s)15-nwithin the optimization horizon. This prediction is used in a cost function to compute how each transition would affect the headway between the host vehicle1and the target vehicle15-n. The target vehicle15-nis identified at a first location K and the speed of the target vehicle15-ndetermined at the first location K. The target vehicle15-nin the present embodiment is assumed to be travelling at a constant speed for the purposes of predicting its movement along the route R. A first acceleration limit aOVis calculated for the host vehicle1. It will be understood that other techniques may be implemented to model movement of the target vehicle15-n, for example comprising acceleration/deceleration and/or local infrastructure, such as traffic control signals. The first acceleration limit aOVdefines a constant acceleration or deceleration for the host vehicle1which will result in the host vehicle1arriving at the first location K at a first arrival time with a vehicle speed which is substantially equal to or less than the speed of the target vehicle15-nat the first location K. The first arrival time is selected to provide a target headway between the host vehicle1and the target vehicle15-nwhen the host vehicle1arrives at the first location K. The first acceleration limit is calculated for the host vehicle1in respect of each grid point in a 2-D optimization grid. The acceleration limit is used to determine a target operational speed band for the host vehicle1. If the first acceleration limit is violated (i.e. the actual acceleration of the host vehicle1differs from the first acceleration limit), a cost is applied to the optimized speed profile. In the present embodiment, if the deceleration of the host vehicle1is less than the acceleration limit aOV, a cost is applied as the host vehicle1will arrive at the first location K with a higher speed than that of the target vehicle15-n). 
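The acceleration limit aOV described above reduces to a single kinematic expression: the constant acceleration that takes the host vehicle from its current speed to the target vehicle's speed over the gap remaining once the target headway is subtracted. The sketch below is illustrative only; it assumes the simplified constant-speed prediction of the target vehicle used in the examples that follow, and the gap and headway values are placeholders.

def acceleration_limit_for_lead_vehicle(v_host, v_target, gap, headway=15.0):
    """Constant acceleration that brings the host from v_host to v_target over
    the distance (gap - headway), using v^2 = v0^2 + 2*a*d."""
    d = max(gap - headway, 1.0)          # keep a usable distance for the sketch
    return (v_target ** 2 - v_host ** 2) / (2.0 * d)

# Example: host at 50 km/h approaching a lead vehicle at a constant 20 km/h.
a_ov = acceleration_limit_for_lead_vehicle(v_host=50 / 3.6, v_target=20 / 3.6,
                                           gap=120.0, headway=15.0)
# a_ov is approximately -0.77 m/s^2, a required deceleration; decelerating less
# strongly than this limit attracts the target vehicle cost described next.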
If the deceleration of the host vehicle1is greater than or equal to the acceleration limit aOV, no cost is applied as the host vehicle1will approach the target vehicle15-ngradually with a smaller speed differential, thereby reducing or avoiding harsh or reactive deceleration. A control strategy to determine the cost related to the target vehicle15-n(referred to herein as the target vehicle cost gOV) is as follows: 1. Compute an acceleration aOVand a distance vector drem,hw,minfor the target vehicle15-n. 2. For each transition, compute the violation of the acceleration limit aviol=|aveh−aOV|. 3. Add the target vehicle cost gOVto each violating transition: gOV=(1−drem,hw,min/dhw,pen,max)·WOV1·aviol. 4. Set the target vehicle cost gOV=0 when the distance to the minimum headway is large, i.e. drem,hw,min>dhw,pen,max, or if aveh<aOV. In this scenario, the target vehicle cost gOVis the cost associated with the target vehicle15-n, drem,hw,minis the distance from the minimum headway to the other vehicle, dhw,pen,maxis the distance beyond which no penalties related to the target vehicle15-nare applied, and WOV1∈[0,∞) is a penalty coefficient determining the overall importance of the target vehicle cost gOV. The target vehicle cost gOVdecreases as the distance between the host vehicle1and the target vehicle15-nincreases. The overall speed optimization cost function gk(xk,uk,ωk) can now be augmented with the traffic control signal cost gTLand the target vehicle cost gOV: gk(xk, uk, ωk)=Wtime·t+Wacc·aveh²+Wroad·Froad+gTL+gOV (45) The operation of the controller2will now be described in relation to a first lead vehicle scenario illustrated inFIG.9. The host vehicle1is approaching from behind a first target vehicle15-1which is travelling along the route R. A two-dimensional optimization grid60is generated consisting of a two-dimensional speed against distance map. The two-dimensional optimization grid60relates a distance from the host vehicle1along the route R to the speed of the host vehicle1. The first target vehicle15-1is identified at a first location K on the route R. The progress of the first target vehicle15-1along the route R is predicted assuming a constant speed (for example determined via V2V communication or using on-board sensors on the host vehicle1). A first acceleration limit aOVis calculated for the host vehicle1to determine a first speed trajectory61for controlling the host vehicle1to arrive at the first location K at a speed which is less than or equal to the determined speed of the first target vehicle15-1. The first acceleration limit aOVcorresponds to a constant acceleration or deceleration that would cause the host vehicle1to arrive at the first location K at a first arrival time with a vehicle speed which is substantially equal to or less than the speed of the target vehicle15-nat the first location K. The first arrival time is selected to provide a target headway62between the host vehicle1and the target vehicle15-nwhen the host vehicle1arrives at the first location K. In the arrangement illustrated inFIG.9, this is implemented by calculating the first acceleration limit aOVfor a position which is offset relative to the first location K by a distance corresponding to a target headway62. The first acceleration limit aOVis calculated at each grid point in the two-dimensional optimization grid60. A cost is applied based on a deviation of the actual acceleration of the host vehicle1from the acceleration limit aOV. The cost is determined in dependence on the magnitude of the deviation.
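Before continuing with the FIG. 9 example, the two penalty terms added in Equation (45) can be collected into a short sketch. The target vehicle term follows steps 2 to 4 above; the exact functional form of the traffic control signal term gTL is not reproduced here, so the expression below is only a plausible reconstruction of the qualitative behaviour described for it (growing with the violation, shrinking with remaining distance and with the number of intervening signals). All weight and threshold values are placeholders.

def traffic_light_cost(a_viol, d_rem_tl, m_tl, w_tl1=1.0, w_tl2=0.5, w_tl3=0.2):
    """Illustrative reconstruction of the traffic control signal penalty gTL:
    grows with the acceleration-limit violation a_viol, shrinks as the
    remaining distance d_rem_tl to the signal grows and as the number m_tl of
    signals closer to the host vehicle grows."""
    return (a_viol * w_tl1) / (max(d_rem_tl, 1.0) ** w_tl2 * (1.0 + m_tl * w_tl3))

def target_vehicle_cost(a_veh, a_ov, d_rem_hw_min, d_hw_pen_max=80.0, w_ov1=1.0):
    """Target vehicle penalty gOV from steps 2-4 above: zero when the distance
    to the minimum headway is large or the host decelerates at least as hard
    as the limit a_ov."""
    if d_rem_hw_min > d_hw_pen_max or a_veh < a_ov:
        return 0.0
    a_viol = abs(a_veh - a_ov)
    return (1.0 - d_rem_hw_min / d_hw_pen_max) * w_ov1 * a_viol

# Both penalties are simply added to the stage cost of Equation (45).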
An acceleration of the host vehicle1which is greater than the acceleration limit aOVis penalised. The distance vector drem,hw,mindirectly affects the cost. By way of example, the initial speed of the host vehicle1may be 50 km/h and the first target vehicle15-1may have a constant speed of 20 km/h. At the starting position, the acceleration limit aOVcorresponds to a constant deceleration that would cause the host vehicle1to slow down from 50 km/h to 20 km/h while moving to the initial position of the first target vehicle15-1. As illustrated inFIG.9, the headway62is maintained between the host vehicle1and the first target vehicle15-1as a safety consideration. As speeds of the host vehicle1and the first target vehicle15-1are at least substantially equal to each other, the headway62remains constant. In order to avoid an unnecessarily large headway62, a small violation of the acceleration limit aOVmay be permitted. Thus, no cost may be applied for a deceleration of the host vehicle1which is lower than the acceleration limit aOVwithin a predetermined margin, for example expressed as a proportion of the acceleration limit aOV. The dynamic programming algorithm may flag any trajectory as infeasible which would result in the host vehicle1getting closer to the first target vehicle15-1than a predetermined minimum headway. In this example, the first speed trajectory61defines an upper limit of a target operational speed trajectory band63. The operational speed trajectory band63is the area below the first speed trajectory61shown inFIG.9. Other static constraints could be applied to reduce the target speed trajectory band. The first target vehicle15-1may hinder or impede progress of the host vehicle1depending on the location on the route R where the host vehicle1encounters the first target vehicle15-1. For example, the host vehicle1may be hindered if the first target vehicle15-1is encountered on a section of road which is favourable for performing unfavourable for performing an overtaking manoeuvre, for example a section of road having a single lane or where overtaking is not permitted. Conversely, the host vehicle1may continue substantially unhindered if the first target vehicle15-1is encountered on a section of road which is favourable for performing an overtaking manoeuvre, for example a section of road or highway having multiple lanes. The operation of the controller2will now be described in relation to second lead vehicle scenario illustrated inFIG.10. The controller2is configured to identify an overtaking opportunity64, for example corresponding to a section of the route R favourable for performing an overtaking manoeuvre. As illustrated inFIG.10, the overtaking opportunity64may be defined in the two-dimensional optimization grid60as extending over a predetermined distance relative to the current location of the host vehicle1. The overtaking opportunity64is identified as extending between a first location K1and a second location K2(represented in the two-dimensional optimization grid60as respective first and second distances relative to the current location of the host vehicle1). A first acceleration limit aOVis used to determine a first speed trajectory65to control the host vehicle1to arrive at the first location K1at a first arrival time which is the same as or later than the time that the first target vehicle15-1will arrive at the first location K1. 
A second acceleration limit aOVis used to determine a second speed trajectory66to control the host vehicle1to arrive at the second location K2at a second arrival time which is the same as or before the time that the first target vehicle15-1will arrive at the second location K2. The first and second acceleration limits aOVeach define a constant acceleration or deceleration for the host vehicle1. The first and second acceleration limits aOVare calculated at each grid point in the two-dimensional optimization grid60. The first and second speed trajectories65,66define a target operational speed band67for the host vehicle1. As shown inFIG.10, the target operational speed band67is bounded by the first and second speed trajectories65,66. A cost is applied for any acceleration transitions that violate the first and second acceleration limits aOV. The target vehicle cost could prompt the dynamic programming algorithm to apply a bias in favour of a lower speed trajectory, for example when a higher speed trajectory will result in the host vehicle1approaching the first target vehicle15-1before the identified overtaking opportunity64. Conversely, the target vehicle cost could prompt the dynamic programming algorithm to apply a bias in favour of a higher speed trajectory, for example when a lower speed trajectory will result in the host vehicle1approaching the first target vehicle15-1after the identified overtaking opportunity64. The processor12determines that the target vehicle15-1is not relevant if the overtake opportunity is taken, but continues to monitor the target vehicle15-1if the overtaking opportunity is missed. Rather than an overtaking opportunity64, the route R may comprise an intersection and the processor12may determine that the host vehicle1will encounter the first target vehicle15-1at the intersection. Again, the time that the host vehicle1arrives at and/or exits the intersection may determine whether progress is hindered by the first target vehicle15-1. Hybrid Powertrain Optimization The operation of the hybrid powertrain control unit21will now be described. The hybrid powertrain control unit21receives: the array [dVveh,opt,hor, Vveh,opt,hor], which contains the optimized vehicle speed trajectory Vveh,opt,horat the corresponding distance vector dVveh,opt,hor; the array θroad,vec, the road-load force coefficients cFrl, and an estimate of auxiliary consumer energy Edcdc,est, in order to account for them in the optimization; the current SOC of the traction battery6to set the optimization initial condition; and the array [dSOC, SOCtarget], which contains the SOC target SOCtargetat the corresponding distance vector dSOC. The cost function for the hybrid powertrain optimization is set as: gk(xk, uk, ωk)=Wf·ṁf+WSOC·|SOCtrgt−SOC|+Wu3·|u3| (46) with State: xk=[SOC, κgr]T (47) Control input: uk=[u1, u2, u3]T (48) Exogenous input: ωk=[Edcdc,est, θroad,vec, Tice, THV, cFrl] (49) where Wf, WSOC, and Wu3are the cost weights associated with the fuel economy, SOC and gear cost terms, and ṁfdenotes the fuel mass flow rate. The SOC term is introduced to ensure the SOC is sustained, and for more flexibility could be made adaptable to SOC and SOCtrgt, i.e. WSOC=fsoc(SOC, SOCtrgt). SOCtrgtis provided by the route-based predictive optimizer. The gear cost term Wu3is applied to discourage frequent gear shifts. The vehicle speed control unit20determines the target operational speed band67in dependence on the upper and lower speed trajectories65,66.
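The powertrain stage cost of Equation (46) can be illustrated with a few lines. The fuel-rate values and weights below are hypothetical, and in the real system this cost is evaluated inside the dynamic programming recursion over the states and inputs listed above rather than as a standalone function.

def powertrain_stage_cost(fuel_rate_g_per_s, soc, soc_target, gear_cmd,
                          w_f=1.0, w_soc=50.0, w_u3=0.1):
    """Stage cost of Equation (46): fuel use, an SOC-sustaining term and a gear
    shift penalty. gear_cmd is u3 in {-1, 0, +1} (downshift / hold / upshift)."""
    return (w_f * fuel_rate_g_per_s
            + w_soc * abs(soc_target - soc)
            + w_u3 * abs(gear_cmd))

# Example: compare holding the current gear against an upshift for one step.
cost_hold = powertrain_stage_cost(fuel_rate_g_per_s=1.8, soc=0.52,
                                  soc_target=0.55, gear_cmd=0)
cost_upshift = powertrain_stage_cost(fuel_rate_g_per_s=1.6, soc=0.52,
                                     soc_target=0.55, gear_cmd=+1)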
The target operational speed band67is output from the vehicle speed control unit20as a first output signal SOUT1to a Vehicle Motion Controller (VMC)36(shown inFIG.2) responsible for the trajectory planning and control of the host vehicle1. The VMC36arbitrates between various trajectories and compensates for any speed deviations from the target trajectory. The VMC36generates a propulsion torque request suitable for maintaining the host vehicle1within the target operational speed band67. The propulsion torque request is output by the VMC36as a second output signal SOUT2to a Vehicle Supervisory Controller (VSC)37. The VSC37is configured to intervene to ensure safety, for example if it determines that operating within the target operational speed band67could result in a collision, a speed limit violation or a traffic light violation. A closed-loop controller in the VSC37is used to correct for any discrepancies in the demanded torque request. At least in certain embodiments, the techniques described herein for generating a target operational speed band offer particular advantages, including improved energy efficiency. One reason is the reduced number of times that the host vehicle1is required to stop, for example at the traffic control signals18-n. The host vehicle1described herein has PHEV architecture which is capable of high power energy regeneration during a braking phase; however, anticipating a stopping event and starting to decelerate the host vehicle1earlier is more efficient, for example the engine3could be disengaged and stopped. It is also to be noted, that there is a two-path efficiency loss (motor/inverter/transmission/traction battery), from regenerating and then re-using the energy at a later stage. The techniques may offer larger benefits on vehicles having only an ICE, or a Mild Hybrid Electric Vehicle (MHEV), where there is no or limited regeneration capability. Furthermore, the requirement to decelerate the host vehicle1may be anticipated sooner, even when it is deemed appropriate the host vehicle1. The vehicle speed control unit20provides additional benefit as it is better able to adapt the powertrain control strategy anticipatively based on the expected speed profile rather than only knowledge of the current instantaneous vehicle speed. For example, if the target operational speed band contains a deceleration, then the powertrain strategy may for example turn the engine off early because it is determined in advance that the host vehicle1will start decelerating and that no propulsive torque is needed from the ICE3. The increased use of the ERAD as the sole source of propulsion torque (i.e. operating as an EV), also facilitates mild charging of the traction battery6, for example at low driver torque demands which shifts the engine torque to a more efficient point. Dynamic programming is chosen as the optimisation method described herein due to the optimality of its results and its flexibility to be able to handle challenging non-linear problems such as the one considered here. Typically the computational effort required to perform dynamic programming is high, in particular when the number of model states and control inputs increase. The technique(s) described herein reduce model dimensionality, thereby reducing the amount of model evaluations required to optimise the target speed trajectories, as well as concentrating the optimization grid points to areas where accuracy is most needed. 
Further reductions in computational burden may be achieved by decoupling the speed optimisation from the powertrain usage optimisation. This modular approach may facilitate application of the techniques described herein across different vehicle architectures, including conventional vehicles, different hybrid architectures and electric vehicles. The speed optimisation stage is mostly independent of powertrain usage decisions, and only includes high-level vehicle parameters such as mass. Its main optimisation goals are minimizing trip time, anticipating road events ahead (such as traffic lights) while considering traffic rules such as speed limits as well as drivability constraints such as acceleration limits. The powertrain optimization stage contains a much more detailed model of the specific vehicle architecture and is responsible for deciding the relevant control decisions for that architecture so that the optimized speed profile is followed. For example, for a typical parallel hybrid the control decisions may be the torque split between the engine and the electric machine, as well as the gear selection. While the speed and powertrain optimisation procedures are mostly decoupled, some considerations about the powertrain can still be taken in the speed optimisation of the first stage. For example, depending on the current SOC level the algorithm cost function weights may be adapted to encourage certain types of speed profiles, for instance to increase SOC charging opportunities. The optimisation algorithm described herein combines inputs including: traffic control timing, behaviour of other vehicles, drivability considerations as well as road profile and traffic signage. The information originates from a variety of sources which may include V2I communication with traffic lights and other infrastructure, V2V communication with other vehicles, communication with an e-Horizon digital map database and in-vehicle sensors. In addition, the optimisation algorithm may consider a driver identification algorithm to improve ability to predict actions of the host vehicle's driver (e.g. by having observed the driver's past behaviour). Not all of the input information is directly usable for the algorithm. For example, in terms of V2V communication, other vehicles are typically sending out their current location and movements. However, as the optimisation algorithm is performing optimisation of the future vehicle actions, the V2V information needs to be extended by a prediction of how other vehicles are likely to behave in the future. Such a prediction may be done by means of formulating a set of rules that determines the predicted behaviour, for example specifying that target vehicles should follow speed limits and keep a safe distance to their preceding vehicles. The results of the optimisation may be used in a semi-autonomous longitudinal control feature that directly actuates the optimized speed trajectory and/or powertrain usage optimisation. In such a scheme, the optimized speed and powertrain usage profiles are sent directly to a controller that is responsible for final decision on the powertrain actions. The behaviour of other vehicles may differ from their predicted behaviour, for example a target vehicle15-nin front of the host vehicle1may decelerate unexpectedly. In such a case the vehicle speed control unit20may default back to a conventional radar-based ACC that would maintain a safe headway between the host vehicle1and the target vehicle15-n. 
Alternatively, or in addition, the optimisation results can be used in a driver-advisory feature that recommends actions to the driver, who remains in control of the longitudinal motion. In this scenario, the optimised speed profile may be used to recommend actions that would be beneficial for energy efficiency, for example to accelerate to a certain speed or to lift off the accelerator pedal. In such a case, it is likely that there will be a difference between the original optimized speed profile and the one resulting from the driver actions. It is therefore important that the optimisation results are adapted to the new situation, either with additional logic that compares the original planned trajectory and the actual one, or by simply rerunning the optimisation. It will be appreciated that various modifications may be made to the embodiment(s) described herein without departing from the scope of the appended claims.
11858514 | DETAILED DESCRIPTION This disclosure is directed to techniques for generating top-down scene data for use in testing or simulating autonomous driving systems in a variety of driving situations and environments. In some examples, a generator component receives two-dimensional input data and receives map data associated with an environment. Based on the two-dimensional input data and the map data, the generator component generates top-down scene data. A discriminator component evaluates the generated top-down scene data to determine whether the generated top-down scene is real or generated by the generator component. Feedback based on the evaluation is provided to the generator component to improve the quality of the top-down scenes it generates. In some examples the generator component is a generative adversarial network (GAN) component. A GAN is a machine learning framework that uses multiple neural networks that compete with each other and, as a result of the competition, improve operation of the components in the network. As described herein, the generator component can compete with a discriminator component such that the operation of both the generator component and the discriminator component improve over time based on feedback of the competition to each component. In some examples, a first convolutional neural network (CNN) can receive multi-dimensional input data and map data associated with an environment. A top-down scene can be generated using the first CNN and based at least in part on the multi-dimensional input data and the map data. Scene data that includes the generated top-down scene and a real top-down scene is input to a second CNN. The second CNN can create binary classification data indicative of the individual scene appearing to be generated or real. The binary classification data can be provided as a loss to the first CNN and the second CNN. In some examples, the generated scene data may include object position data, object velocity data, and object state data, such as running/walking, vehicle lights, traffic light status, open door, and the like. In particular examples, a simulation scenario is generated based on the generated top-down scene. A response of a simulated vehicle controller is determined based at least in part on executing the simulation scenario. In some examples, a system may receive scene data associated with an environment proximate a vehicle. A CNN can evaluate the received scene data and determines whether the received scene data is real scene data to a scene generated by a generator component. If the received scene data is determined to be generated by the generator component, the system can generate a caution notification indicating that a current environmental situation is different from any previous situations. The caution notification may be communicated to a vehicle system and/or a remote vehicle monitoring system. The generated top-down scenes may be used when training or simulating an autonomous driving system. The generator component can generate any number of top-down scenes for training and simulation. These generated top-down scenes can be created faster and at a lower cost than capturing actual environment data using physical sensors and the like while still maintaining integrity (e.g., appearing to be a plausible scenario that may occur in a real environment). Additionally, the generator component can generate top-down scenes that are unusual and may be difficult to capture in an actual environment. 
Additionally, the generator component can generate specifically requested environments, such as low light on a wet roadway with multiple obstacles at specific locations. Thus, the generator component may create top-down scenes that address specific situations that need to be simulated or tested. The techniques described herein can be implemented in a number of ways. Example implementations are provided below with reference to the following figures. Although discussed in the context of an autonomous vehicle, the methods, apparatuses, and systems described herein can be applied to a variety of systems and are not limited to autonomous vehicles. In another example, the techniques can be utilized in any type of vehicle, robotic system, or any system using data of the types described herein. Additionally, the techniques described herein can be used with real data (e.g., captured using sensor(s)), simulated data (e.g., generated by a simulator), or any combination of the two. FIG.1is a schematic diagram illustrating an example implementation100to generate top-down scene data based on various inputs, in accordance with examples of the disclosure. As illustrated inFIG.1, a generative adversarial network (GAN) component102receives one or more of input data104, map data106, or vehicle data108. In some examples, input data104may be random two-dimensional input data (e.g., random two-dimensional vector data) that can be used as a seed during the training and/or operation of GAN component102. As discussed herein, the training of GAN component102may teach it to generate top-down scenes that are highly realistic (e.g., for the purposes of simulation). In some examples, these scenes generated by GAN component102may be used as simulation environments (or scenarios) when simulating the operation of autonomous vehicles or other systems. In some examples GAN component102may also receive safety surrogate metrics128, which may include data related to adverse events such as collisions, “near collision” situations, or other dangerous situations associated with the input data104, the map data106, and/or the vehicle data108. In particular examples, the safety surrogate metrics128may identify a safety risk, a degree of collision risk, a time to collision metric, or similar information. In some examples, when instructing GAN component102to generate scene data, the instructions may request scenes that are related to collisions or other dangerous situations. Examples of generating and applying safety information and safety metrics are provided in U.S. patent application Ser. No. 17/210,101, titled “Fleet Dashcam System For Autonomous Vehicle Operation,” filed Mar. 23, 2021, the entirety of which is herein incorporated by reference for all purposes. As shown inFIG.1, GAN component102may also receive map data106that can include various information related to an environment, such as an environment within which an autonomous vehicle may be operating. For example, map data106may include information related to objects in the environment, positions of the objects, direction of movement of the objects, velocity of the objects, roads in the environment, and the like. In some implementations, map data106may include information related to an autonomous vehicle navigating in the environment. Additionally, some map data106may include data from any number of autonomous vehicles, where the data is logged by the autonomous vehicles during their operation in different types of environments. 
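As a rough picture of how such a conditional generator might be driven by map data, the Python sketch below wires a placeholder network to a random latent vector and a rasterised map and produces a multi-channel top-down grid; varying the map raster yields scenes at different locations. The layer sizes, channel meanings and tensor shapes are invented for illustration and are not taken from this disclosure.

import torch
import torch.nn as nn

LATENT_DIM, MAP_CHANNELS, SCENE_CHANNELS, GRID = 64, 3, 4, 128

# Placeholder conditional generator: the latent vector is broadcast over the
# grid, concatenated with the rasterised map and decoded into a multi-channel
# top-down scene (for example occupancy, heading, speed and class channels).
generator = nn.Sequential(
    nn.Conv2d(LATENT_DIM + MAP_CHANNELS, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, SCENE_CHANNELS, kernel_size=3, padding=1),
    nn.Sigmoid(),
)

def generate_scene(map_raster):
    """map_raster: (1, MAP_CHANNELS, GRID, GRID) -> (1, SCENE_CHANNELS, GRID, GRID)."""
    z = torch.randn(1, LATENT_DIM, 1, 1).expand(-1, -1, GRID, GRID)
    return generator(torch.cat([z, map_raster], dim=1))

# Varying the map raster yields generated scenes at different map locations.
scene = generate_scene(torch.zeros(1, MAP_CHANNELS, GRID, GRID))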
The systems and methods described herein can generate any number of top-down scenes at any location on a map by varying the map data provided to GAN component102. Vehicle data108shown inFIG.1(also referred to as autonomous vehicle data) may include a position, direction of movement, speed, and/or historic information regarding the preceding of one or more autonomous vehicles110. In some examples, vehicle data108can correspond to map data106and identifies a position of vehicle110within the environment described with respect to map data106. In some examples, inputting the vehicle data108to the GAN102can condition the output of the GAN based on the vehicle108to provide more realistic scenarios. As shown inFIG.1, GAN component102can generate top-down scene data112based on one or more of input data104, map data106, or vehicle data108. The top-down scene data112can be generated by GAN component102and does not necessarily represent an actual scene. Instead, the top-down scene data112can represent a hypothetical scene that can have characteristics that are highly realistic and may be used as simulation environments when simulating the operation of autonomous vehicles or other systems. As disclosed herein, top-down scene data112can be iteratively generated to provide a more realistic scene representation that can be congruent with logged scenario data and thus can emulate a realistic driving scene to a high level of confidence. In some examples, autonomous vehicles may be tested and simulated using the top-down scene data112, which may be more efficient than capturing actual data using a vehicle or other system capable of capturing actual environments. In some examples, generating top-down scene data112can generate a variety of scenarios that are virtually limitless to expansively test a vehicle controller for safety validation. Additionally, GAN component102can generate top-down scene data112that may be difficult to capture in actual environments, such as unusual weather conditions, unusual traffic conditions, unusual object behavior, and the like. In some examples, top-down scene data112may include occupancy and attribute information for objects within the generated top-down scene. In particular examples, top-down scene data112may include any type of data that may be contained in an actual captured top-down scene and/or any other data that may be useful in analyzing or evaluating the top-down scene. Additionally, top-down scene data112may include multi-channel image data or vectorized data. In the example ofFIG.1, the top-down scene data112includes an object114and an autonomous vehicle116. Both object114and autonomous vehicle116are illustrated as being on a roadway approaching the same intersection. In some examples, the GAN component102may generate sensor data associated with one or more vehicles, such as autonomous vehicle116. For example, the GAN component102may generate video data, still image data, radar data, lidar data, audio data, environmental data, or any other type of sensor data associated with the environment near a vehicle. In a particular example, the GAN component102may generate multiple streams of image data as might be captured by multiple image sensors positioned at different locations on the vehicle. In some examples, the top-down scene data112may be provided to a simulation component118that can simulate operation of autonomous vehicles or other systems. 
Simulation component118can generate multiple discrete instances (e.g., frames) of scenario data120used in the simulation process. In some examples, scenario data120may include a sequence of frames showing a scene at different points in time. As shown inFIG.1, scenario data120includes a first frame122at a first time, a second frame124at a second time, and a third frame126at a third time. The three frames122,124, and126show the same top-down scene data with object114and autonomous vehicle116at different times. For example, the first frame122shows object114and autonomous vehicle116as they are approaching an intersection. Both object114and autonomous vehicle116are moving toward the intersection as indicated by the arrows indicating the direction of travel. The second frame124shows object114and autonomous vehicle116at a later time, where object114has entered the intersection and autonomous vehicle116is still moving toward the intersection. The third frame126shows object114and autonomous vehicle116at a later time, where object114has continued traveling through the intersection and autonomous vehicle116has stopped short of the intersection (as indicated by a lack of an arrow associated with autonomous vehicle116). In some examples, the three frames122,124, and126may represent at least a portion of a simulation. Examples of generating scenario data are provided in U.S. patent application Ser. No. 16/457,679, titled “Synthetic Scenario Generator Based on Attributes,” filed Jun. 28, 2019, the entirety of which is herein incorporated by reference for all purposes. FIG.2is a schematic diagram illustrating an example implementation200to generate top-down scene data based on multi-channel data and/or vectorized data, in accordance with examples of the disclosure. As illustrated inFIG.2, top-down scene data112discussed above with respect toFIG.1, may be created by GAN component102using one or both of multi-channel scene data202and vectorized scene data204. In some examples, multi-channel scene data202represents portions of top-down scene data112with different types of information. As shown inFIG.2, a first channel206shows object114as a block212and shows autonomous vehicle116as a block214. These blocks212and214correspond to the location of object114and autonomous vehicle116, respectively. A second channel208identifies a map that corresponds to the intersection shown in top-down scene data112. A third channel210provides another representation of object114(represented as item216) and autonomous vehicle116(represented as item218). In some examples, vectorized scene data204represents portions of top-down scene data112with vector information. As shown inFIG.2, vectorized scene data204includes a first vector portion220that corresponds to the intersection shown in top-down scene data112. A second vector portion222corresponds to the lanes in the intersection shown in top-down scene data112. A third vector portion224corresponds to object114shown in top-down scene data112. A fourth vector portion226corresponds to autonomous vehicle116shown in top-down scene data112. As discussed herein, GAN component102may receive multi-channel scene data202and/or vectorized scene data204. GAN component102uses the received scene data (along with additional random two-dimensional data) to generate top-down scene data112. In some examples, top-down scene data112may be partially based on multi-channel scene data202and/or vectorized scene data204. But, top-down scene data112does not represent an actual scene. 
Instead, top-down scene data112can be a hypothetical scene with characteristics that are highly realistic. FIG.3illustrates an example process300for training a generator and a discriminator, in accordance with examples of the disclosure. As illustrated inFIG.3, input data302is provided to a generator component304. In certain examples, input data302includes a random two-dimensional vector and map data. The map data may include, for example, a map rendering of an environment, object position information, object velocity information, and the like. The map data may also be randomized. In some examples, generator component304is equivalent to GAN component102shown inFIG.1. As shown inFIG.3, generator component304generates a generated top-down scene306based on the input data302. As discussed herein, the generator component304can generate any type of data and is not limited to generating top-down scene306. For example, the generator component304may generate video data, still image data, radar data, lidar data, audio data, environmental data, or any other type of sensor data associated with the environment near a vehicle. The generated top-down scene306can be provided to a discriminator component310which can evaluate the generated top-down scene306with a real example scene308to determine whether the generated top-down scene306appears to be real or generated (e.g., unrealistic). In some examples, discriminator component310is trained using the output of a binary classifier component312. Since discriminator component310can be provided with both real and generated scene data, it learns to distinguish between real and generated scenes. In some implementations, if the generated top-down scene306is similar to real example scene(s)308, discriminator component310may be “tricked” into believing that the generated top-down scene306is a real scene. However, if the generated top-down scene306is not similar to real example scene(s)308, the evaluation by discriminator component310may determine that the generated top-down scene306is a generated scene. The determination of discriminator component310(e.g., real or generated) is provided to the binary classifier component312, which knows whether the generated top-down scene306is generated. In some examples, real example scene308is used as a ground truth for training purposes. As shown inFIG.3, binary classifier component312can provide feedback to generator component304. This feedback may include whether discriminator component310was tricked into believing that the generated top-down scene306was real. This feedback provides confirmation to generator component304that the generated top-down scene306was highly realistic. Alternatively, if discriminator component310correctly identified the generated top-down scene306as generated, the generator component304learns from that feedback to improve the realism of future generated top-down scenes306. When generator component304is initially being trained, it may produce generated top-down scenes306that are not realistic. In some examples, generator component304continues to learn based on feedback from binary classifier component312. Over time, generator component304will learn to produce more realistic generated top-down scenes306that are suitable for simulation and other purposes. As illustrated inFIG.3, binary classifier component312can provide feedback to discriminator component310. This feedback may include whether discriminator component310correctly identified generated top-down scene306as real or generated. 
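This two-way feedback corresponds to the standard adversarial training step, sketched below with binary cross-entropy losses and alternating updates. The networks here are minimal stand-ins chosen for brevity: the real components are convolutional networks conditioned on map data, and the layer sizes, learning rates and tensor shapes are illustrative assumptions rather than values from this disclosure.

import torch
import torch.nn as nn

# Minimal stand-in networks; map conditioning is omitted for brevity.
gen = nn.Sequential(nn.Linear(64, 16 * 16), nn.Sigmoid())   # latent -> flattened 16x16 scene
disc = nn.Sequential(nn.Linear(16 * 16, 1))                 # scene -> real/generated logit
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_scenes):
    """One adversarial step; real_scenes is a (batch, 256) tensor of flattened
    top-down scenes drawn from logged data."""
    batch = real_scenes.shape[0]
    fake_scenes = gen(torch.randn(batch, 64))

    # Discriminator update: real scenes labelled 1, generated scenes labelled 0.
    d_loss = (bce(disc(real_scenes), torch.ones(batch, 1))
              + bce(disc(fake_scenes.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: rewarded when the discriminator labels its output real.
    g_loss = bce(disc(fake_scenes), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()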
In some examples, discriminator component310continues to learn based on feedback from binary classifier component312. Over time, discriminator component310will learn to more accurately evaluate particular top-down scenes as real or generated. In some examples, discriminator component310implements a convolutional neural network that receives scene data and classifies the scene data as real or generated. Thus, the discriminator component310is trained to classify whether or not a scene comes from the same data as the training set. In some examples, generator component304and discriminator component310are trained simultaneously. In some examples, during the training process, discriminator component310can be presented with half generated top-down scenes (as discussed above) and half real top-down scenes (or any ratio of generated and real scenes). A label associated with each top-down scene (both generated and real) can indicate whether the top-down scene is real or generated. When discriminator component310outputs an incorrect classification, a gradient may be computed and discriminator component310can be updated to improve its accuracy with future data. Simultaneously, generator component304can be trained by considering scenes that discriminator component310classified as generated. Generator component304can use generated classification determination(s) to compute a loss and gradient which can, in turn, be used to improve generator component304's accuracy. Thus, both discriminator component310and generator component304may be trained and can be adversarial to each other. This training of both discriminator component310and generator component304can continue, for example, until the loss for both components310,304converges, at which point the generator component304may be considered as being trained. In some examples, discriminator component310can be executed by an autonomous vehicle or a remote vehicle monitoring system to identify situations where the autonomous vehicle is in a situation that is unusual (e.g., out of the ordinary) based on previously captured or analyzed situations. This use of discriminator component310is discussed further with respect toFIG.7. FIG.4illustrates an example process400for generating a top-down scene and evaluating that scene to determine whether the generated top-down scene is real or generated, in accordance with examples of the disclosure. The operations described herein with respect to the process400may be performed by various components and systems, such as the components illustrated inFIGS.1and3. By way of example, the process400is illustrated as a logical flow graph, each operation of which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations may represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions may include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined (or omitted) in any order and/or in parallel to implement the process400. 
In some examples, multiple branches represent alternate implementations that may be used separately or in combination with other operations discussed herein. At operation402, the process may include receiving two-dimensional input data. In some examples, the received two-dimensional input data includes one or more random two-dimensional vectors. At operation404, the process may include receiving map data associated with an environment. In some examples, the map data includes information related to objects and roadways in the environment. At operation406, the process may include generating a top-down scene based on the two-dimensional input data and the map data using a first convolutional neural network (CNN). In some examples, the first CNN is associated with GAN component102. In some examples, the first CNN is referred to as a generator component herein. At operation408, the process may include inputting, to a second CNN, scene data including the generated top-down scene and a real top-down scene. In some examples, the second CNN is referred to as a discriminator component herein. At operation410, the process may include evaluating the generated top-down scene and the real top-down scene using the second CNN. At operation412, the process may include receiving, from the second CNN, binary classification data indicating whether the generated top-down scene is real or generated based on the evaluation performed at operation410. At operation414, the process may include providing the binary classification data as feedback to the first CNN and the second CNN. In some examples, the feedback is identified as a loss to the first CNN and the second CNN. FIGS.5and6illustrate example procedures500and600for processing inputs applied to generator component304and example outputs generated by generator component304based on the inputs, in accordance with examples of the disclosure. FIG.5illustrates an example process500in which generator component304receives two-dimensional input data502and generates road network layers504, object occupancy layers506, and object attributes layers508based on the two-dimensional input data502. In some examples, the three generated layers (road network layers504, object occupancy layers506, and object attributes layers508) represent the generated top-down scene. Although two-dimensional input data502is shown inFIG.5, in some examples, the input data to generator component304can be multi-dimensional (e.g., N-dimensional). When applying multi-dimensional input to generator component304, the output from generator component304may have the same number of dimensions. As discussed herein, additional types of data may be provided to generator component304and are not limited to two-dimensional or multi-dimensional input data502. FIG.6illustrates an example process600in which generator component304receives two-dimensional input data602, random road network layers604, and vehicle status data606. Vehicle status data606can include historic or current autonomous vehicle position, speed, direction information. Based on the two-dimensional input data602, random road network layers604, and vehicle status data606, the generator component304generates object occupancy layers608and object attributes layers610. Any combination of two-dimensional input data602, random road network layers604, and vehicle status data606can optionally be provided to generator component304. 
In certain examples, generator component304may generate and randomize data corresponding to any of two-dimensional input data602, random road network layers604, or vehicle status data606. Although two-dimensional input data602is shown inFIG.6, in some examples, the input data to generator component304can be multi-dimensional (e.g., N-dimensional). When applying multi-dimensional input to generator component304, the output from generator component304may have the same number of dimensions. As discussed herein, additional types of data may be provided to generator component304and are not limited to two-dimensional input data602, random road network layers604, and vehicle status data606. FIG.7illustrates an example process700for comparing scene data proximate a vehicle with previously analyzed scene data to identify out of the ordinary situations, in accordance with examples of the disclosure. The operations described herein with respect to the process700may be performed by various components and systems, such as the components illustrated inFIGS.1and3. By way of example, the process700is illustrated as a logical flow graph, each operation of which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations may represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions may include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined (or omitted) in any order and/or in parallel to implement the process700. In some examples, multiple branches represent alternate implementations that may be used separately or in combination with other operations discussed herein. At operation702, the process may include receiving scene data associated with an environment proximate a vehicle. In some examples, the received scene data may be generated by a first CNN. In other examples, scene data may be obtained using one or more sensors associated with a vehicle. At operation704, the process may include inputting the scene data to a CNN discriminator associated with the vehicle. In some examples, the CNN discriminator was trained using a generator and a classification of the output of the CNN discriminator. Operation704may also receive an indication of whether the scene data is a generated scene or a captured scene. At operation706, the process may determine whether the scene data was indicated as a generated scene. If the received scene data is not indicated as a generated scene, then the process may return to702to receive the next scene data. In this situation, the received scene data is similar to previously analyzed scene data and, therefore, is not out of the ordinary. If, at operation706, the process determines that the received scene data is indicated as a generated scene, then the received scene data is out of the ordinary and the process branches to operation708. At operation708, the process may include generating a caution notification indicating that a current environmental situation proximate the vehicle is different from any previous situations. 
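By way of example and not limitation, one pass of the check performed at operations702-708may be arranged as in the following sketch. The callable interfaces for the discriminator and the notification mechanism are assumptions; any trained discriminator component and any vehicle or remote monitoring messaging channel could fill those roles.

def monitor_scene(scene_data, is_generated, send_caution):
    # is_generated: callable returning True when the trained discriminator
    # classifies the scene as generated, i.e., unlike previously analyzed scenes.
    # send_caution: callable that delivers a caution notification to vehicle
    # systems or a remote vehicle monitoring system.
    if is_generated(scene_data):
        send_caution("Current environmental situation differs from previous situations")
        return True   # out of the ordinary; operate in a more cautious mode
    return False      # scene resembles previously analyzed scenes; continue normally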
In certain examples, operation706can be performed by a discriminator component that has been trained as described herein. At operation710, the process may include communicating the caution notification to vehicle systems or remote vehicle monitoring systems. Since the current environmental situation is sufficiently different from any previous situations, the vehicle may need increased supervision to operate in a more cautious mode because it is navigating an out of the ordinary situation. In some examples, communicating the caution notification may include inputting an indication of the scene to a planning system associated with the vehicle. For example, the indication of the scene may indicate a high caution mode. Examples of systems and methods that provide guidance to a driverless vehicle are provided in U.S. Pat. No. 10,564,638, titled “Teleoperator Situational Awareness,” filed Jul. 7, 2017, the entirety of which is herein incorporated by reference for all purposes. At operation712, the process may include determining a vehicle action based on the caution notification. For example, the vehicle action may include controlling the vehicle (e.g., slowing down, increasing distance between objects and the vehicle in the environment), updating map data, identifying objects proximate the vehicle, adjusting confidence levels for various algorithms (e.g., classification algorithms, prediction algorithms, etc.), modifying a vehicle trajectory, slowing the vehicle, stopping the vehicle, and the like. In some examples, process700may, in response to determining that the received scene data is different from any previously received scene data, instruct the vehicle to log data associated with a vehicle status, log data associated with the environment proximate the vehicle, log the scene data, and the like. Additionally, process700may determine a risk associated with the scene data and train the first CNN based on the risk. As discussed herein, the first CNN may be a discriminator component of a trained GAN. In some examples, process700can receive sensor data from one or more sensors associated with the vehicle and determine the scene data based at least in part on the sensor data. In some implementations, a request for a command may be transmitted to a remote computing device, where the requested command may include a vehicle instruction or command related to a vehicle activity. In some examples, process700may input the indication of the scene to a prediction system associated with the vehicle along with the scene data, such that the indication may be used for future (e.g., downstream) processing. FIG.8depicts a block diagram of an example system800for implementing the techniques described herein. The vehicle802may include one or more vehicle computing devices804(also referred to as a vehicle computing device804or vehicle computing device(s)804), one or more sensor systems806, one or more emitters808, one or more communication connections810, at least one direct connection812, and one or more drive systems814. The vehicle computing device804may include one or more processors816and memory818communicatively coupled with the one or more processors816. In the illustrated example, the vehicle802is an autonomous vehicle; however, the vehicle802could be any other type of vehicle. 
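By way of example and not limitation, a determination of a vehicle action at operation712could be sketched as follows. The action names, the 0-to-1 risk scale, and the thresholds are placeholders chosen for illustration, not values prescribed by this disclosure.

def select_vehicle_actions(caution_notified, risk):
    # risk: assumed to be a normalized value in [0, 1] derived from the scene data.
    actions = []
    if caution_notified:
        actions += ["log_vehicle_status", "log_environment_data", "log_scene_data"]
        actions.append("reduce_speed")
        if risk > 0.5:
            actions.append("increase_distance_to_objects")
        if risk > 0.8:
            actions.append("request_command_from_remote_computing_device")
    return actions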
In the illustrated example, the memory818of the vehicle computing device804stores a localization component820, a perception component822, one or more maps824, one or more system controllers826, a prediction component828, a planning component830, and a GAN component832. Though depicted inFIG.8as residing in memory818for illustrative purposes, it is contemplated that the localization component820, the perception component822, the one or more maps824, the one or more system controllers826, the prediction component828, the planning component830, and the GAN component832may additionally, or alternatively, be accessible to the vehicle802(e.g., stored remotely). In at least one example, the localization component820may include functionality to receive data from the sensor system(s)806to determine a position and/or orientation of the vehicle802(e.g., one or more of an x-, y-, z-position, roll, pitch, or yaw). For example, the localization component820may include and/or request/receive a map of an environment and may continuously determine a location and/or orientation of the autonomous vehicle within the map. In some instances, the localization component820may utilize SLAM (simultaneous localization and mapping), CLAMS (calibration, localization and mapping, simultaneously), relative SLAM, bundle adjustment, non-linear least squares optimization, or the like to receive image data, lidar data, radar data, IMU data, GPS data, wheel encoder data, and the like to accurately determine a location of the autonomous vehicle. In some instances, the localization component820may provide data to various components of the vehicle802to determine an initial position of an autonomous vehicle for generating a trajectory and/or for generating or receiving map data, as discussed herein. In some instances, the perception component822may include functionality to perform object detection, segmentation, and/or classification. In some examples, the perception component822may provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle802and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, unknown, etc.). In additional or alternative examples, the perception component822may provide processed sensor data that indicates one or more characteristics associated with a detected entity (e.g., a tracked object) and/or the environment in which the entity is positioned. In some examples, characteristics associated with an entity may include, but are not limited to, an x-position (global and/or local position), a y-position (global and/or local position), a z-position (global and/or local position), an orientation (e.g., a roll, pitch, yaw), an entity type (e.g., a classification), a velocity of the entity, an acceleration of the entity, an extent of the entity (size), etc. Characteristics associated with the environment may include, but are not limited to, a presence of another entity in the environment, a state of another entity in the environment, a time of day, a day of a week, a season, a weather condition, an indication of darkness/light, etc. As shown inFIG.8, perception component822may include log data834that represents various data captured by systems and sensors of vehicle802and stored for future reference, such as analysis and simulation activities. The memory818may further include one or more maps824that may be used by the vehicle802to navigate within the environment. 
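By way of example and not limitation, the per-entity characteristics listed above could be carried in a structure such as the following; the field names and types are assumptions for illustration only and are not the actual output format of perception component822.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrackedEntity:
    x: float                               # global and/or local position
    y: float
    z: float
    roll: float                            # orientation
    pitch: float
    yaw: float
    entity_type: str                       # e.g., "car", "pedestrian", "cyclist", "unknown"
    velocity: float
    acceleration: float
    extent: Tuple[float, float, float]     # size (length, width, height)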
For the purpose of this discussion, a map may be any number of data structures modeled in two dimensions, three dimensions, or N-dimensions that are capable of providing information about an environment, such as, but not limited to, topologies (such as intersections), streets, mountain ranges, roads, terrain, and the environment in general. In some instances, a map may include, but is not limited to: texture information (e.g., color information (e.g., RGB color information, Lab color information, HSV/HSL color information), and the like), intensity information (e.g., LIDAR information, RADAR information, and the like); spatial information (e.g., image data projected onto a mesh, individual “surfels” (e.g., polygons associated with individual color and/or intensity)), reflectivity information (e.g., specularity information, retroreflectivity information, BRDF information, BSSRDF information, and the like). In one example, a map may include a three-dimensional mesh of the environment. In some instances, the map may be stored in a tiled format, such that individual tiles of the map represent a discrete portion of an environment, and may be loaded into working memory as needed, as discussed herein. In at least one example, the one or more maps824may include at least one map (e.g., images and/or a mesh). In some examples, the vehicle802may be controlled based at least in part on the map(s)824. In some examples, the one or more maps824may be stored on a remote computing device(s) (such as the computing device(s)842) accessible via network(s)840. In some examples, multiple maps824may be stored based on, for example, a characteristic (e.g., type of entity, time of day, day of week, season of the year, etc.). Storing multiple maps824may have similar memory requirements but increase the speed at which data in a map may be accessed. In at least one example, the vehicle computing device804may include one or more system controllers826, which may be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle802. These system controller(s)826may communicate with and/or control corresponding systems of the drive system(s)814and/or other components of the vehicle802. In some examples, the prediction component828may include functionality to generate one or more probability maps representing prediction probabilities of possible locations of one or more objects in an environment. For example, the prediction component828can generate one or more probability maps for vehicles, pedestrians, animals, and the like within a threshold distance from the vehicle802. In some instances, the prediction component828can measure a track of an object and generate a discretized prediction probability map, a heat map, a probability distribution, a discretized probability distribution, and/or a trajectory for the object based on observed and predicted behavior. In some instances, the one or more probability maps can represent an intent of the one or more objects in the environment. In some examples, the planning component830may include functionality to determine a path for the vehicle802to follow to traverse through an environment. For example, the planning component830can determine various routes and paths and various levels of detail. In some instances, the planning component830can determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location). 
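By way of example and not limitation, the tiled-map behavior described above, in which individual tiles represent discrete portions of an environment and are loaded into working memory as needed, might be sketched as follows; the tile keying scheme and loader interface are assumptions.

class TiledMap:
    def __init__(self, tile_size_m, load_tile):
        # load_tile: caller-supplied function that reads one tile from storage
        # (local or remote) given its integer (col, row) key.
        self.tile_size_m = tile_size_m
        self.load_tile = load_tile
        self.working_memory = {}

    def tile_at(self, x_m, y_m):
        key = (int(x_m // self.tile_size_m), int(y_m // self.tile_size_m))
        if key not in self.working_memory:      # load only when first needed
            self.working_memory[key] = self.load_tile(key)
        return self.working_memory[key]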
For the purpose of this discussion, a route can be a sequence of waypoints for traveling between two locations. As non-limiting examples, waypoints include streets, intersections, global positioning system (GPS) coordinates, etc. Further, the planning component830can generate an instruction for guiding the autonomous vehicle along at least a portion of the route from the first location to the second location. In at least one example, the planning component830can determine how to guide the autonomous vehicle from a first waypoint in the sequence of waypoints to a second waypoint in the sequence of waypoints. In some examples, the instruction can be a path, or a portion of a path. In some examples, multiple paths can be substantially simultaneously generated (i.e., within technical tolerances) in accordance with a receding horizon technique. A single path of the multiple paths in a receding data horizon having the highest confidence level may be selected to operate the vehicle. In other examples, the planning component830can alternatively, or additionally, use data from the perception component822and/or the prediction component828to determine a path for the vehicle802to follow to traverse through an environment. For example, the planning component830can receive data from the perception component822and/or the prediction component828regarding objects associated with an environment. Using this data, the planning component830can determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location) to avoid objects in an environment. In at least some examples, such a planning component830may determine there is no such collision free path and, in turn, provide a path which brings vehicle802to a safe stop avoiding all collisions and/or otherwise mitigating damage. In some examples, the GAN component832may include functionality to evaluate generated top-down scene data with real example scene data to determine whether the generated top-down scene is real or generated, as discussed herein. In some instances, aspects of some or all of the components discussed herein may include any models, algorithms, and/or machine learning algorithms. For example, in some instances, the components in the memory818(and the memory846, discussed below) may be implemented as a neural network. As described herein, an exemplary neural network is an algorithm which passes input data through a series of connected layers to produce an output. Each layer in a neural network may also comprise another neural network or may comprise any number of layers (whether convolutional or not). As may be understood in the context of this disclosure, a neural network may utilize machine learning, which may refer to a broad class of such algorithms in which an output is generated based on learned parameters. Although discussed in the context of neural networks, any type of machine learning may be used consistent with this disclosure. 
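By way of example and not limitation, selecting among the substantially simultaneously generated paths of the receding horizon technique described above could be reduced to the following sketch, where each candidate is assumed to be paired with a confidence level.

def select_path(candidate_paths):
    # candidate_paths: iterable of (path, confidence) pairs, where a path is a
    # sequence of waypoints covering the current horizon.
    best_path, _ = max(candidate_paths, key=lambda pair: pair[1])
    return best_path   # the highest-confidence path is used to operate the vehicle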
For example, machine learning algorithms may include, but are not limited to, regression algorithms (e.g., ordinary least squares regression (OLSR), linear regression, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS)), instance-based algorithms (e.g., ridge regression, least absolute shrinkage and selection operator (LASSO), elastic net, least-angle regression (LARS)), decisions tree algorithms (e.g., classification and regression tree (CART), iterative dichotomiser 3 (ID3), Chi-squared automatic interaction detection (CHAID), decision stump, conditional decision trees), Bayesian algorithms (e.g., naïve Bayes, Gaussian naïve Bayes, multinomial naïve Bayes, average one-dependence estimators (AODE), Bayesian belief network (BNN), Bayesian networks), clustering algorithms (e.g., k-means, k-medians, expectation maximization (EM), hierarchical clustering), association rule learning algorithms (e.g., perceptron, back-propagation, hopfield network, Radial Basis Function Network (RBFN)), deep learning algorithms (e.g., Deep Boltzmann Machine (DBM), Deep Belief Networks (DBN), Convolutional Neural Network (CNN), Stacked Auto-Encoders), Dimensionality Reduction Algorithms (e.g., Principal Component Analysis (PCA), Principal Component Regression (PCR), Partial Least Squares Regression (PLSR), Sammon Mapping, Multidimensional Scaling (MDS), Projection Pursuit, Linear Discriminant Analysis (LDA), Mixture Discriminant Analysis (MDA), Quadratic Discriminant Analysis (QDA), Flexible Discriminant Analysis (FDA)), Ensemble Algorithms (e.g., Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (blending), Gradient Boosting Machines (GBM), Gradient Boosted Regression Trees (GBRT), Random Forest), SVM (support vector machine), supervised learning, unsupervised learning, semi-supervised learning, etc. Additional examples of architectures include neural networks such as ResNet50, ResNet101, VGG, DenseNet, PointNet, and the like. In at least one example, the sensor system(s)806may include lidar sensors, radar sensors, ultrasonic transducers, sonar sensors, location sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., inertial measurement units (IMUs), accelerometers, magnetometers, gyroscopes, etc.), cameras (e.g., RGB, IR, intensity, depth, etc.), time of flight sensors, audio sensors, wheel encoders, environment sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), etc. The sensor system(s)806may include multiple instances of each of these or other types of sensors. For instance, the lidar sensors may include individual lidar sensors located at the corners, front, back, sides, and/or top of the vehicle802. As another example, the camera sensors may include multiple cameras disposed at various locations about the exterior and/or interior of the vehicle802. The sensor system(s)806may provide input to the vehicle computing device804. Additionally, or alternatively, the sensor system(s)806may send sensor data, via the one or more networks840, to the one or more computing device(s) at a particular frequency, after a lapse of a predetermined period of time, in near real-time, etc. The vehicle802may also include one or more emitters808for emitting light and/or sound, as described above. The emitters808in this example include interior audio and visual emitters to communicate with passengers of the vehicle802. 
By way of example and not limitation, interior emitters may include speakers, lights, signs, display screens, touch screens, haptic emitters (e.g., vibration and/or force feedback), mechanical actuators (e.g., seatbelt tensioners, seat positioners, headrest positioners, etc.), and the like. The emitters808in this example also include exterior emitters. By way of example and not limitation, the exterior emitters in this example include lights to signal a direction of travel or other indicator of vehicle action (e.g., indicator lights, signs, light arrays, etc.), and one or more audio emitters (e.g., speakers, speaker arrays, horns, etc.) to audibly communicate with pedestrians or other nearby vehicles, one or more of which comprising acoustic beam steering technology. The vehicle802may also include one or more communication connection(s)810that enable communication between the vehicle802and one or more other local or remote computing device(s). For instance, the communication connection(s)810may facilitate communication with other local computing device(s) on the vehicle802and/or the drive system(s)814. Also, the communication connection(s)810may allow the vehicle to communicate with other nearby computing device(s) (e.g., other nearby vehicles, traffic signals, etc.). The communications connection(s)810also enable the vehicle802to communicate with a remote teleoperation computing device or other remote services. The communications connection(s)810may include physical and/or logical interfaces for connecting the vehicle computing device804to another computing device or a network, such as network(s)840. For example, the communications connection(s)810may enable Wi-Fi-based communication such as via frequencies defined by the IEEE 802.11 standards, short range wireless frequencies such as Bluetooth, cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G, etc.) or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s). In at least one example, the vehicle802may include one or more drive systems814. In some examples, the vehicle802may have a single drive system814. In at least one example, if the vehicle802has multiple drive systems814, individual drive systems814may be positioned on opposite ends of the vehicle802(e.g., the front and the rear, etc.). In at least one example, the drive system(s)814may include one or more sensor systems to detect conditions of the drive system(s)814and/or the surroundings of the vehicle802. By way of example and not limitation, the sensor system(s) may include one or more wheel encoders (e.g., rotary encoders) to sense rotation of the wheels of the drive systems, inertial sensors (e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers, etc.) to measure orientation and acceleration of the drive system, cameras or other image sensors, ultrasonic sensors to acoustically detect objects in the surroundings of the drive system, lidar sensors, radar sensors, etc. Some sensors, such as the wheel encoders may be unique to the drive system(s)814. In some cases, the sensor system(s) on the drive system(s)814may overlap or supplement corresponding systems of the vehicle802(e.g., sensor system(s)806). 
The drive system(s)814may include many of the vehicle systems, including a high voltage battery, a motor to propel the vehicle, an inverter to convert direct current from the battery into alternating current for use by other vehicle systems, a steering system including a steering motor and steering rack (which may be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components, a stability control system for distributing brake forces to mitigate loss of traction and maintain control, an HVAC system, lighting (e.g., lighting such as head/tail lights to illuminate an exterior surrounding of the vehicle), and one or more other systems (e.g., cooling system, safety systems, onboard charging system, other electrical components such as a DC/DC converter, a high voltage junction, a high voltage cable, charging system, charge port, etc.). Additionally, the drive system(s)814may include a drive system controller which may receive and preprocess data from the sensor system(s) and to control operation of the various vehicle systems. In some examples, the drive system controller may include one or more processors and memory communicatively coupled with the one or more processors. The memory may store one or more components to perform various functionalities of the drive system(s)814. Furthermore, the drive system(s)814also include one or more communication connection(s) that enable communication by the respective drive system with one or more other local or remote computing device(s). In at least one example, the direct connection812may provide a physical interface to couple the one or more drive system(s)814with the body of the vehicle802. For example, the direct connection812may allow the transfer of energy, fluids, air, data, etc. between the drive system(s)814and the vehicle. In some instances, the direct connection812may further releasably secure the drive system(s)814to the body of the vehicle802. In some examples, the vehicle802may send sensor data to one or more computing device(s)842via the network(s)840. In some examples, the vehicle802may send raw sensor data to the computing device(s)842. In other examples, the vehicle802may send processed sensor data and/or representations of sensor data to the computing device(s)842. In some examples, the vehicle802may send sensor data to the computing device(s)842at a particular frequency, after a lapse of a predetermined period of time, in near real-time, etc. In some cases, the vehicle802may send sensor data (raw or processed) to the computing device(s)842as one or more log files. The computing device(s)842may include processor(s)844and a memory846storing a training component848, a simulation component850, and a GAN component852. In some examples, the training component848may include training data that has been generated by a simulator. For example, simulated training data may represent examples where testing audio sources in an environment, to provide additional training examples. In some examples, the simulation component850may simulate the operation of autonomous vehicles or other systems, as discussed herein. In particular examples, the GAN component852may evaluate generated top-down scene data with real example scene data to determine whether the generated top-down scene is real or generated, as discussed herein. 
The processor(s)816of the vehicle802and the processor(s)844of the computing device(s)842may be any suitable processor capable of executing instructions to process data and perform operations as described herein. By way of example and not limitation, the processor(s)816and844may comprise one or more Central Processing Units (CPUs), Graphics Processing Units (GPUs), or any other device or portion of a device that processes electronic data to transform that electronic data into other electronic data that may be stored in registers and/or memory. In some examples, integrated circuits (e.g., ASICs, etc.), gate arrays (e.g., FPGAs, etc.), and other hardware devices may also be considered processors in so far as they are configured to implement encoded instructions. Memory818and846are examples of non-transitory computer-readable media. The memory818and846may store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems. In various implementations, the memory may be implemented using any suitable memory technology, such as static random-access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory capable of storing information. The architectures, systems, and individual elements described herein may include many other logical, programmatic, and physical components, of which those shown in the accompanying figures are merely examples that are related to the discussion herein. In some instances, the memory818and846may include at least a working memory and a storage memory. For example, the working memory may be a high-speed memory of limited capacity (e.g., cache memory) that is used for storing data to be operated on by the processor(s)816and844. In some instances, the memory818and846may include a storage memory that may be a lower-speed memory of relatively large capacity that is used for long-term storage of data. In some cases, the processor(s)816and844may not operate directly on data that is stored in the storage memory, and data may need to be loaded into a working memory for performing operations based on the data, as discussed herein. It should be noted that whileFIG.8is illustrated as a distributed system, in alternative examples, components of the vehicle802may be associated with the computing device(s)842and/or components of the computing device(s)842may be associated with the vehicle802. That is, the vehicle802may perform one or more of the functions associated with the computing device(s)842, and vice versa. Example Clauses A. 
A system comprising: one or more processors; and one or more non-transitory computer-readable media storing instructions executable by the one or more processors, wherein the instructions, when executed, cause the system to perform operations comprising: receiving, at a first convolutional neural network (CNN), two-dimensional input data and map data of an environment; generating, using the first CNN and based at least in part on the two-dimensional input data and the map data, a generated top-down scene including occupancy and attribute information for objects within the generated top-down scene; inputting, to a second CNN, scene data comprising the generated top-down scene and a real top-down scene including occupancy and attribute information for objects within the real top-down scene; receiving, from the second CNN, binary classification data indicative of whether an individual scene in the scene data is classified as generated or classified as captured; and providing the binary classification data as a loss to the first CNN and the second CNN. B. The system of paragraph A, wherein: the attribute information for objects within the generated top-down scene includes at least one of object position data, object velocity data, or object state data. C. The system of paragraph A or B, the operations further comprising: generating a simulation scenario based on the generated top-down scene; and determining a response of a simulated vehicle controller based at least in part on executing the simulation scenario. D. The system of any of paragraphs A-C, wherein: the generated top-down scene includes at least one of multi-channel image data or vectorized data. E. The system of any of paragraphs A-D, the operations further comprising: providing safety surrogate metrics to the first CNN to condition the generated top-down scene. F. A method, comprising: receiving, at a generator component, multi-dimensional input data; generating, using the generator component and based at least in part on the multi-dimensional input data, a generated top-down scene; inputting, to a discriminator component, scene data comprising the generated top-down scene and a real top-down scene; receiving, from the discriminator component, binary classification data indicative of whether an individual scene in the scene data is classified as generated or classified as captured; and providing the binary classification data as a loss to the generator component and the discriminator component. G. The method of paragraph F, wherein: the generated top-down scene includes object position data associated with an object and velocity data associated with the object. H. The method of paragraph F or G, further comprising: generating a simulation scenario based on the generated top-down scene; and determining a response of a simulated vehicle controller based at least in part on executing the simulation scenario. I. The method of any of paragraphs F-H, wherein: the generated top-down scene includes at least one of multi-channel image data or vectorized data. J. The method of any of paragraphs F-I, further comprising: providing autonomous vehicle data to the generator component to generate the generated top-down scene. K. The method of paragraph J, further comprising: conditioning the generated top-down scene based on a state of an autonomous vehicle. L. The method of any of paragraphs F-K, wherein: the generator component includes a first convolutional neural network (CNN). M. 
The method of any of paragraphs F-L, wherein: the discriminator component includes a second CNN. N. The method of any of paragraphs F-M, further comprising: inputting map data to the generator component, wherein the map data includes information related to objects and roadways in an environment. O. The method of any of paragraphs F-N, wherein: the multi-dimensional input data includes random multi-dimensional vector data. P. One or more non-transitory computer-readable media storing instructions that, when executed, cause one or more processors to perform operations comprising: receiving, at a generator component, multi-dimensional input data and map data associated with an environment; generating, using the generator component and based at least in part on the multi-dimensional input data, a generated top-down scene; inputting, to a discriminator component, scene data comprising the generated top-down scene and a real top-down scene; receiving, from the discriminator component, binary classification data indicative of whether an individual scene in the scene data is classified as generated or classified as captured; and providing the binary classification data as a loss to the generator component and the discriminator component. Q. The one or more non-transitory computer-readable media of paragraph P, wherein: the generated top-down scene includes object position data associated with an object and velocity data associated with the object. R. The one or more non-transitory computer-readable media of paragraph P or Q, wherein the operations further comprise: generating a simulation scenario based on the generated top-down scene; and determining a response of a simulated vehicle controller based at least in part on executing the simulation scenario. S. The one or more non-transitory computer-readable media of any of paragraphs P-R, wherein the operations further comprise: providing autonomous vehicle data to the generator component to generate the generated top-down scene. T. The one or more non-transitory computer-readable media of paragraph S, wherein the operations further comprise: conditioning the scene data based on a state of an autonomous vehicle. U. A system comprising: one or more processors; and one or more non-transitory computer-readable media storing instructions executable by the one or more processors, wherein the instructions, when executed, cause the system to perform operations comprising: receiving scene data associated with an environment proximate a vehicle; inputting the scene data to a convolutional neural network (CNN) discriminator trained using a generator and a classification of an output of the CNN discriminator; receiving, from the CNN discriminator, an indication of whether the scene data is a generated scene or a captured scene; responsive to an indication that the scene data is a generated scene: generating a caution notification indicating that a current environmental situation is different from any previous situations; and communicating the caution notification to at least one of a vehicle system or a remote vehicle monitoring system. V. The system of paragraph U, wherein: during training of the CNN discriminator, binary classification data associated with the scene data is provided as a loss to the CNN discriminator. W. The system of paragraph U or V, wherein: the scene data includes multiple channels of top-down image data. X. 
The system of paragraph W, wherein: the multiple channels of top-down image data include an object, position data associated with the object, and velocity data associated with the object. Y. The system of any of paragraphs U-X, the operations further comprising:determining a vehicle action based on the caution notification, wherein the action includes at least one of controlling the vehicle, updating map data, or identifying an object proximate the vehicle. Z. A method comprising: receiving scene data associated with an environment proximate a vehicle; inputting the scene data to a convolutional neural network (CNN) discriminator trained using a generator and a classification of an output of the CNN discriminator; receiving, from the CNN discriminator, an indication of whether the scene data is a generated scene or a captured scene; responsive to an indication that the scene data is a generated scene: generating a caution notification indicating that a current environmental situation is different from any previous situations; and communicating the caution notification to at least one of a vehicle system or a remote vehicle monitoring system. AA. The method of paragraph Z, wherein: the scene data includes multiple channels of top-down image data. AB. The method of paragraph AA, wherein: the multiple channels of top-down image data include an object, position data associated with the object, and velocity data associated with the object. AC. The method of any of paragraphs Z-AB, further comprising: determining a vehicle action based on the caution notification. AD. The method of paragraph AC, wherein: the vehicle action includes at least one of modifying a vehicle trajectory, slowing the vehicle, or stopping the vehicle. AE. The method of paragraph AC or AD, wherein: the vehicle action includes at least one of logging data associated with a vehicle status, logging data associated with the environment proximate a vehicle, or logging the scene data. AF. The method of any of paragraphs Z-AE, further comprising: determining a risk associated with the scene data; and determining at least one safety surrogate metric associated with the scene data. AG. The method of paragraph AF, wherein: the safety surrogate metric is used to train the CNN discriminator. AH. The method of any of paragraphs Z-AG, further comprising: receiving sensor data from a sensor associated with the vehicle; and determining the scene data based at least in part on the sensor data. AI. The method of any of paragraphs Z-AH, further comprising: transmitting a request for a command to a remote computing device based on determining that the scene data is determined by the CNN discriminator to be a generated scene. AJ. The method of any of paragraphs Z-AI, further comprising: inputting an indication of a scene to a planning system associated with the vehicle, wherein the indication of the scene is a high caution mode. AK. 
One or more non-transitory computer-readable media storing instructions that, when executed, cause one or more processors to perform operations comprising: receiving scene data associated with an environment proximate a vehicle; inputting the scene data to a convolutional neural network (CNN) discriminator trained using a generator and a classification of an output of the CNN discriminator; receiving, from the CNN discriminator, an indication of whether the scene data is a generated scene or a captured scene; responsive to an indication that the scene data is a generated scene: generating a caution notification indicating that a current environmental situation is different from any previous situations; and communicating the caution notification to at least one of a vehicle system or a remote vehicle monitoring system. AL. The one or more non-transitory computer-readable media of paragraph AK, wherein the operations further comprise: determining a vehicle action based on the caution notification. AM. The one or more non-transitory computer-readable media of paragraph AL, wherein: the vehicle action includes at least one of modifying a vehicle trajectory, slowing the vehicle, or stopping the vehicle. AN. The one or more non-transitory computer-readable media of any of paragraphs AK-AM, wherein the operations further comprise: determining a risk associated with the scene data; and determining at least one safety surrogate metric associated with the scene data. While the example clauses described above are described with respect to one particular implementation, it should be understood that, in the context of this document, the content of the example clauses can also be implemented via a method, device, system, computer-readable medium, and/or another implementation. Additionally, any of examples A-AN may be implemented alone or in combination with any other one or more of the examples A-AN.
CONCLUSION
While one or more examples of the techniques described herein have been described, various alterations, additions, permutations and equivalents thereof are included within the scope of the techniques described herein. In the description of examples, reference is made to the accompanying drawings that form a part hereof, which show by way of illustration specific examples of the claimed subject matter. It is to be understood that other examples can be used and that changes or alterations, such as structural changes, can be made. Such examples, changes or alterations are not necessarily departures from the scope with respect to the intended claimed subject matter. While the steps herein can be presented in a certain order, in some cases the ordering can be changed so that certain inputs are provided at different times or in a different order without changing the function of the systems and methods described. The disclosed procedures could also be executed in different orders. Additionally, various computations described herein need not be performed in the order disclosed, and other examples using alternative orderings of the computations could be readily implemented. In addition to being reordered, the computations could also be decomposed into sub-computations with the same results.
11858515
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art may easily carry out the embodiments. The present invention may, however, be embodied in many different forms, and should not be construed as being limited to the embodiments set forth herein. In the drawings, parts irrelevant to the description of embodiments of the present invention will be omitted for clarity. Like reference numerals refer to like elements throughout the specification. Throughout the specification, when a certain part "includes" or "comprises" a certain component, this indicates that other components are not excluded, and may be further included unless otherwise noted. The same reference numerals used throughout the specification refer to the same constituent elements. Before explaining a vehicle and a driving control method thereof according to embodiments of the present invention, the concept of a turning radius for turning of a vehicle, a route generation method, and problems pertaining thereto will be described.
FIG. 1 is a diagram for explaining the turning radius of a vehicle. The turning radius modeling of a vehicle shown in FIG. 1 is based on the Ackerman front-wheel steering mechanism. A steering angle δ is an angle formed by an extension line of the center of a knuckle arm of an imaginary wheel 110 between two front wheels and an extension line of the center of a knuckle arm of an imaginary wheel 120 between two rear wheels, and a point at which the two extension lines intersect is a center of turning. In this case, the relationship between the parallel distance R_R between the center of the vehicle and the center of turning, the wheelbase l of the vehicle, and the steering angle δ is as shown in Equation 1 below.
\tan(\delta) = \frac{l}{R_R}, \quad R_R = l\,\cot(\delta) \quad \text{(Equation 1)}
Based on Equation 1 above, the distance between the center of the vehicle with respect to the overall width and overall length of the vehicle and the center of turning, that is, the turning radius R, may be expressed as in Equation 2.
R = \sqrt{a^2 + R_R^2} = \sqrt{a^2 + (l\,\cot(\delta))^2} \quad \text{(Equation 2)}
In Equation 2, "a" represents the distance between the center of the vehicle and the rear axle. Equation 2 may be transformed into Equation 3 below.
\cot(\delta)^2 = \frac{R^2 - a^2}{l^2} \quad \text{(Equation 3)}
Based on Equation 3 above, the steering angle δ may be expressed as in Equation 4 below.
\delta = \operatorname{atan}\left(\frac{1}{\sqrt{(R^2 - a^2)/l^2}}\right) \quad \text{(Equation 4)}
As a result, in Equation 4, since "l" and "a" are fixed values with respect to the vehicle, if the turning radius R is known, the steering angle δ can be obtained.
FIG. 2 is a diagram for explaining a general turning situation of a vehicle. Referring to FIG. 2, when a vehicle 100 having a general size is following a driving route in an autonomous driving mode, if the driving route includes a turning route having a sharp curve, for example, a 90-degree right turn, the turning radius R_ControlPoint of the turning section may be obtained using a waypoint of the current driving lane and a waypoint of the target driving lane after the right turn. If the turning radius is obtained, the steering angle for following the corresponding turning radius may be obtained as in Equation 4. At this time, when the general vehicle 100 turns at the corresponding steering angle, the minimum turning radius R_min (corresponding to R_RR in FIG. 1), which is the turning radius of the inner rear wheel, is larger than a required turning radius R_Req_min (i.e.
R_min > R_Req_min), and thus the general vehicle 100 is capable of turning without any problems. Here, the required turning radius R_Req_min is the minimum turning radius required for the inner rear wheel, which is located closest to the center of turning in the turning section in the driving route, to avoid deviating from the driving lane to the inside in the turning direction (i.e. to avoid movement beyond the inner boundary of the road). The required turning radius R_Req_min may be obtained as in Equation 5 below.
R_{Req\_min} = R_{ControlPoint} - \frac{Road\_width}{2} \quad \text{(Equation 5)}
In Equation 5, "Road_width" refers to a value set in consideration of the width of the lane and the margin of the road. However, this method is problematic in the case of commercial vehicles having a relatively long overall length, such as buses or trucks. This will be described with reference to FIG. 3.
FIG. 3 is a diagram for explaining a problem that occurs when a commercial vehicle turns. Referring to FIG. 3, when the turning radius R_ControlPoint of the center ControlPoint of a vehicle 200 with respect to the overall length and overall width of the vehicle is obtained in the same manner as that shown in FIG. 2, the required turning radius R_Req_min is dependent on the shape of the road and thus is a fixed value, but the value of "a" of a vehicle having a relatively long overall length increases. Therefore, according to Equation 2, the minimum turning radius R_min decreases for the same turning radius R_ControlPoint. As a result, in the case of "R_min < R_Req_min", the inner rear wheel of the vehicle moves beyond the inner boundary of the road. In order to solve this problem, according to an embodiment of the present invention, whether the minimum turning radius is larger than the required turning radius is determined, and if not, the center ControlPoint of the vehicle with respect to the overall length and overall width of the vehicle is moved in the width direction of the lane so that the minimum turning radius becomes larger than the required turning radius, whereby a corrected route having a larger turning radius is generated, and the vehicle is controlled to follow the corrected route. This will be described below with reference to FIG. 4.
FIG. 4 is a diagram for explaining correction of a route of a vehicle according to an embodiment of the present invention. Referring to FIG. 4, in the case of "R_min < R_Req_min" when the vehicle 200 follows an existing waypoint-based driving route, the center ControlPoint of the vehicle is moved by ΔR in the width direction of the lane toward the outside in the turning direction, whereby a corrected route having a larger turning radius R′_ControlPoint is generated, and thus the condition "R′_min > R_Req_min" is satisfied. FIG. 4 illustrates the state in which each of the driving route before turning and the target route after turning is moved by ΔR. In some embodiments, however, only the driving route may be corrected, or only the target route may be corrected. That is, the movement of the center ControlPoint of the vehicle may be performed on at least one of the driving route or the target route. Hereinafter, the configuration of an autonomous driving apparatus and a driving control method using the same for generating the corrected route described above with reference to FIG. 4 and for controlling the vehicle to follow the corrected route will be described with reference to FIGS. 5 to 7.
FIG. 5 is a block diagram showing an example of the configuration of a vehicle according to an embodiment of the present invention.
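By way of example and not limitation, Equations 1, 2, 4, and 5 above translate directly into the following helper functions (angles in radians; the function and parameter names are chosen for readability and are not part of this disclosure).

import math

def turning_radius(delta, a, l):
    # Equation 1: R_R = l * cot(delta); Equation 2: R = sqrt(a^2 + R_R^2).
    r_r = l / math.tan(delta)
    return math.sqrt(a ** 2 + r_r ** 2)

def steering_angle(r, a, l):
    # Equation 4: delta = atan(1 / sqrt((R^2 - a^2) / l^2)).
    return math.atan(1.0 / math.sqrt((r ** 2 - a ** 2) / l ** 2))

def required_turning_radius(r_control_point, road_width):
    # Equation 5: R_Req_min = R_ControlPoint - Road_width / 2.
    return r_control_point - road_width / 2.0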
Referring toFIG.5, the vehicle according to the embodiment may include a driving control apparatus500, and the driving control apparatus500may include an information acquisition unit510, a route generator520, and a driving controller530. The information acquisition unit510, the route generator520, and the driving controller530may perform communication through a vehicle network, and the vehicle network may include any of various in-vehicle communication systems, such as controller area network (CAN), CAN with flexible data rate (CAN-FD), FlexRay, media-oriented systems transport (MOST), and time-triggered Ethernet (TI Ethernet). However, the above are given merely by way of example, and the embodiment is not limited thereto. The information acquisition unit510may include a detector511, a position recognizer512, and a high-definition map transmitter513. The detector511may include an outer sensor for sensing information on the environment surrounding the vehicle in real time and an inner sensor for measuring information on the state of the vehicle. The outer sensor may include an image sensor and a distance measurement sensor, which are installed on at least one of the front side, the lateral side, or the rear side of the vehicle. The image sensor may collect information on the images of the surroundings of the vehicle captured by an optical system, and may perform image processing, such as removal of noise, adjustment of image quality and saturation, and file compression, on the image information. The distance measurement sensor may measure the distance between the vehicle and an object or the relative speed of the object, and may be implemented as a radio detection and ranging (RaDAR) sensor or a light detection and ranging (LiDAR) sensor. A radar sensor measures the distance to an object present in the vicinity of the vehicle, the heading of the object, the relative speed of the object, and the altitude of the object using electromagnetic waves, and is capable of achieving long-distance recognition and performing the functions thereof in bad weather. A LiDAR sensor radiates a laser pulse toward a region ahead of the vehicle on the road and generates point-shaped LiDAR data from a laser pulse reflected from the object. Such a LiDAR sensor has a precise resolution, and thus is mainly used to detect an object present in the vicinity of the vehicle. The inner sensor may include a speed sensor, an acceleration sensor, and a steering angle sensor for respectively measuring the current speed, the acceleration, and the steering angle of the vehicle, and may periodically collect information on the states of various actuators. The position recognizer512may serve to recognize the position of the host vehicle. To this end, the position recognizer512may include a global positioning system (GPS) receiver. The GPS receiver is a sensor configured to estimate the geographic position of the vehicle. The GPS receiver may receive a navigation message from a GPS satellite located far from the surface of the earth, and may collect information on the current position of the vehicle in real time based thereon. The high-definition map transmitter513may have stored therein in advance a high-definition map, in which road information, such as the shape, curvature, gradient, and slope of a road, and position information corresponding to the road information are recorded, in the form of a database. The high-definition map may include road network data composed of nodes and lane links. 
Here, the node refers to a point at which the attributes of a road change, like an intersection or a junction. The lane link refers to a line that linearly connects roads located between nodes, i.e. a center line of a lane. The road network data includes information about lanes, which is formed by measuring in advance the physical properties (e.g. width, curvature, gradient, and slope) of each of the lanes belonging to the roads and digitizing the same. The road network data may be automatically updated periodically through wireless communication, or may be manually updated by a user. The route generator 520 may include a road turning radius calculator 521, a vehicle turning radius calculator 522, a corrected route determiner 523, and a risk determiner 524. The road turning radius calculator 521 determines whether a turning section is present ahead along the driving route based on the information acquired from the information acquisition unit 510. Upon determining that a turning section is present, the road turning radius calculator 521 calculates a turning radius according to the characteristics of the road. For example, the road turning radius calculator 521 may obtain the turning radius R_ControlPoint and the required turning radius R_Req_min of the turning section using the waypoint of the current lane and the waypoint of the target lane after turning. Since the method of obtaining the turning radius R_ControlPoint and the required turning radius R_Req_min is the same as described above with reference to Equations 1 to 5, a duplicate description thereof will be omitted. The vehicle turning radius calculator 522 may obtain the maximum turning radius R_max and the minimum turning radius R_min based on the turning radius R_ControlPoint calculated by the road turning radius calculator 521 in consideration of the overall length of the vehicle. The minimum turning radius R_min corresponds to R_RR in FIG. 1, i.e. the turning radius of the inner rear wheel, and may be obtained in a manner of obtaining R_R based on the steering angle obtained through the turning radius R_ControlPoint and then subtracting half the overall width of the vehicle from R_R. In addition, the maximum turning radius R_max is the distance from the center of turning to the outer front wheel, and may be obtained based on the relationship between the sum of R_R and half the overall width of the vehicle (R_R + half the overall width) and the wheelbase l. The corrected route determiner 523 determines whether the minimum turning radius R_min is larger than the required turning radius R_Req_min. Upon determining that the minimum turning radius R_min is not larger than the required turning radius R_Req_min, the corrected route determiner 523 moves the center ControlPoint of the vehicle by ΔR in the width direction of the lane toward the outside in the turning direction with respect to at least one of the driving route or the target route, thereby generating a corrected route, as described above with reference to FIG. 4. At this time, the correction amount ΔR may be obtained as in Equation 6 below.
\Delta R = R_{Req\_min} - R_{min} + R_{margin} \quad \text{(Equation 6)}
In Equation 6, "R_margin" is a margin value that is tuned in consideration of a control error and a vehicle movement prediction error. As a result, the turning radius R′_ControlPoint according to the corrected route is determined as in Equation 7 below.
R'_{ControlPoint} = R_{ControlPoint} + \Delta R \quad \text{(Equation 7)}
The risk determiner 524 may set a collision determination region based on the maximum turning radius and the minimum turning radius when the vehicle follows the corrected route, and may determine the likelihood of a collision in the set region based on the information acquired from the information acquisition unit 510. The concrete process of setting the collision determination region will be described later with reference to FIG. 7. Meanwhile, the driving controller 530 may control the steering system, the power system, and the braking system of the vehicle such that the vehicle follows the route generated by the route generator 520 or the corrected route. The driving control process for securing stable turning of a vehicle using the above-described driving control apparatus 500 will be described below with reference to FIG. 6.
FIG. 6 is a flowchart of an example of the process of controlling driving of a vehicle according to an embodiment of the present invention. Referring to FIG. 6, the route generator 520 may determine whether a turning section is present ahead along the driving route based on the information acquired from the information acquisition unit 510 (S610). When a turning section is present (Yes in S610), the route generator 520 may calculate a turning radius R_ControlPoint based on the waypoint of the existing route (S620). In addition, the route generator 520 may set a required turning radius R_Req_min according to the turning radius R_ControlPoint (S630), and may calculate a maximum turning radius R_max and a minimum turning radius R_min in consideration of the overall length and overall width of the vehicle (S640). The route generator 520 may determine whether the minimum turning radius R_min is larger than the required turning radius R_Req_min (S650). When the minimum turning radius R_min is larger than the required turning radius R_Req_min (Yes in S650), the route generator 520 may determine the risk of the target route. Thereafter, the driving controller 530 may perform control for driving along the route (S670). In some embodiments, when the existing route is not corrected, the determination of the risk may be omitted. On the other hand, when the minimum turning radius R_min is not larger than the required turning radius R_Req_min (No in S650), the route generator 520 may correct the existing route (S660). Accordingly, the route generator 520 may determine the risk of the corrected target route. Thereafter, the driving controller 530 may perform control for driving along the route (S670).
FIG. 7 is a diagram showing an example of determination of the risk of a corrected route according to an embodiment of the present invention. Referring to FIG. 7, when a commercial vehicle 200 turns along the corrected route, the maximum turning radius R_max also increases, so a portion of the commercial vehicle 200 may deviate from the lane corresponding to the target route to a lane adjacent thereto. Therefore, the risk determiner 524 may set a determination region based on the outer lane of the corrected target route and the maximum turning radius, may determine the risk of a collision with an obstacle in the determination region, and may transmit a determination as to whether to follow the corrected target route to the driving controller 530. At this time, the maximum turning radius needs to be determined based on the turning radius R′_ControlPoint of the corrected route, rather than the turning radius of the existing route.
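By way of example and not limitation, the check performed by the corrected route determiner 523 and Equations 5 through 7 may be combined as in the following sketch; the function name and argument names are illustrative only, and the caller is assumed to supply R_min obtained as described above.

def corrected_turning_radius(r_control_point, road_width, r_min, r_margin):
    r_req_min = r_control_point - road_width / 2.0        # Equation 5
    if r_min > r_req_min:
        return 0.0, r_control_point       # existing route can be followed as-is
    delta_r = (r_req_min - r_min) + r_margin              # Equation 6
    return delta_r, r_control_point + delta_r             # Equation 7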
According to the embodiments described above, when a vehicle having a relatively long overall length, such as a bus or a truck, is driven in an autonomous driving mode and makes a turn with a large turning radius, for example, a 90-degree right turn, the vehicle is capable of following a route without colliding with a boundary part of the road. In addition, in the process of controlling a vehicle to follow a route including a turning section at which the vehicle turns with a large turning radius, the risk of a collision with an obstacle in a region outside the target lane is determined, whereby the safety of the vehicle is ensured. Embodiments of the present invention may be implemented as code that can be written on a computer-readable recording medium and thus read by a computer system. The computer-readable recording medium includes all kinds of recording devices in which data that may be read by a computer system are stored. Examples of the computer-readable recording medium include a Hard Disk Drive (HDD), a Solid-State Disk (SSD), a Silicon Disk Drive (SDD), Read-Only Memory (ROM), Random Access Memory (RAM), Compact Disk ROM (CD-ROM), a magnetic tape, a floppy disc, and an optical data storage. As is apparent from the above description, a vehicle associated with at least one embodiment of the present invention, configured as described above, is capable of safely making a turn when following a route generated based on a high-definition map. In particular, when a vehicle turns along a route generated based on a high-definition map, it is possible to predict the likelihood of a collision. When a collision is predicted to occur, the route is corrected through determination of the risk so as to provide a large turning space for avoiding a collision, thereby ensuring the safety of the vehicle. However, the effects achievable through embodiments of the present invention are not limited to the above-mentioned effects, and other effects not mentioned herein will be clearly understood by those skilled in the art from the above description. It will be apparent to those skilled in the art that various changes in form and details may be made without departing from the spirit and essential characteristics of the invention set forth herein. Accordingly, the above detailed description is not intended to be construed to limit the invention in all aspects and is to be considered by way of example. The scope of the invention should be determined by reasonable interpretation of the appended claims and all equivalent modifications made without departing from the invention should be included in the following claims. While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments. | 20,508 |
11858516 | In the drawings, like reference numerals are sometimes used to designate like structural elements. It should also be appreciated that the depictions in the Figures are diagrammatic and not necessarily to scale. DETAILED DESCRIPTION The present application will now be described in detail with reference to a few non-exclusive embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art, that the present disclosure may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail in order to not unnecessarily obscure the present disclosure. Referring to FIG. 1, a profiler system 10 mounted on a host vehicle 12 is illustrated. The profiler system 10 includes a primary height sensor 14, a secondary height sensor 16, a vertical accelerometer 18, a Distance Measurement Instrument (DMI) 20, a main control unit 22, a Global Navigation Satellite System (GNSS) receiver 24, an Inertial Navigation System (INS) 26 and a computer 28, such as a laptop, tablet computer, smart phone or other computing device, typically (although not necessarily) located in the cabin of the host vehicle 12. In alternative embodiments, the computer 28 can be a desktop or server computer provided at another location, such as an office. In various embodiments, the DMI 20 can be wheel-based (as illustrated) or located elsewhere on the host vehicle 12, such as a bumper, door or other body panel, axle, etc. In addition, the DMI 20 can either be an encoder, a GPS-based distance measurement device, an Onboard Diagnostic Signal (e.g., an OBD-II signal), an INS-based distance measuring device, a radar sensor, or a combination of some or all of these devices. In various embodiments, the INS 26 includes an Inertial Measurement Unit (IMU) (not illustrated in FIG. 1) and the GNSS receiver 24. In yet other embodiments, the INS 26 may also include the DMI 20, or alternatively, the DMI 20 can be separate as illustrated. In other embodiments, an inclinometer, multi-axis accelerometer, gyroscope, or any other type of tilt sensor can be used in place of the INS 26. As described in more detail below, the profiler system 10 is capable of blending inertial profile data and running slope data, enabling the generation of surface profiles with no minimum speed requirement. As a result, the profiler system 10 is capable of generating surface profiles previously not possible, including during stoppages, accelerations, and decelerations, or at very slow speeds, such as below the minimum thresholds typically required by prior art profilers, and without lead-in or lead-out sections. Single Track Versus Multiple Tracks The embodiment shown in FIG. 1 is a "single-track" configuration, meaning the primary height sensor 14, the secondary height sensor 16, and the vertical accelerometer 18 are all longitudinally mounted on the vehicle 12, typically along a first track. With a double-track implementation, an additional set of height sensors and an accelerometer are longitudinally arranged on the vehicle, usually along the second track. For third or additional track implementations, a similar set of height sensors and an accelerometer is used for each track respectively. Each track can be laterally positioned anywhere on the vehicle.
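Purely as an assumed illustration of how the sensor suite described above might be represented in software, a per-track configuration could look like the sketch below; none of these type or field names come from the patent.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TrackSensors:
        primary_height_sensor: str                 # e.g., the sensor referred to as 14
        secondary_height_sensor: Optional[str]     # None for a "projected" track (described later)
        vertical_accelerometer: str                # e.g., the accelerometer referred to as 18
        lateral_offset_in: float                   # where the track sits across the vehicle

    @dataclass
    class ProfilerConfig:
        tracks: List[TrackSensors]                 # one entry per measured track
        dmi: str                                   # wheel-, GPS-, OBD-II-, INS- or radar-based
        ins: str                                   # INS (GNSS + IMU), or an inclinometer/tilt sensor

    single_track = ProfilerConfig(
        tracks=[TrackSensors("height-14", "height-16", "accel-18", lateral_offset_in=0.0)],
        dmi="wheel-encoder-20",
        ins="ins-26",
    )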
Multiple tracks are typically laterally spaced apart by some known distance. For the sake of simplicity, profile generation of just a single-track system is described below. In multiple-track implementations, multiple profiles, one for each track, are generated in parallel using the same or a similar method as described below. Single Track Road Surface Profile Generation Referring toFIG.2A, a block diagram of the main control unit22of the profiler system10is illustrated. The main controller unit22includes a running slope calculation unit32, an inertial profile calculation unit34and a data blending element36. The running slope calculation unit32is arranged to receive inputs from the primary height sensor14, the secondary height sensor16, the DMI20, and vehicle pitch data39generated by the INS26. The primary and secondary height sensors14and16are each arranged to measure the relative height of the host vehicle to the ground surface respectively. The DMI20is arranged to measure the incremental longitudinal distance of the host vehicle12while traveling over a surface. The INS26combines data from the GNSS24and an IMU38to generate the vehicle pitch data39. Inertial Navigation Systems (INS) are typically designed and used for measuring vehicle body motion and position where significant vehicle dynamics are present. The DMI20can also be used to help aid the INS26. Alternatively, as noted above, an inclinometer, multiple-axis accelerometer, gyroscope, or any other type of tilt sensor could be used instead of the INS26to obtain vehicle pitch data39. The running slope calculation unit32generates a time-based collection of angles of the surface traveled by the host vehicle12by:1. Calculating the height difference between the measured heights of the primary height sensor14and the secondary height sensor16;2. Determining an angle between the two height sensors14,16by dividing the height difference by the physical distance between the two sensors14,16. For instance, if the two sensors14,16are longitudinally arranged a distance “A” apart on the host vehicle12, then the height difference is divided by “A” to calculate the angle between the two sensors. It should be understood that longitudinal distance “A” between the two sensors14,16may be any longitudinal distance on the host vehicle12. In non-exclusive embodiments, the distance is 12 inches or 8.375 inches. Regardless of the distance “A”, the actual longitudinal distance is used to divide the height differential between the two sensors14,16to derive the angle;3. The calculated angle is then combined with vehicle pitch data39to obtain the running slope angle (or “running slope data”) of the surface between the two height sensors14,16. The inertial profile calculation unit34receives inputs from the first height sensor14, the vertical accelerometer18and the DMI20. The inertial profile calculation unit34double-integrates the vertical accelerometer sample data18on a time basis to get the time-based relative vehicle profile. The data from the primary height sensor14is then added to the time-based relative vehicle profile to obtain a time-based inertial profile of the road surface traveled by the host vehicle12. The data blending element36is responsible for combining (a) the running slope data as generated by the running slope calculation unit32and (b) the inertial profile as generated by the inertial profile calculation unit34. The running slope profile is generally less capable of measurements at shorter wavelengths. 
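Before the discussion turns to why the two profiles complement each other, the per-sample computations just described can be sketched roughly as follows. The array names, the small-angle treatment of the slope, the sign conventions, and the use of NumPy are assumptions made for illustration only.

    import numpy as np

    def running_slope(h_primary, h_secondary, pitch, sensor_spacing_in=12.0):
        """Running slope data: the height difference divided by the longitudinal sensor
        spacing "A", combined per sample with the vehicle pitch from the INS."""
        sensor_angle = (h_primary - h_secondary) / sensor_spacing_in    # small-angle slope
        return sensor_angle + pitch

    def inertial_profile(accel_z, h_primary, dt):
        """Inertial profile: double-integrate the vertical acceleration over time to get
        the relative vehicle profile, then add the primary height sensor readings."""
        velocity = np.cumsum(accel_z) * dt              # first integration
        vehicle_profile = np.cumsum(velocity) * dt      # second integration
        return vehicle_profile + h_primary

    # Hypothetical 1 kHz samples:
    dt = 0.001
    t = np.arange(0.0, 2.0, dt)
    accel_z = 0.02 * np.sin(2 * np.pi * 1.5 * t)              # made-up acceleration (in/s^2)
    h1 = 10.0 + 0.05 * np.sin(2 * np.pi * 0.5 * t)            # made-up height readings (in)
    h2 = 10.0 + 0.05 * np.sin(2 * np.pi * 0.5 * t - 0.1)
    slope = running_slope(h1, h2, pitch=np.zeros_like(t))
    profile = inertial_profile(accel_z, h1, dt)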
The distance between the two height sensors 14, 16 limits the capability of the running slope profile to accurately measure any wavelength less than that distance (e.g., 12 inches or 8.375 inches for the embodiments mentioned above). The running slope profile is, therefore, more accurate on longer wavelengths without profile drift. On the other hand, the inertial surface elevation profile tends to be more accurate at the shorter wavelengths, but tends to drift over longer wavelengths. The data blending element 36 therefore: (1) filters out inaccurate short wavelength components from the running slope data by applying a filter to obtain long wavelength running slope data; (2) re-samples the running slope data to the distance domain; (3) integrates the distance-based running slope data to obtain a distance-based running slope profile; (4) filters the inertial profile to remove long wavelengths; (5) re-samples the inertial profile data to the distance domain; and (6) adds the long wavelength running slope profile to the short wavelength inertial profile. The net result of the data blending is the generation of an accurate "zero-speed" surface profile 40, regardless of the speed of the host vehicle. In other words, an accurate surface profile can be generated both (a) when there are vehicle stoppages, accelerations, and decelerations and (b) at very low speeds below a minimum speed, such as the 5, 10 or 15 mph commonly required with prior art profilers. Data Blending Algorithm Referring to FIG. 2B, the data processing steps for the data blending performed by the data blending element 36 are illustrated. The time-based running slope data 50, generated by the running slope calculation unit 32, is processed through a filter 52 for eliminating short wavelength components from the running slope data 50, resulting in filtered running slope data 54 with only long wavelength components. The filtered running slope data collection 54 is then sampled to the distance domain and integrated in step 56, resulting in a filtered running slope profile 58. As described in more detail below, the filtering may be performed in either the time or distance domain. The inertial profile 60, generated by the inertial profile calculation unit 34, is processed with a filter 62 that removes the long wavelength components from the inertial profile 60. As a result, a filtered inertial profile 64 with only short wavelength components is generated. Again, the filtering can be performed in either the time or distance domain. Finally, in the data blending 66, the filtered running slope profile 58 is added to the filtered inertial profile 64. As a result, the shorter wavelengths of the inertial profile 60 are blended with the longer wavelengths of the running slope data 50. Since the running slope 50 is more accurate on longer wavelengths while the inertial profile 60 tends to be more accurate at the shorter wavelengths, the combination or "blending" of the two results in a more accurate final surface profile with minimal to no drift. During operation of the profiler system 10, data is collected to generate both the running slope profile 50 and the inertial profile 60. Once the data collection has been completed, the process above is executed by the main controller 22 and/or the computer 28.
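One way the six blending steps above could look in code is sketched below, using a complementary pair of Butterworth filters around the cutoff frequency discussed later in the text and SciPy purely for illustration. The constant-speed shortcut used for the time-to-distance re-sampling is an assumption; a real implementation would re-sample with the DMI distance data.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def blend_zero_speed_profile(slope_time, inertial_time, fs_hz, speed_in_per_s,
                                 cutoff_hz=1.41, sample_spacing_in=1.0):
        """Keep long wavelengths of the running slope and short wavelengths of the
        inertial profile, re-sample both to the distance domain, then add them."""
        b_lo, a_lo = butter(2, cutoff_hz / (fs_hz / 2.0), btype="low")
        b_hi, a_hi = butter(2, cutoff_hz / (fs_hz / 2.0), btype="high")
        slope_long = filtfilt(b_lo, a_lo, slope_time)          # step (1)
        inertial_short = filtfilt(b_hi, a_hi, inertial_time)   # step (4)
        # Steps (2) and (5): naive time -> distance re-sampling at a constant speed.
        dist_in = np.arange(len(slope_time)) * speed_in_per_s / fs_hz
        grid = np.arange(0.0, dist_in[-1], sample_spacing_in)
        slope_d = np.interp(grid, dist_in, slope_long)
        inertial_d = np.interp(grid, dist_in, inertial_short)
        slope_profile = np.cumsum(slope_d) * sample_spacing_in  # step (3): integrate the slope
        return grid, slope_profile + inertial_d                 # step (6): blend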
As a result, a zero-speed road surface profile40is generated, regardless of the speed of the host vehicle12, including for data collected during stops, during acceleration or deceleration of the host vehicle, or when the host vehicle12is traveling at a very low speed (e.g., 5 MPH or less), such as less than minimum speed requirements needed by prior art profilers, and also without any need for lead-in or lead-out distances. The above described process is for a profiler system configured to measure a single track of surface data. In the case of multiple tracks, then the above process is essentially repeated in parallel for the multiple track(s) using data samples collected for each multiple track respectfully. In a non-exclusive embodiment, the zero-speed profile40can be generated in essentially real time, meaning as the data is collected, it is processed “on the fly” by either the main control unit22and/or the computer28. In alternative embodiments, the data processing can be performed elsewhere. For example, during profile runs, the data is collected as described herein and stored in either (or both) the main control unit22and/or the computer28. The data processing for generating the zero-speed profile40can then later be performed by either the main control unit22or the computer28. As previously noted, the computer28does not necessarily have to be located in the host vehicle12, but rather can be located at a remote office. In such circumstances, the data is typically collected in the field and stored. The data is then later transferred to the computer28, regardless of where located, and the data processing is performed, resulting in the zero-speed profile40. Filters In one non-exclusive embodiment, the Applicant has elected to use a time domain low-pass filter cutoff of approximately 1.41 Hz for filter52and a complementary high-pass filter with a cutoff of the same frequency for filter62. The particular cutoff of 1.41 Hz used herein is merely exemplary and should not be construed as limiting in any regard. In different time-domain embodiments, cut off times that are either higher or lower than 1.41 Hz may be used. The filtering may alternatively be done in the distance domain as well. For example, a cutoff of 30 feet may be used (assuming a vehicle speed of approximately 30 mph), which is roughly equivalent to the 1.41 Hz in the time domain. Again, when filtering in the distance domain, more or less than 30 feet may be used. It is further noted that a number of factors may be considered when selecting a particular cut off frequency in either the time or distance domains. Such factors may include the data sampling characteristics, the frequency response, and the waveband accuracies of various sensors, etc. In yet another alternative embodiment, a Kalman Filter or other complementary filters may be used with the inertial profile60and running slope profile50to achieve similar results of utilizing the shorter wavelengths of the inertial profile data and the longer wavelengths of the running slope profile data. Projected Additional Track Under certain circumstances, using two height sensors for any second or additional track may not be feasible, desirable, or economical. In which case, a “projected” second or additional track may be implemented by using only a primary height sensor and accelerometer for the second and/or additional track and vehicle roll data in place of any secondary height sensor normally required for a running slope profile. 
For a second projected track for example, a second primary height sensor and the vehicle roll data are used for determining a cross slope from the main track to the additional track location (e.g., from left to right or vice-versa) of the host vehicle12. From the cross slope, a “projected” second running slope profile may be accurately estimated from the main track running slope profile. Once the projected running slope profile is defined, the additional projected zero-speed profile can be generated in a manner similar to that described above. If additional projected track(s) are desired, additional projected running slope profile(s) are generated in a similar manner using only a primary height sensor and accelerometer for each additional track and vehicle roll data in place of any secondary height sensor respectively. Referring toFIG.3A, a block diagram of a profiler system10that relies on a projected running slope for the projected additional track is illustrated. In this embodiment, the profiler includes, from the first or main track, the primary and secondary height sensors14,16, DMI20, INS26that includes GNSS24and IMU38, and the running slope calculation unit32. Again, in an alternative embodiment, an inclinometer or any tilt sensor can be used in place of the INS26for generating the vehicle pitch and roll data. The profiler system10further includes an additional track vertical accelerometer80, an additional track height sensor82, vehicle roll data84generated by the INS26, an additional track inertial profile calculation unit86, and a data blending element88. The additional vertical accelerometer80and the additional track height sensor82are typically arranged longitudinally along the additional track of the host vehicle12, opposite and parallel to the first or primary track. The running slope calculation unit32generates a running slope profile of the surface traveled by the host vehicle12from the height sensors14,16, and the vehicle pitch data39as calculated by the INS26as previously described. The additional track inertial profile calculation unit86generates an inertial profile for the additional track from the additional track vertical accelerometer80and the additional track height sensor82, similar to the inertial profile calculation unit34as already described. The data blending element88, as described in more detail below with regard toFIG.3B, blends the two profiles together, along with vehicle roll data84, to generate a projected additional track zero-speed profile90. Referring toFIG.3B, a logical block diagram of the data blending element88is illustrated. For the main track, the data blending element88includes a filter52for eliminating short wavelength components from the main track running slope data50, resulting in filtered running slope data54with only long wavelength components. The running slope data54is then re-sampled to the distance domain. An integrator56integrates the filtered running slope data54based on the distance sample interval, resulting in a filtered running slope profile58with only long wavelength components. For the additional track inertial data, the data blending element88includes a filter62that removes the long wavelength components from the additional track inertial profile60. As a result, a filtered inertial profile64with only short wavelength components is generated. A cross slope92of the road surface is calculated from the main track primary height sensor14, the additional track height sensor82and the vehicle roll data84. 
With the two sensors14,82located a known distance apart transversely on the vehicle, and the amount of roll of the host vehicle12as indicated by the vehicle roll data84, the cross slope92of the roadway can be accurately measured. The cross slope92is then filtered by filter94, removing the short wavelength components, resulting in a filtered cross slope96with only long wavelength components. Each of the filters52,62and94can operate in either the time domain or the distance domain. Also, the cut-off for each of the filters52,62and94can be the same or different. In a non-exclusive embodiment, a cut off of 1.41 Hz is used for each of the filters52,62and94. Again, other cut offs, in either the time or distance domains, may be used, such as but not limited to 30 feet (at vehicle speeds of 30 mph). Furthermore, Kalman filters and/or other complementary filters may be used as well. A projected filtered running slope profile98is generated by projecting the filtered running slope profile58onto to the alternate track using the filtered cross slope96and transverse distance between the first or main track and the alternate track. In other words, the projected filtered running slope profile is derived by modifying the main track filtered running slope profile by the degree of cross-slope on the surface between the main track and alternate track. In an alternative embodiment, un-filtered running slope data can be projected onto the additional track using un-filtered cross-slope data. The resulting projected running slope data can then be filtered to achieve a similar result. These are just two examples of many different ways a projected running slope profile can be generated. The data blending element100generates the additional “projected” track zero-speed profile90by adding the projected filtered running slope profile98with the additional tracks filtered inertial profile64. The result is an accurate projected additional track zero-speed profile90with little to no drift. In one non-exclusive embodiment, the aforementioned process is repeated as data samples are collected. With each set of collected data samples, the projected track zero-speed profile90is updated as the host vehicle12travels over the road surface. In an alternative embodiment, the data can be collected during a profile run, including during accelerations, decelerations, stops and when the host vehicle is traveling at a very slow speed (i.e., below threshold speeds typically required for prior art profilers) and without lead-in or lead-out distances. The collected data is then saved. At a later time, the data is processed as described above, resulting in the generation of one or more zero-speed profile(s)40and/or second and/or additional projected additional track zero-speed profile(s)90. Referring toFIG.4, a flow diagram110is illustrated for generating a road surface profile without any speed requirement, including during stops, while the profiler system10accelerates or decelerates, or while moving at very slow speeds, including below thresholds commonly required for prior art profilers (e.g., 5, 10 or 15 mph), and without any lead-in or lead-out distances. In the initial step112, it is determined if the profiler is in data collection (i.e., operational) mode or not. In step114, data is collected from the various onboard sensors regardless of the host vehicle12being stationary or moving, and if the latter, regardless of the speed. 
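Stepping back from the flow diagram for a moment, the projected-track construction described above (a cross slope computed from the two primary height sensors and the vehicle roll, then a running slope profile projected onto the additional track) can be sketched as follows. The small-angle treatment, the sign conventions and the particular projection arithmetic are one possible reading and are assumptions, since the description does not spell out the formulas.

    import numpy as np

    def cross_slope(h_main, h_additional, roll, transverse_spacing_in):
        """Per-sample cross slope between the main and additional tracks from the two
        primary height sensors and the vehicle roll data (small-angle assumption)."""
        return (h_main - h_additional) / transverse_spacing_in + roll

    def project_running_slope(slope_main, cross_slope_filtered, transverse_spacing_in,
                              sample_spacing_in=1.0):
        """Project the (long wavelength) main-track running slope profile onto the
        additional track by the degree of cross slope between the two tracks."""
        # Elevation offset between the tracks at each longitudinal sample ...
        offset = cross_slope_filtered * transverse_spacing_in
        # ... and the additional-track slope as the main-track slope plus the
        # longitudinal gradient of that offset.
        return slope_main + np.gradient(offset, sample_spacing_in)

The projected profile would then be integrated and blended with the additional track's short wavelength inertial profile in the same way as in the single-track blending sketch.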
With single-track implementations, or multiple single-track implementations, the sensors include the primary and secondary height sensors 14, 16, the vertical accelerometer 18, and the vehicle pitch data 39 generated by the INS 26 or by an inclinometer or tilt sensor. With a projected track implementation, data would also be collected from additional sensors, including any additional track vertical accelerometer 80, any additional track height sensor 82 and the vehicle roll data 84. In step 116, the running slope profile is updated with the newly collected data samples. The inertial profile is updated as well in step 118. Both are updated as previously described with respect to either FIG. 2A or 3A, depending on whether the running slope is for the main or the projected track. Control is then returned to decision step 112 and the above steps are repeated so long as the profiler system 10 is in the data collection mode. In step 120, the running slope profile and the inertial profile are blended together as described above with regard to FIG. 2B (single track) or 3B (projected track). As a result, the zero-speed profile 40/90 is generated. As a general rule, step 120 is performed in a post-processing step, meaning after all the data for a profiler run has been collected, processed and saved. In an alternative embodiment, however, the blending step 120 can also be done "on-the-fly", resulting in the generation of zero-speed profiles in near real time. Drift vs. No Drift Referring to FIG. 5, a plot comparing data drift with a conventional inertial profiler and a zero-speed profile without drift is illustrated. The plot includes distance in feet along the horizontal axis and elevation in inches along the vertical axis. The upper and lower profiles show drift. When the distance and the relative elevation of the host vehicle 12 are measured, the vertical acceleration data samples are collected in units of inches per second squared. By double-integrating, the data units are converted to just inches. The double integration, however, causes the data to drift during accelerations and decelerations, which is graphically illustrated in the diagram. The profiles in the middle of FIG. 5 are from the zero-speed system that is the subject of the present invention. The zero-speed profiles, on the other hand, show little to no drift. By combining the long wavelength running slope profile and the short wavelength inertial profile, most or all of the drift resulting from the double-integration process is mitigated or removed. As a result, zero-speed profiles can be generated, including during a stop, during acceleration or deceleration of the host vehicle, or at very slow speeds, such as below the minimum thresholds typically required by prior art profilers, and without any lead-in or lead-out distances. Data Processing Embodiments In additional embodiments, the data processing as described above with regard to FIGS. 2A and 2B and/or FIGS. 3A and 3B for generating the zero-speed profiles 40/90 can be performed by the main control unit 22, the computer 28, or some combination thereof. In some embodiments, the main control unit 22 can be implemented in hardware, software, or a combination thereof. In yet other embodiments, the main control unit 22 is a dedicated data processing unit or a general data processing unit, such as a computer. In the latter embodiments, the general data processing unit can be the same computer 28 or another computer and can be located on or near the host vehicle 12 or at a remote location, such as a home office.
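The drift compared in FIG. 5 follows directly from the double integration: any small bias in the measured vertical acceleration grows roughly quadratically with time. A tiny, entirely hypothetical illustration (the bias value and sampling rate are invented):

    import numpy as np

    dt = 0.001                                   # 1 kHz sampling (hypothetical)
    t = np.arange(0.0, 10.0, dt)                 # 10 seconds of data
    true_accel = np.zeros_like(t)                # vehicle standing perfectly still
    bias_in_per_s2 = 0.002                       # made-up accelerometer bias

    measured = true_accel + bias_in_per_s2
    elevation = np.cumsum(np.cumsum(measured) * dt) * dt   # double integration

    print(round(float(elevation[-1]), 3))        # roughly 0.1 inch of "drift" with no motion at all

Because the blended profile takes its long wavelength content from the running slope data instead, this slowly growing error is filtered out rather than accumulated.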
Embodiments Using 3D Surface Elevations to Generate Surface Elevation Profiles The Applicant has developed yet additional embodiments to create long wavelength profiles that can be combined with the inertial profile data. With these embodiments, INS data, including data from the GNSS24and/or IMU38, and 3D surface elevations collected by a lidar or similar sensor202, are processed to generate more accurate road surface elevations and vehicle dynamics information. This information is then selectively combined with inertial and/or height sensor information to generate zero-speed profiles of the road surface. In the various embodiments described in more detail below, the data processing of the INS, GNSS data and/or 3D surface elevations is performed in the time domain. In the particular embodiments described below, the low-pass filter cutoff that is used is approximately 1.41 Hz. Again, this cut off time is merely exemplary and cut off times that are less than or more than 1.41 Hz may be used. It should be understood that the processing as described herein can also be implemented in the distance domain as well. Correspondingly, a wide range of distances may be used when processing in the distance domain. Referring toFIG.6, a profiler system200mounted on a host vehicle12is illustrated. The profiler system200of this embodiment includes a primary height sensor14, a vertical accelerometer18, a Distance Measurement Instrument (DMI)20, a lidar sensor202, a main control unit22, a Global Navigation Satellite System (GNSS) receiver24, an Inertial Navigation System (INS)26and a computer28, such as a laptop, tablet computer, smart phone or other computing device, typically (although not necessarily) located in the cabin of the host vehicle12or at another location, such as an office. As each of the above elements was previously described, a detailed explanation is not provided herein for the sake of brevity. The profiler system200ofFIG.6differs from the same ofFIG.1in two regards. First, the secondary height sensor16is removed. The profiler system200, therefore, does not rely on pitch information generated by the secondary height sensor16as previously described. Second, a lidar or similar sensor202is provided on the host vehicle12. The lidar sensor202is arranged to generate 3D surface elevations of the road surface traveled just ahead of the vehicle as the vehicle10is traveling along a road surface. In an alternative embodiment, the lidar sensor202can be pointing behind the vehicle and is responsible for generating 3D surface elevations of the road surface behind the vehicle as the vehicle drives over the road surface. As is well known, the lidar sensor202scans the road surface, either in a rotating circle or in repeating scans in a direction orthogonal to the direction of travel of the vehicle. In response, the lidar unit202generates the 3D surface elevations of the road surface travelled by the vehicle10as is known in the art. Alternatively, other sensors [e.g. 3D laser profile sensor, a pairing of camera(s) and laser instrument(s) for surface imaging or measurement, stereo cameras, a LCMS (Laser Crack Measurement System, Pavemetrics Systems, Inc. Québec (Québec) Canada), etc.] 
capable of scanning a roadway from the vehicle to obtain 3D surface elevations can be used in place of the lidar sensor202 Referring toFIG.7, a flow diagram210implemented by the main control unit22and/or the computer28, either alone or in cooperation with other computers, for generating a zero-speed profile by the profiler system200is illustrated. In this diagram, the steps involving the primary height sensor14, vertical accelerometer18, DMI20, GNSS24, INS26, running slope calculation unit32, inertial profile calculation unit34, data blending element36are essentially all the same as previously described. As such, a detailed explanation of these elements is not provided again for the sake of brevity. The 3D surface elevations212is generated by combining data from the lidar sensor202and data from the INS26, including data from the GNSS24and/or the IMU38. The IMU38data is indicative of all the dynamics of the vehicle, including motion, acceleration, and rotational rate, in each of the X, Y and Z directions. When combined with data from the GNSS24, the net result is a vehicle elevation profile that is more accurate, particularly at longer wavelengths, for several reasons. First, the data from the INS26utilized data from the GNSS24that helps to maintain good long trend elevations once combined with the IMU38. Second, the data from the INS26can be used to compensate for abrupt vehicle dynamics, such as sharp accelerations or decelerations, hard stops, sharp turns, etc., all of which can cause significant anomalies in the measurements sensed by the vertical accelerometer18used to generate the inertial profile. The DMI20can also be used to help aid the INS26by incorporating that data into the Kalman Filtering method used by the INS sensor or post-processing. The 3D surface elevations212are utilized in place of the pitch profile as described above with respect toFIG.2A. In other words, a surface elevation profile216is extracted from the 3D surface elevations212by extrapolating along a line or a track of the vehicle, such as along where ever the primary height sensor14and vertical accelerometer18are located transversely on the vehicle. The running slope calculation unit32is arranged to receive the surface elevation profile216and input from the DMI20. The running slope calculation unit32in response generates running slope data of the surface traveled by the host vehicle12by:1. Extracting a longitudinal surface elevation profile216at a selected transverse location on the vehicle from the 3D surface elevations212in order to obtain the surface elevation profile216at any primary height sensor14location that is desired. The DMI20is arranged to measure the incremental longitudinal distance of the host vehicle12while traveling over the road surface.2. The extracted surface elevation profile216is then differentiated using a predetermined base length to obtain a distance-based running slope profile. The differentiation converts the longitudinal surface elevation profile216into a running slope profile. In one embodiment, the base length is one foot or 12 inches. Again, this base length is just an example and smaller or larger base lengths may be used.3. The running slope profile is then re-sampled from distance-based to time-based to create the running slope data50which is then used for Data Blending as shown inFIG.2B. The inertial profile calculation unit34receives inputs from the first height sensor14, the vertical accelerometer18and the DMI20. 
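Returning briefly to the running slope side before the inertial profile description continues, the extraction and differentiation steps listed above can be sketched as follows. Reducing the lidar output to a pre-gridded elevation array, and selecting the sensor track by a column index, are assumptions made only to keep the sketch short.

    import numpy as np

    def running_slope_from_surface(elevation_grid, lateral_index,
                                   base_length_in=12.0, sample_spacing_in=1.0):
        """Step 1: extract the longitudinal elevation profile at the transverse location
        of the primary height sensor; step 2: differentiate it over a fixed base length
        to obtain a distance-based running slope profile."""
        line = elevation_grid[:, lateral_index]                 # one longitudinal track line
        lag = int(round(base_length_in / sample_spacing_in))    # samples per base length
        return (line[lag:] - line[:-lag]) / base_length_in
        # Step 3, re-sampling the slope back to the time domain for blending, is omitted here.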
The inertial profile calculation unit34double-integrates the vertical accelerometer sample data18on a time basis to get the time-based relative vehicle elevation profile. The data from the primary height sensor14is then added to the time-based relative vehicle elevation profile to obtain a time-based inertial elevation profile of the road surface traveled by the host vehicle12. The data blending element36is responsible for combining (a) the running slope data as generated by the running slope calculation unit32and (b) the inertial profile as generated by the inertial profile calculation unit34. The running slope profile is generally less capable of measurements at shorter wavelengths and is, therefore, more accurate on longer wavelengths without profile drift. On the other hand, the inertial surface elevation profile tends to be more accurate at the shorter wavelengths, but tends to drift over longer wavelengths. The data blending element36therefore:(1) Filters out inaccurate short wavelength components from the running slope data by applying a filter to obtain long wavelength running slope data;(2) Re-samples the running slope data to the distance domain;(3) Integrates the running slope distance-based data to obtain a distance-based running slope profile;(4) Filters the inertial profile to remove long wavelength;(5) Re-samples the inertial profile data to the distance domain; and(6) Adds the long wavelength running slope profile to the short wavelength inertial profile. The net result of the data blending is the generation of an accurate “zero-speed” profile40of the road surface, regardless of the speed of the host vehicle. In other words, an accurate surface profile can be generated both (a) when there are vehicle stoppages, accelerations, and decelerations and (b) at very low speeds below a minimum speed, such as 5, 10 or 15 mph, as commonly required with prior art profilers, and without any lead-in or lead-out distances. Additional Tracks The above describe process can be used for one or more additional projected tracks. With the 3D surface elevations212, one or more additional tracks can be calculated using the same methodology as described above. The only difference being that with each track, the corresponding longitudinal line along the roadway is extracted from the 3D surface elevations212at different traverse locations with respect to the vehicle, each correlating to a primary height sensor used for each additional track respectively. INS Data Embodiment without Data Blending Referring toFIG.8A, another profiler system300including components mounted on a host vehicle12is illustrated. The profiler system300of this embodiment includes a primary height sensor14, a Distance Measurement Instrument (DMI)20, a main control unit22, a Global Navigation Satellite System (GNSS) receiver24, an Inertial Navigation System (INS)26and a computer28, such as a laptop, tablet computer, smart phone or other computing device, typically (although not necessarily) located in the cabin of the host vehicle12or at another location, such as an office. As each of the above elements was previously described, a detailed explanation is not provided herein for the sake of brevity. The profiler300differs from the same ofFIG.1in that the secondary height sensor16and the vertical accelerometer18are removed. The profiler300, therefore, does not rely on pitch information generated by the secondary height sensor16nor the vehicle profile generated by the vertical accelerometer18. 
Instead, the profiler300relies on height readings from the primary height sensor14, along with vehicle elevation profile314generated by the INS26. Referring toFIG.8B, a flow diagram310for generating a zero-speed profile is illustrated. These data processing steps can be implemented by the main control unit22and/or the computer28alone or in cooperation with other computers. In this embodiment, a vehicle elevation profile314is generated from processing of the data generated by the INS26, including data from one or more of the DMI20, GNSS24, and/or IMU38. In a non-exclusive embodiment, Kalman filtering is used to combine the data from the GNSS24and the IMU38and optionally data from the DMI20, resulting in the vehicle elevation profile314. In various embodiments, the INS26includes vehicle position information such as vehicle relative position, absolute position, acceleration, and velocity in each of the X, Y and Z directions (i.e., 9 degrees of freedom) as well other possible vehicle movements such as pitch, roll, and yaw respectively. Other forms of data from the INS26such as the vertical vehicle velocity or vertical vehicle acceleration can be used instead of absolute vehicle elevation by integrating or double-integrating respectively to get the vehicle elevation profile. This data from the INS26with or without DMI20can be processed with a Kalman Filter (or other commonly used similar filtering methods for combining GNSS and IMU data) in real-time with or without real-time Differential Global Positioning System or Differential Global Navigation Satellite Systems DGPS or DGNSS corrections from a base station, network of base stations, Satellite Based Augmentation System (SBAS) and/or Wide Area Augmentation System (WAAS) corrections, or any other method of receiving GNSS corrections. DGPS/DGNSS are methods of correcting GNSS receiver data to be more accurate to real-world positions than without using corrections. Basically, when DGPS/DGNSS are used, the mobile GNSS receiver receives the corrections data via cell modem, radio, internet, or satellite signal. Alternatively, the data from the INS26with or without DMI20can be post-processed using similar methods as used in real-time but with higher degrees of accuracy due to processing and filtering the data in both forwards and reverse directions and having the capabilities of utilizing more than one corrections service. Like with real-time processing, the post-processing can be done with or without DGPS/DGNSS corrections from a base station, network of base stations, SBAS/WAAS corrections, or any other method of receiving GNSS corrections. Using data from the INS26, a vehicle elevation profile314is extracted on a time basis. A commercially available software package that is capable of providing an all in one solution for performing the above-described post-processing of the INS and/or DMI data includes Waypoint Inertial Explorer, Novatel, Inc. Calgary, Alberta, Canada. The data samples from the primary height sensor14and the vehicle elevation profile314can then be re-sampled on a distance basis at the same sampling interval. In a data processing unit320, the data samples from the primary height sensor14and the vehicle elevation profile314are added together on a distance basis to derive a longitudinal, zero-speed profile40of the road surface. The advantage of the no blending embodiment is that it is simple to implement. 
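The simplicity is easy to see in a sketch of the computation just described: once both series are on a common distance grid, the height sensor readings and the INS-derived vehicle elevation profile are simply added. The interpolation helper, the names, and the sign convention of the height sensor are assumptions.

    import numpy as np

    def zero_speed_profile_no_blending(distance_in, h_primary, vehicle_elevation,
                                       sample_spacing_in=1.0):
        """Re-sample both series onto a common distance grid and add them, as in the
        data processing unit described above."""
        grid = np.arange(distance_in[0], distance_in[-1], sample_spacing_in)
        h = np.interp(grid, distance_in, h_primary)
        elev = np.interp(grid, distance_in, vehicle_elevation)
        return grid, elev + h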
The drawback, however, is that with currently available INS sensors, the sample frequency of the processed INS data may not be able to properly capture a vehicle's suspension frequency. To the extent such commercially available software package provided more sampling frequency options, or is customized to provide more sampling frequency options, the wider the applicability of this embodiment becomes. Additional Tracks The above described no blending process can be used for one or more additional projected tracks. Alternatively, one could use the same INS26sensor data of the primary zero-speed profile for any additional track by translating the vehicle elevation profile, using the X, Y, and Z offset of the INS and INS attitude, to any additional track's height sensor14. INS Data Embodiment with Data Blending With this embodiment, INS data is blended with inertial profile data so that bandwidth limitations associated with the INS data are largely circumvented. As a result, higher frequency components of the inertial profile data can be blended with the INS data. Referring toFIG.9, another profiler system400including components mounted on a host vehicle12is illustrated. The profiler system400of this embodiment includes a primary height sensor14, a vertical accelerometer18, a Distance Measurement Instrument (DMI)20, a main control unit22, a Global Navigation Satellite System (GNSS) receiver24, an Inertial Navigation System (INS)26and a computer28, such as a laptop, tablet computer, smart phone or other computing device, typically (although not necessarily) located in the cabin of the host vehicle12or at another location, such as an office. As each of the above elements was previously described, a detailed explanation is not provided herein for the sake of brevity. The profiler system400differs from the same ofFIG.1in that the secondary height sensor16is removed. The profiler system400, therefore, does not rely on pitch information. Instead, the profiler400relies on height readings from the primary height sensor14, along with vehicle elevation profile314derived from processing of the data generated by the INS26in a manner similar to that described above. Referring toFIG.10, a flow diagram410for generating a zero-speed profile of a road surface traveled by the host vehicle12by blending inertial profile data and INS data is shown. On the right side of the diagram, the inertial profile calculation unit34generates a time-based inertial elevation profile of the road surface traveled by the host vehicle12, as discussed above, by receiving the inputs from the first height sensor14, the vertical accelerometer18and the DMI20. The inertial profile calculation unit34then double-integrates the vertical accelerometer sample data18on a time basis to get the time-based relative vehicle elevation profile. The data from the primary height sensor14is then added to the time-based relative vehicle elevation profile to obtain the time-based inertial profile. On the left side of the diagram, the vehicle elevation profile314is generated as described above from either real-time processing or post-processing of the data from the INS26optionally with the DMI. Other forms of data from the INS26such as the vertical vehicle velocity or vertical vehicle acceleration can be used instead of absolute vehicle elevation by integrating or double-integrating respectively to get the vehicle elevation profile. 
The running slope calculation unit 32 receives the vehicle elevation profile 314, height data from the primary height sensor 14 and the incremental longitudinal distance traveled by the host vehicle 12 as measured by the DMI 20. In response, the running slope calculation unit 32 generates an elevation profile of the surface traveled by the host vehicle 12 by: 1. Adding the height measurement data from the primary height sensor 14 to the vehicle elevation profile 314 obtained from the INS 26 to get a time-based surface elevation profile; 2. The surface elevation profile is then re-sampled to a distance-based profile and differentiated using a predetermined base length to obtain a distance-based slope profile. Again, the predetermined base length in one embodiment is 12 inches. In other embodiments, a longer or shorter base length may be used. 3. The running slope profile is then re-sampled from distance-based to time-based to create the time-based running slope data. The inertial profile calculation unit 34 receives inputs from the first height sensor 14, the vertical accelerometer 18 and the DMI 20. The inertial profile calculation unit 34 double-integrates the vertical accelerometer sample data 18 on a time basis to get the time-based relative vehicle elevation profile. The data from the primary height sensor 14 is then added to the time-based relative vehicle elevation profile to obtain a time-based inertial elevation profile of the road surface traveled by the host vehicle 12. The data blending element 36 is responsible for combining (a) the running slope data as generated by the running slope calculation unit 32 and (b) the inertial profile as generated by the inertial profile calculation unit 34. The running slope profile is generally less capable of measurements at shorter wavelengths. The distance between the two height sensors 14, 16 limits the capability of the running slope elevation profile to accurately measure any wavelength less than that distance (e.g., one foot for the embodiment described above). The running slope profile is, therefore, more accurate on longer wavelengths without profile drift. On the other hand, the inertial surface elevation profile tends to be more accurate at the shorter wavelengths, but tends to drift over longer wavelengths. The data blending element 36 therefore: (1) filters out inaccurate short wavelength components from the running slope data by applying a filter to obtain long wavelength running slope data; (2) re-samples the running slope data to the distance domain; (3) integrates the distance-based running slope data to obtain a distance-based running slope profile; (4) filters the inertial profile to remove long wavelengths; (5) re-samples the inertial profile data to the distance domain; and (6) adds the long wavelength running slope profile to the short wavelength inertial profile. The net result of the data blending is the generation of an accurate "zero-speed" profile 40 of the surface, regardless of the speed of the host vehicle. In other words, an accurate surface profile can be generated both (a) when there are vehicle stoppages, accelerations, and decelerations and (b) at very low speeds below a minimum speed, such as 5, 10 or 15 mph, as commonly required with prior art profilers, and (c) without any lead-in or lead-out distances. In an alternate embodiment, the post-processing vehicle elevation profile 314 can be combined with the primary height sensor 14 and then filtered on a time basis to remove the short wavelength components instead of converting to a running slope profile.
This can then be combined with the short wavelength inertial profile on a time basis and re-sampled to the distance domain to create the zero-speed profile. In yet another alternate blending method, the vehicle elevation profile can be filtered by itself in the time domain to remove the short wavelength components. This long wavelength vehicle elevation profile can then be blended with the short wavelength components of the relative vehicle profile generated from the vertical accelerometer18to obtain a blended vehicle elevation profile. When adding on the primary height sensor data14to the resulting blended vehicle elevation profile and re-sampled to a distance basis, the result is an alternate method of obtaining the zero-speed profile. Other forms of data from the INS26such as the vertical vehicle velocity or vertical vehicle acceleration can be used instead of absolute vehicle elevation and combined with the vertical accelerometer18either in the acceleration domain or velocity domain respectively. Projected Additional Track Referring toFIG.11, a block diagram430illustrating the addition of a projected track for blending INS data and an additional track in a manner similar toFIG.10is shown. With this embodiment, the left side ofFIG.11is essentially the same asFIG.10. On the right side, the following elements are provided, including an additional track vertical accelerometer80, an additional track height sensor82, and vehicle roll data84generated by the INS26, an additional track inertial profile calculation unit86, and a data blending element88. The additional vertical accelerometer80and the additional track height sensor82are typically arranged longitudinally along the additional track of the host vehicle12, opposite and parallel to the first or primary track. The additional track inertial profile calculation unit86generates an inertial profile for the additional track from the additional track vertical accelerometer80and the additional track height sensor82, similar to the inertial profile calculation unit34as already described. The data blending element88, as described in more detail below with regard toFIG.3B, blends the running slope data and inertial profile together, along with vehicle roll data84, to generate a projected additional track zero-speed profile90. As these elements were all previously described, a detailed explanation is not repeated herein for brevity. Alternatively, the INS vehicle positions data can be translated using vehicle roll along with known x, y, and z offsets from the INS to the additional track's primary sensor to obtain the vehicle elevation profile at the additional track's location. This additional track's vehicle elevation profile can then be combined with the additional track's height sensor to be used for the running slope data of32inFIG.10where the additional track can essentially act as the primary track inFIG.10and the INS26location is merely translated to any additional track's location using x, y, and z offsets and attitude measurements of the INS26. In which case the data blending can once again be done either using the running slope data or vehicle elevation data when combining with the inertial profile as stated previously. Other Embodiments Conventional profilers include so called reference profilers and inertial profilers. 
Reference profilers are devices typically used to collect reference longitudinal profiles for evaluating the accuracy of inertial profilers used for pavement construction quality control and quality assurance applications, as well as inertial profilers used for network surveys or pavement management applications. Reference profilers are typically “stand-alone” devices that have their own rigid frame and wheels, onto which various height, inclination and/or distance measuring devices are configured in a specific manner to support accurate longitudinal profile measurement. Reference profiler devices are typically manually propelled or can be motorized for operation at very low speeds. Inertial profilers are typically devices that are attached to a host vehicle, such as a pickup truck, passenger car, golf cart or utility vehicle. Inertial profilers rely on one or more pairings of a primary height sensor and an accelerometer, in close proximity to each other, and a Distance Measurement Instrument (DMI), all of which are mounted onto the host vehicle. The various embodiments described herein involve the attachment of instruments, such as the primary height sensor14, the secondary height sensor16, the vertical accelerometer18, the Distance Measurement Instrument (DMI)20, the main control unit22, the Global Navigation Satellite System (GNSS) receiver24, the Inertial Navigation System (INS)26, and/or an Inertial Measurement Unit (IMU) onto a host vehicle, such as a pickup truck, heavy construction vehicle, passenger car, a golf or similar self-propelled cart, etc. This approach differs from conventional reference profilers. Unlike reference profilers, in various embodiments of the present invention, the various measurement devices mentioned herein do not depend on a rigid frame, any particular wheel spacing or alignment, or any particular positioning of the measuring devices relative to the vehicle's frame or wheels. The measurement devices of the present invention can be mounted onto various exterior or interior components of a host vehicle, such as body panels, bumper assemblies, floor surfaces and/or frame, or the wheels of the host vehicle. In fact, at least several of the measurement devices are typically attached in a manner that is suspended from the frame of the host vehicle. It should be understood, however, that the various embodiments described herein are not precluded from being used with conventional profilers and/or the various instruments as listed herein directly mounted onto the rigid frame of a vehicle. On the contrary, the some or all of the instruments as described herein can be implemented or otherwise mounted on any rigid frame or any part of a vehicle of any kind. Conventional inertial profilers are typically not equipped to measure a spectrum of vehicle dynamics (such as pitch, roll, tilt and yaw, etc.); nor are conventional inertial profilers capable of compensating for a spectrum of vehicle dynamics, beyond vertical acceleration, when generating road surface profiles. As a result, conventional inertial profilers have well known limitations, including a minimum operating speed threshold, minimum acceleration and deceleration rates of the host vehicle, and required lead-in and lead-out distances. In contrast, the various embodiments of the present invention add various measurement devices, including a secondary height sensor, a Global Navigation Satellite System (GNSS) receiver24, an Inertial Navigation System (INS) and/or an Inertial Measurement Unit (IMU). 
With the present invention, INS, GNSS and/or IMU data is among the instrumentation used to measure and compensate for vehicle dynamics when generating road surface profiles. The additional measuring devices and methodology of the present invention generate accurate and repeatable road surface profiles, with no vehicle speed threshold and without lead-in or lead-out distances. It is further noted that although the host vehicle12is depicted as a pick-up truck, again by no means is this a requirement. On the contrary, the host vehicle12can be any type of vehicle that is either motorized or non-motorized. For example, the host vehicle can be a common passenger car, a heavy piece of construction equipment, a cart such as a golf cart, or even a non-motorized vehicle such as a frame that is either pushed and/or pulled by a human operator or another vehicle. As such, the term vehicle as used herein should be widely construed as any device or apparatus capable of rolling or otherwise moving across a surface and supporting the various components and instruments of the profiler system10as described herein. In yet other embodiments, the sampling interval for collecting data samples is once every millisecond. It should be understood that this sampling interval is merely exemplary and a wide range of sampling intervals may be used, including more or less frequent than once every millisecond. Data can also be collected at a distance-based sample interval, such as every inch, rather than a time-based sample interval. Although only a few embodiments have been described in detail, it should be appreciated that the present application may be implemented in many other forms without departing from the spirit or scope of the disclosure provided herein. Therefore, the present embodiments should be considered illustrative and not restrictive and is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims. | 53,822 |
11858517 | DETAILED DESCRIPTION Embodiments of the present systems and methods may provide techniques that provide dynamic groups and attribute-based access control (ABAC) model (referred as CV-ABACG) to secure communication, data exchange and resource access in smart vehicles ecosystem. This model takes into account the user-centric privacy preferences along with system-defined policies to make access decisions. Embodiments of the present systems and methods may provide groups in context of smart cars, which are dynamically assigned to moving entities like vehicles, based on their current GPS coordinates, direction, or other attributes, to ensure relevance of location and time sensitive notification services offered to drivers, provide administrative benefits to manage large numbers of entities and enable attributes inheritance for fine-grained authorization. Internet of Things (IoT) has become a dominant technology which has proliferated to different application domains including health-care, homes, industry, power-grid, to make lives smarter. It is predicted that the global IoT market will grow to $457 Billion by year 2020, attaining a compound annual growth rate of 28.5%. Automation is leading the world today, and with ‘things’ around sensing and acting on their own or with a remote user command, has given humans to have anything accessible with a finger touch. Data generated by these smart devices unleash countless business opportunities and offer customer targeted services. IoT along with ‘infinite’ capabilities of cloud computing are ideally matched with desirable synergy in current technology-oriented world, which has been often termed as cloud-enabled, cloud-centric, or cloud-assisted IoT in literature. IoT is embraced by every industry with automobile manufacturers and transportation among the most aggressive. Vehicular IoT inherits intrinsic IoT characteristics but dynamic pairing, mobility of vehicles, real-time, location sensitivity are some features which separates it from common IoT applications. The vision of smart city incorporates intelligent transportation where connected vehicles can ‘talk’ to each other (V2V) and exchange information to ensure driver safety and offer location-based services. These intelligent vehicles can also interact with smart roadside infrastructure (V2I), with pedestrian on road (V2H) or send data to the cloud for processing. Basic safety messages (BSMs) are exchanged among entities using commonly used Wi-Fi like secure and reliable Dedicated Short Range Communication (DSRC) protocol. Vehicles can receive speed limit notification and flash flood alerts on car dashboard or via seat vibration. A car will receive information about nearby parking garages, restaurant offers or remote engine monitoring by authorized mechanic with nearby repair facility and discounts updating automatically. These services will provide pleasant travel experience to drivers and unleash business potential in this intelligent transportation domain. Smart internet connected vehicles embed software having more than 100 million lines of code to control critical systems and functionality, with plethora of sensors and electronic control units (ECUs) on board generating huge amounts of data so these vehicles are often termed as ‘datacenter on wheels’. As vehicles get exposed to external environment and internet, they become vulnerable to cyberattacks. Common security vulnerabilities including buffer overflow, malware, privilege escalation, and trojans etc. 
can be exploited in connected vehicles. Other potential threats include untrustworthy or fake messages from smart objects, malicious software injection, data privacy, ECU hacking and control, and spoofing connected vehicle sensor. With broad attack surface exposed via air-bag ECU, On-Board Diagnostics (OBD) port, USB, Bluetooth, remote key, and tire-pressure monitoring system etc. these attacks have become much easier to orchestrate. In-vehicle Controller Area Network (CAN) bus also needs security to protect message exchange among ECUs. Further, communication with external networks including cellular, Wi-Fi and insecure public networks of gas stations, toll roads, service garages, or after-market dongles are a big threat to connected vehicles security. Cyber incidents including Jeep and Tesla Model X hacks where engine was stopped and steering remotely controlled demonstrate security vulnerabilities. Smart car incidents have serious implications as they can even result in loss of human life. Access control mechanisms are widely used to restrict unauthorized access to resources and secure communication among entities. Attribute-based access control (ABAC) may provide finer granularity and offers flexibility in distributed multi-entity communication scenarios, which considers characteristics of participating entities along with system and environment properties to determine access decision. Smart cars ecosystem involves dynamic interaction and message exchange among connected objects, which must be authorized. It is necessary that only legitimate entities are allowed to control on-board sensors, data messages and receive notifications. Further, user-centric privacy requires that users can control what alerts they want to receive, what advertisements they are interested or who can access their car's sensors, etc. Embodiments of the present systems and methods may provide access control in connected smart cars and proposes an attribute-based access control model for connected vehicles ecosystem, referred as CV-ABACG. Embodiments may utilize the attributes of moving entities, such as current location, speed etc., to dynamically assign them to various groups (for example, predefined by smart city administration), for implementing attributes-based security policies, and also may incorporate user-specific privacy preferences for ensuring relevance of notifications service in constantly changing and mobile smart cars ecosystem. Examples may include a use case of the model as an external authorization engine hooked into the widely used Amazon Web Services (AWS) platform. Vehicular IoT and smart cars involve dynamic communications and data exchange which requires access controls to restrict within authorized entities. Extended ACO Architecture. Embodiments of the present systems and methods may provide an extended IoT architecture for specific vehicular IoT and connected vehicles domain. This extended access control architecture (E-ACO), shown inFIG.1, may include clustered objects (like smart cars and traffic lights) which are objects with multiple individual sensors. Also, these clustered objects may have applications (for example, lane departure or safety warning system in cars) installed on board, which is usually not the case in general IoT realm. As shown inFIG.1, four layered E-ACO may have an Object Layer at the bottom that represents physical clustered objects and sensors along with applications installed on them. 
In-vehicle communication at this layer may be mainly supported by Ethernet and CAN technologies, whereas communication across clustered objects is done using DSRC (used for BSM exchange in V2V communication), Wi-Fi, or LTE etc. It should be noted that each layer in E-ACO architecture interacts within itself and with entities in adjacent layers. Therefore, the object layer may interact with users at the bottom and virtual object layer above it. The Virtual Object Layer acts as an intermediate between cloud services and physical layer, which offers the necessary abstraction by creating cyber entities for physical objects in object layer. In particular in connected vehicles domain, where cars are moving across different terrains where internet connectivity can be an issue, cyber entities may maintain the state of the corresponding physical object as best known and to be updated when connectivity is restored. When two sensors s1and s2across different vehicles interact with each other, the order of communication using virtual objects will follow s1to vs1(virtual entity of s1), vs1to vs2and vs2to physical sensor s2. Cloud Services and Application Layer: As applications may use cloud services, therefore these two layers are discussed together. On-board sensors may generate data which is stored and processed by cloud services, which is used by applications to offer services to end-users. Cyber-entities of physical objects may be created in a cloud layer that provides a persistent state information of objects. The central cloud may incur latency and bandwidth issues in time-sensitive applications, which can be resolved by introducing an edge or fog computing infrastructure. Authorization Requirements in Smart Cars. Smart cars may expose the conventionally isolated car systems to the external environment via the Internet. The dynamic and short-lived real time vehicle-to-vehicle (V2V) and vehicle-to-Internet (V2I) interactions with entities in and around a connected vehicle may ensure message confidentiality and integrity, and protection of on-board resources from adversaries. Multi-Layer and User Privacy Preferences. Broad attack surfaces of connected vehicles may be the first entry point to in-vehicle critical systems. Two-level access control policies may be advantageous to protect the external interface and internal Electronic Control Unit (ECU) communications. Access control for the external environment may protect on-board sensors, applications, and user personal data from unauthorized access by entities including vehicles, applications, masquerading remote mechanics, or other adversaries. Over-the air firmware updates may be checked and may be allowed only from authorized sources. An attacker, even if successful in passing through the first check point, may be restricted at the in-vehicle level, which secures overwrite and control of critical units (engine, brakes, telematics etc.) from adversaries. Vehicles exchange Basic Safety Messages (BSMs), which raises an important question about trust. Information received should be correct and from a trusted party, before being used by on-vehicle applications. Applications may access sensors within and outside the car, which should be authorized. For example, a lane departure warning system accessing tire sensors may be checked to prevent a spoofed application reading vehicle movements. A passenger accessing infotainment (information and entertainment) systems of the car via Bluetooth or using smartphone inside car may also be authorized. 
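As a rough illustration of the virtual object layer described above, the following Python sketch relays a reading from a physical sensor s1 through its cyber twin vs1 to the twin vs2 of a sensor s2 in another vehicle, caching the last known state when the physical object is offline. Class and method names are illustrative assumptions only and do not reproduce the E-ACO implementation.

class PhysicalSensor:
    def __init__(self, name):
        self.name = name
        self.online = True
        self.last_received = None

    def read(self):
        return {"sensor": self.name, "value": 42}      # stand-in measurement

    def deliver(self, message):
        self.last_received = message

class VirtualObject:
    """Cyber twin living in the virtual object layer."""
    def __init__(self, physical):
        self.physical = physical
        self.cached_state = None                        # best known state of the physical twin

    def push_up(self):
        state = self.physical.read()
        self.cached_state = state                       # persists while connectivity is lost
        return state

    def push_down(self, message):
        if self.physical.online:
            self.physical.deliver(message)
        else:
            self.cached_state = message                 # applied once connectivity is restored

def v2v_exchange(src, src_vo, dst_vo, dst):
    msg = src_vo.push_up()          # s1 -> vs1
    dst_vo.push_down(msg)           # vs1 -> vs2 -> s2
    return dst.last_received

s1, s2 = PhysicalSensor("tire-pressure"), PhysicalSensor("lane-departure")
vs1, vs2 = VirtualObject(s1), VirtualObject(s2)
print(v2v_exchange(s1, vs1, vs2, s2))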
Smart cars location-based services enable notifications and alerts to vehicles. A user must be allowed to set his personal preferences whether he wants to receive advertisements or filter out which ones are acceptable. For instance, a user may not want to receive restaurant notifications but is interested in flash-flood warnings. System wide policy, like a speed warning to all over-speeding vehicles or a policy of who can control speed of autonomous car are needed. Data protection in the cloud may be advantageous due to frequent occurrence of data breaches. Big Data access control may be use when user privacy is to be ensured and unauthorized disclosure is not allowed. Cross cloud trust models may be needed to allow data access when a mechanic application in a private cloud reads data in the car-manufacturer cloud. Physical tampering of vehicle on-board diagnostics (OBD) and sensors may also require protection. Relevance of Groups. Many smart car applications and service requests from drivers are location specific and time sensitive. For example, a driver may want to get warning signals when traveling near a blind spot, in a school zone, or of pedestrians crossing the road. Further, notifications sent to drivers may be short-lived and mostly pertinent around current GPS coordinates. A gas discount notification from a nearby station, an accident warning two blocks away, or ice on the bridge, are some example where alerts may be sent to all vehicles in the area. Accordingly, dynamically categorizing connected vehicles into location groups may be helpful for scoping the vehicles to be notified instead of a general broadcast and may reduce administrative overheads, since a single notification for the group may trigger alerts for all the members. Also, entities present at a location may have certain characteristics, such as a stop sign warning, speed limit, deer-threat etc., in common, which can be inherited by being a group member.FIG.2represents how various smart entities may be separated into different location groups defined by appropriate authorities in a smart city system. These groups may be dynamically assigned to connected vehicles based on their attributes, personal preferences, interests, or current GPS coordinates as further described below. Group hierarchies may also exist, as shown inFIG.3, with sub-groups within a larger parent group so as to reduce the number of vehicles to be notified. For instance, under a location group, sub-groups may be created for cars, buses, police vehicles, or ambulances, to enable targeted alerts to ambulances or police vehicle sub-groups defined within the location group. Groups may be defined based on services. For example, a group of cars within the car parent group which take part in a car-pooling (CP) service or those that want to receive gas station offers. Group hierarchy also enables attributes inheritance from parent to child groups. Access Control Model for Connected Vehicles Ecosystem. Dynamic communication and data exchange among entities in connected vehicles ecosystem may utilize multi-layer access control policies, which may be managed centrally and also driven by individual user preferences. Therefore, an access control model may incorporate all such user and system requirements and offer fine-grained authorization solutions. CV-ABACGModel Overview. An exemplary embodiment of a conceptual CV-ABACGmodel is shown inFIG.4with formal definitions summarized in Table 1, shown inFIG.13. 
The basic model may have, for example, the following components: Sources (S), Clustered Objects (CO), Objects in clustered objects (0), Groups (G), Operations (OP), Activities (A), Authorization Policies (POL), and Attributes (ATT). Sources (S): These entities may initiate activities (described below) on various smart objects, groups, and applications in the ecosystem. A source may be, for example, a user, an application, administrator, sensor, hand-held device, clustered object (such as a connected car), or a group defined in the system. For example, in the case of a flash flood warning, the activity source may be the police or a city department triggering an alert to all vehicles in the area. Similarly, a mechanic may be a source when he tries to access data from on-board sensor in the car using a remote cloud based application. Likewise, a restaurant or gas-station issuing coupons may also be considered sources. Clustered Objects (CO): Clustered objects may be relevant in the case of connected vehicles, traffic lights or smart devices held by humans, as they may have multiple sensors and actuators. A smart car with on-board sensors, ECUs, such as tire pressure, lane departure, or engine control, and applications, may be a clustered object. These smart entities may interact and exchange data among themselves and with others, such as a requestor source, applications, or the cloud. An important reason to incorporate clustered objects is to reflect cross-vehicle and intra-vehicle communication. The fact that two smart vehicles may exchange basic safety messages (BSM) with each other shows clustered object communication. Objects in clustered objects (O): These are individual sensors, ECUs, and applications installed in clustered objects. Objects in smart cars may include sensors for the internal state of the vehicle, such as engine diagnostics, emission control, cabin monitoring systems, as well as sensors for external environment, such as cameras, temperature, rain, etc. Control commands may be directly issued to these objects, and data may be read remotely. Applications, such as lane departure warning systems on board may also access data from these objects to provide alerts to a driver or to a remote service provider. Groups (G): A group is a logical collection of clustered objects with similar characteristics or requirements. With these groups, a subset of COs may be sent relevant notifications and also attributes may be assigned to group members. Some groups that may be defined in a smart vehicle ecosystem may include location specific groups, service specific groups, such as car-pooling, gas station promotions etc., or vehicle type, such as a group of cars, buses etc. Group hierarchy (GH) may enable attributes and policies inheritance from parent to children groups. In embodiments, a vehicle or CO may be a direct member of only one group at the same hierarchy level. For example, a car may be in either location A or B group and but not both. Such restrictions may help in managing attributes inheritance and may enhance the usability of the model. Operations (OP): Operations may include actions that may be performed against clustered objects, individual objects, or groups. Examples may include: a mechanic performing read, write, or control operations on engine ECU, a restaurant triggering notifications to vehicles in location A group. 
Operations may also include administrative actions such as creating or updating attributes or policies for COs, objects, and groups, which are usually performed by system/security administrators. Activities (A): Activities may encompass both operational and administrative activities that are performed by various sources in the system. An activity may have one or many atomic operations (OP) involved and may need authorization policies, which can be user privacy preferences, system defined, or both, to allow or deny an activity. For example, a car pooling notification activity generated by a requestor (source) may be broadcast to all relevant vehicles in the locations nearby using location groups. However individual drivers may also receive or respond to that request based on individual preferences. A driver may not want to car-pool with the requestor because of a poor rating or because he is not going to the destination the requestor asked for. Therefore, an activity may involve multiple sets of policies defined at different levels that must be evaluated. In the case of in car-pooling, a policy may be set to determine cars to be notified and then driver personal preferences. These smart car activities may be divided into categories such as: Service Requests: These may be activities initiated by entities or users (via applications). For example, a vehicle break-down may initiate a service request to other vehicles around, or a user using a smartphone may initiate a car-pooling request for a destination to cars which are available for the service. Administration: These activities may perform administrative operations in the system that may include changing policies and attributes of entities or determining the group hierarchy. It may also define the scope of groups, how user privacy preferences are used, or how vehicles are determined to be a member of a group, etc. Notifications: These may be group centric activities where all members may be notified of any updates about the group, such as speed limit or deer threat notifications in location A, or for location-based marketing promotions by parking lots or restaurants. Control and Usage: These activities may include simple read, write, or control operations performed remotely or within a vehicle. Over the air updates issued by manufacturers or turning on the car climate control using a smart key may be remote activities, whereas a passenger accessing the infotainment system using a smartphone and on-board car applications reading the car camera are local. Authorization Policies and Attributes: in embodiments, the CV-ABACGmodel may incorporate individual user privacy controls for different entities by managing authorization policies and entity attributes. As shown inFIG.4, a policy of sources may include personal preferences, whereas attributes may reflect characteristics such as name, age, or gender. Policies may be defined for clustered objects. For example, a USB device may be plugged-in only by a car owner, or only a mechanic may access an onboard sensor. Attributes of a car may include GPS coordinates, speed, heading direction, vehicle size, etc. Groups may also set policies and attributes for themselves. For example, a car pooling group policy may specify who can be member of the group. Similarly, system wide policies may also be considered. For example, a policy to determine which groups will be sent information when a request comes from a source, or policy to change group hierarchy. 
Policies may also include attributes of entities involved in an activity. A CO may inherit attributes from dynamically assigned groups, which may change as the CO leaves an old group and joins a new group. In embodiments, attributes of entities may change more often than system wide or individual policies. Attributes may be more dynamic in nature, and may be added or removed with the movement of vehicles or change in surroundings, such as GPS coordinates or temperature. Policies once set by administrators or users may be more static and only the attributes that comprise the policy may change the outcome of a policy, but the policy definition may remain relatively fixed. For example, a user policy may state ‘Send restaurant notifications only from the Cheesecake factory’. In such a case, only the attribute name of the restaurant sending the notification may be checked, and if it is equal to Cheesecake factory, may be able to advertise to that user. Dynamic policies may also be possible, for example, a policy may state that police vans in locations groups A and B may be notified in case of emergency, but, in case of a bigger threat this policy may be changed or overwritten with police vans in groups A, B C and D. The model may also assume that no policies or attributes are changed during an activity evaluation. Some activities may need multi-level policy evaluation and may include user privacy preferences. For example, a user may be allowed to decide if they want to share data from their car sensors or whether they want to get marketing advertisements. Each activity may evaluate required system and user policies to make a final decision. Formal Definitions. As shown in Table 1, shown inFIG.13, sources, clustered objects, objects, and groups may be directly assigned values from the set of atomic values (denoted by Range(att)) for attribute att in set ATT. Each attribute may be a set or atomic value, determined by the attType function and based on its type. Entities may be assigned a single value including null ( ) for an atomic attribute, or multiple values for set-valued attributes from the attribute range. POL may be the set of authorization policies defined in the system which will be defined below. Clustered objects may be members of different groups, based on preferences and requirements. For example, a car may be assigned to a location group based on its GPS coordinates. In embodiments, it may be assumed that a clustered object may be directly assigned to only one group at the same hierarchy level (specified by the directG function). As described below, since groups inherit attributes from parent groups, assigning a clustered object to one parent group may be sufficient to realize attributes inheritance. Smart cars may have sensors and applications installed in them, which can also be accessed by different sources. Therefore, the parentCO function determines the clustered object to which an object belongs, which is a one to many mapping, that is, an object may only belong to one CO while a CO may have multiple objects. Further, group hierarchy GH (shown as a self-loop on G), may be defined using a partial order relation on G and denoted by ≥g, where g1≥gg2signifies g1is child group of g2and g1inherits all the attributes of g2. Function parentG computes the set of parent groups in hierarchy for a child group. 
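As an illustration of how such a user-preference policy might be written as an authorization function over attributes, consider the following minimal Python sketch; the attribute names and the policy wrapper are assumptions for illustration only and do not reproduce the policy language of Table 1.

def auth_restaurant_notification(source_attrs, target_attrs):
    """Auth function for the 'notify' operation on a driver (target)."""
    allowed_names = target_attrs.get("preferred_restaurants", set())
    return (source_attrs.get("type") == "restaurant"
            and source_attrs.get("name") in allowed_names)

source = {"type": "restaurant", "name": "Cheesecake factory", "location": "A"}
driver = {"preferred_restaurants": {"Cheesecake factory"},
          "receive_flash_flood_alerts": True}

print(auth_restaurant_notification(source, driver))   # True: notification allowed
source["name"] = "OtherDiner"
print(auth_restaurant_notification(source, driver))   # False: filtered out by user preference

Only the attribute values checked by the policy (here, the restaurant name) change the outcome; the policy definition itself stays fixed, as described above.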
A benefit to introducing groups is the ease of administration, where multiple attributes can be assigned or removed from member clustered objects with a single administrative operation. Group hierarchy may enable attributes inheritance from parent to child groups. Therefore, in the case of set valued attributes, the effective attribute att of a group gi(denoted by effGatt(gi)) is the union of directly assigned values for attribute att and the effective values for att for all its parent groups in group hierarchy. This definition is well formed since ≥gis a partial order. For a maximal group gjin this ordering, we have effGatt gj=att(gj), giving base cases for this recursive definition. The effective attribute values of clustered object for attribute att (stated as effCOatt) will then be the directly assigned values for att and the effective attribute values of att for the group to which CO is directly assigned (by directG). Similarly, in addition to direct attributes, sensors in car may inherit attributes from the car itself, such as make, model, location, etc., effOattcalculates these effective attributes of objects. For set valued attributes, union operation may be sufficient, which may not be true for atomic attributes. In the case of groups, the most recently updated non-null attribute values in parent groups may overwrite the values of child groups as defined in Table 1. For example, if the most recent value updated in one of the parent groups for Deer_Threat attribute is ‘ON’, this value may trickle to the child group. It should be noted that overwriting with the most recently updated value in groups is one of the many approaches to inherit atomic attributes, but for the dynamic nature of smart cars ecosystem, this approach may be advantageous. Clustered objects may inherit non-null atomic values from its direct parent group as stated by effCOatt(co)=effGatt(directG (co)). In the case of objects, parent clustered object will overwrite non-null atomic attributes. For atomic attributes, if the parent(s) have null value for an attribute, the entity (group, clustered object, or object) may retain its directly assigned value without any overwrite. Authorization functions may be defined for each operation op∈OP, which are policies defined in the system. POL is the set of all authorization functions, Authop(s: S, ob: CO∪O∪G), which specify the conditions under which source s∈S can execute operation op∈OP on object ob∈CO∪O∪G. Such policies may include privacy preferences set by users for individual clustered objects, objects, and groups or may be system wide by security administrators. The conditions may be specified as propositional logic formula using policy language defined in Table 1. Multiple policies may be satisfied before an activity is allowed to perform. Authorization function, Authorization (a: A, s: S), where an activity a∈A is allowed by source s∈S, specifies the system level, user privacy policies or other relevant policies returning true for an activity to succeed. CV-ABACGis an attribute-based access control model which satisfies fine-grained authorization needs of dynamic, location oriented and time sensitive services and applications in cloud assisted smart cars ecosystem. The model may support personalized privacy controls by utilizing individual user policies and attributes, along with dynamic groups assignment. In embodiments, the model assumes that the information and attributes shared by source and object entities are trusted. 
For example, the model may assume that location coordinates sent by a car are correct, and may use this shared information to make access and notification decisions. CV-ABACGEnforcement in AWS. In embodiments, the CV-ABACGmodel may be used to enforce a use case of smart cars using, for example, the AWS IoT service. This example may demonstrate how dynamic groups assignment and multi-layer authorization policies in connected vehicle ecosystem may be realized in AWS. Simulations may be used to reflect real connected smart vehicles. In embodiments, no long term vehicle data including real-time GPS coordinates are collected in a central cloud, which mitigates user privacy concerns and encourages wide adoption of the model. Description of Use Cases. Location based alerts and notifications may be used in smart car applications and motivate the use case examples. An example of a defined group hierarchy in AWS is shown inFIG.5. The implementation may enforce access controls and service notification relevance in use cases such as: Deer Threat Notification—Smart infrastructure in the city may sense the surrounding environment and notify group(s) regarding the change. In this use case, a motion sensor may sense deer in the area and change Deer_Threat attribute of location group to ON, which in-turn sends alerts to all member vehicles in that location. Similarly, implementation may be done in case of accident notification, speed limit warning, or location based marketing. Car-Pooling—A traveler needs a ride to Location-A. Using a mobile application, they send car-pooling requests to vehicles in the vicinity that are heading to the destination location requested by the traveler. The request is received by AWS cloud, which computes location and appropriate groups based on the coordinates of the requester, to publish notifications to nearby cars. All the members of the group Car-A, B, C or D can get the request, but some cars may not want to be part of car-pooling, or do not want some requestors to join them because of ratings. User policies may be also checked before a driver is notified of a likely car-pool customer. Prototype Implementation. In embodiments, an exemplary AWS implementation of the model in these use-case examples may involve two phases: the administrative phase and the operational phase. The administrative phase involves creation of groups hierarchy, dynamic assignment of moving vehicles to different location and sub-groups, attributes inheritance from parent to child groups and to group members, and attributes modification of entities. The operational phase covers how groups are used to scope down the number of vehicles who receive messages or notifications from different sources. Both phases involve multi-layer access control polices. An ABAC policy decision (PDP) and enforcement point (PEP) were created, and an external policy evaluation engine was implemented, which was hooked with AWS to enable attribute-based authorization. Administrative Phase: A group hierarchy was created in AWS as shown inFIG.5. In this hierarchy, County-XYZ is divided into four disjoint Location-A, B, C and D groups, with each having Car and Bus subgroups for vehicle type car or bus. Ten vehicles were created and their movements were simulated using a python script, which publishes MQTT messages to shadows of these vehicles with current GPS coordinates (generated using Google API) iterated over dots shown inFIG.6. 
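The following sketch suggests how such a simulated vehicle could report coordinates to its device shadow using Boto, the AWS SDK for Python. Thing names and coordinates are illustrative, credential and endpoint configuration are omitted, and the exact call shapes should be checked against the Boto documentation.

import json
import boto3

iot_data = boto3.client("iot-data")   # assumes AWS credentials/region are already configured

def report_position(thing_name, latitude, longitude):
    payload = {"state": {"reported": {"Latitude": str(latitude),
                                      "Longitude": str(longitude)}}}
    # Equivalent to publishing to $aws/things/<thing_name>/shadow/update
    iot_data.update_thing_shadow(thingName=thing_name,
                                 payload=json.dumps(payload).encode("utf-8"))

# Replay a simulated trajectory for Vehicle-1
trajectory = [(29.4769353, -98.5018237), (29.4760000, -98.5030000)]
for lat, lon in trajectory:
    report_position("Vehicle-1", lat, lon)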
The area was demarcated into four locations and a moving vehicle belongs to a subgroup in one of these groups. Assuming a current location of Vehicle-1 as Location-D, and it publishes MQTT message with payload: {“state”: {“reported”: {“Latitude”: “29.4769353”, “Longitude”:“−98.5018237”}}} to AWS topic: $aws/things/Vehicle-1/shadow/update, its new location changes to Location-A and since the vehicle type was defined as car, it is assigned to Car-A group under Location-A as shown by the code snippet shown inFIG.7. Both attributes, vehicle type and current coordinates of vehicle, may be used to dynamically assign groups, which is important in moving smart vehicles. These functionalities may be implemented as a standalone service (can be enforced as a Lambda service [6] function) using Boto, which is the AWS SDK for Python. Further, in the case of the deer threat notification use-case example, a location-sensor was simulated that senses deer in the area and updates the attribute ‘Deer_Threat’ of location group to ‘ON’ or ‘OFF’. This is then notified to all members of location and its subgroups. An attribute-based policy was defined to control which sensors can change the ‘Deer_Threat’ attribute of location groups. As shown inFIG.8, the policy for Deer_Threat operation checks that a motion sensor with ID=‘1’ and current groups of Location-A can update the attribute Deer_Threat for group Location-A, and if the sensor is relocated to Location-B, it can update the attribute for Location-B group only. This policy ensures that the sensor must be in that location group for which it is updating Deer Threat attribute. The complete sequence of events performed in AWS along with the stand-alone service for the administrative phase is shown inFIG.9. A moving vehicle updates its coordinates to AWS shadow service, which, along with attributes of vehicles and location groups, determines if the vehicle can be a member of the group using the external enforcement service. If the authorization policy allows a vehicle to be a member of group, the vehicle and group is notified and the vehicle inherits all attributes of its newly assigned group. Similarly, if attribute ‘Deer_Threat’ of a group is allowed (by the authorization policy) to be changed by the location sensor, the new values are propagated to all its members. Attribute inheritance was implemented from parent to child groups through the service using the update thing_group and the update thing methods. In the use-case example attributes inheritance exists from Location-A to both subgroups Car-A and Bus-A, and to vehicles in Car-A and Bus-A. Therefore, when attribute ‘Deer_Threat’ is set to ON in group Location-A, its new attributes using Boto describe thing_group command are: {‘Center-Latitude’: ‘29.4745’, ‘Center-Longitude’: ‘−98.503’, ‘Deer_Threat’: ‘ON’} This inherits the attributes to Car-A child group whose effective attributes will now be: {‘Center-Latitude’: ‘29.4745’, ‘Center-Longitude’: ‘−98.503’, ‘Deer_Threat’: ‘ON’, ‘Location’: ‘A’} As shown inFIG.7, both Vehicle-1 and Vehicle-2 as member of Car-A, the effective attributes of Vehicle-2 are: {‘Center-Latitude’: ‘29.4745’, ‘Center-Longitude’: ‘−98.503’, ‘Deer_Threat’: ‘ON’, ‘Location’: ‘A’, ‘Type’: ‘Car’, ‘VIN’: ‘9246572903752’, ‘thingName’: ‘Vehicle-2’} Operational Phase: In this phase, attribute-based policies are used to restrict service and notification activities which may require single or multi-level policies along with user preferences. 
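Before turning to the operational phase, the administrative-phase service described above can be sketched as follows: it maps reported coordinates to a location group, moves the thing between sub-groups so that it inherits the attributes of its new group, and updates group attributes such as Deer_Threat. Group names, area boundaries, and the helper structure are assumptions for illustration; the Boto calls shown (add_thing_to_thing_group, remove_thing_from_thing_group, update_thing_group) and their attribute-merge behavior should be checked against the current SDK documentation.

import boto3

iot = boto3.client("iot")

# Hypothetical bounding boxes (lat_min, lat_max, lon_min, lon_max) for the four areas.
LOCATION_BOXES = {
    "Location-A": (29.470, 29.480, -98.505, -98.495),
    "Location-B": (29.460, 29.470, -98.505, -98.495),
    "Location-C": (29.470, 29.480, -98.515, -98.505),
    "Location-D": (29.460, 29.470, -98.515, -98.505),
}

def location_group_for(lat, lon):
    for group, (lat0, lat1, lon0, lon1) in LOCATION_BOXES.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return group
    return None

def reassign_vehicle(thing_name, vehicle_type, lat, lon, current_subgroup):
    location = location_group_for(lat, lon)
    if location is None:
        return current_subgroup
    # Sub-group such as "Car-A" or "Bus-A" under the location group.
    new_subgroup = vehicle_type + "-" + location[-1]
    if new_subgroup != current_subgroup:
        if current_subgroup:
            iot.remove_thing_from_thing_group(thingGroupName=current_subgroup,
                                              thingName=thing_name)
        iot.add_thing_to_thing_group(thingGroupName=new_subgroup,
                                     thingName=thing_name)
    return new_subgroup

def set_group_attribute(group_name, key, value):
    """E.g. flip Deer_Threat to ON for Location-A; members then inherit it."""
    # Whether existing group attributes are merged or replaced should be
    # verified in the Boto documentation for update_thing_group.
    iot.update_thing_group(
        thingGroupName=group_name,
        thingGroupProperties={"attributePayload": {"attributes": {key: value}}})

new_group = reassign_vehicle("Vehicle-1", "Car", 29.4769353, -98.5018237, "Car-D")
set_group_attribute("Location-A", "Deer_Threat", "ON")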
In car-pooling use case, policies were defined to restrict notifications to only a subset of relevant vehicles in specific locations. A requestor in AWS needing a car-pool was simulated. It has attribute ‘destination’ with value in Location-A, B, C or D. The requestor sends current and destination location as MQTT message to AWS topic $aws/things/Requestor/shadow/update, which, based on these attributes, determines subgroups to which service requests are sent. {“state”: {“reported”: {“policy”: “car_pool_notification”, “source”: “Location-A”, “destination”: “Location-B”}}} The policy for carpool notification operation (shown inFIG.8) suggests that if the current location of source requestor is location-A′ and the destination location is somewhere in ‘Location-A’, then all members of sub-group Car-A should be notified. Similarly, if the destination attribute is Location-B, then all members of Car-A, Car-B and Car-C will be notified. In the use-case example, all members of these sub-groups are notified. The policy restricts the number of vehicles which will be requested as compared to all vehicles getting irrelevant notification (as they are far from the requestor or are not vehicle type car) and illustrates the advantages of a location-centric smart car ecosystem. Similarly, location-based marketing may be restricted and policies may be defined to control such notifications. User privacy policies may take effect once the subset of vehicles is calculated. These policies may encapsulate user preferences. For example, in carpooling, a particular driver is not going to the destination requested by the requestor in his request or a driver does not want restaurant advertisements, therefore such notifications will not be displayed on the car dashboard. These local policies may be implemented using AWS Greengrass, which allows running of local lambda functions on the device (in this case a connected vehicle) to enable edge computing facility, an important advantage in real-time smart car applications and to enforce privacy policies. Once accepted by drivers, an AWS Simple Notification Service (SNS) message may be triggered for the requestor from accepting vehicles along with name and vehicle number. The sequence of events for car-pooling activity and multi-layer authorization policies together with user personal preferences is shown inFIG.10. An external service to implement ABAC policy decisions and evaluation may provide fine-grained authorization in smart cars ecosystems. The example also demonstrates dynamic groups assignment based on mobile vehicle GPS coordinates and attributes, along with groups based attributes inheritance, which offer administrative benefits in enforcing an ABAC model. In this example, no persistent data from moving vehicles is collected or stored by the central authority hosted cloud, which reaffirms its privacy preserving benefits. Note that the use-case examples described to enforce CV-ABACGare not real-time and can bear some latency due to the use of cloud infrastructure. Although CV-ABACGenforcement in AWS reflects its use for cloud based applications, similar models may also be implemented in edge (or fog) systems as well to cater to more real-time use-cases. Performance Evaluation. The performance of embodiments of the CV-ABACGmodel in AWS was evaluated and different metrics were provided when no policies were used against the implemented ABAC policies for the car-pooling notification use-case example. 
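The two policy levels in the car-pooling activity can be sketched as below: a system-level scoping policy selects the sub-groups to notify from the requestor's source and destination attributes, and a driver-local preference check of the kind that could run as a local function at the edge decides whether the request is shown. The group mapping, rating threshold, and attribute names are illustrative assumptions rather than the exact policies of FIG. 8.

CARPOOL_SCOPE = {
    # (source location, destination location) -> sub-groups to notify
    ("Location-A", "Location-A"): ["Car-A"],
    ("Location-A", "Location-B"): ["Car-A", "Car-B"],
}

def carpool_groups_to_notify(source_location, destination):
    return CARPOOL_SCOPE.get((source_location, destination), [])

def driver_accepts(driver_prefs, request):
    """User privacy preference check evaluated per driver before display."""
    if not driver_prefs.get("carpool_enabled", False):
        return False
    if request["destination"] not in driver_prefs.get("destinations", set()):
        return False
    return request.get("requestor_rating", 0) >= driver_prefs.get("min_rating", 0)

request = {"source": "Location-A", "destination": "Location-B", "requestor_rating": 4.6}
print(carpool_groups_to_notify(request["source"], request["destination"]))  # ['Car-A', 'Car-B']

driver = {"carpool_enabled": True, "destinations": {"Location-B"}, "min_rating": 4.0}
print(driver_accepts(driver, request))   # True: request shown on this car's dashboard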
As shown inFIG.11, the external policy evaluation engine has average time (in milliseconds) to decide on car-pooling service requests and provide the subset of cars which are notified. This scoping ensures the service relevance, as without a policy, all 5 vehicles were sent car-pool requests (even when one was 20 miles away from the requestor), whereas with attribute based policies, only nearby cars are notified. The performance graph shown inFIG.12compares no policy execution time (bottom line) against implemented ABAC policy (top line). Since, in the experiments, the policy (shown inFIG.8) evaluated for each access request was the same, a linear graph results, as the number of access requests increase the number of times the policy is evaluated and so its total evaluation time. Some variation in the bottom line occurs because of the network latency time to access the AWS cloud, although this can change based on the communication technologies used by vehicles including 3G, LTE, cellular or dedicated short-range communications (DSRC). The external policy engine does have some impact on the performance against no policy when used with a number of vehicles. However, when used in city wide scenario, this time will be overshadowed by the notification time to all vehicles against a subset of vehicles provided by the policy evaluation engine. The model and the use-case example is focused to ensure service relevance to moving drivers on road which is well achieved even with a little tradeoff. Embodiments may provide a fine-grained attribute-based access control model for time-sensitive and location-centric smart cars ecosystem. The model may provide dynamic groups in relation to connected vehicles and may emphasize the relevance in this context. Besides considering system wide authorization policies, the model may also support personal preference policies for different users, which is advantageous in today's privacy conscious world. Several real world use-case examples and a proof of concept implementation of the CV-ABACGmodel demonstrates how moving vehicles may be dynamically assigned to location and sub-groups defined in the system based on the current GPS coordinates, vehicle-type, and other attributes, besides the use of attribute based security policies in distributed and mobile connected cars ecosystem. Further, location privacy preserving approaches such as homomorphic encryption and other anonymity techniques may be used to complement and extend the model which can mitigate location sharing concerns without affecting its advantages and application. An exemplary block diagram of a computer system1400, in which processes involved in the embodiments described herein may be implemented, is shown inFIG.14. Computer system1400may be implemented using one or more programmed general-purpose computer systems, such as embedded processors, systems on a chip, personal computers, workstations, server systems, and minicomputers or mainframe computers, or in distributed, networked computing environments. Computer system1400may include one or more processors (CPUs)1402A-1402N, input/output circuitry1404, network adapter1406, and memory1408. CPUs1402A-1402N execute program instructions in order to carry out the functions of the present communications systems and methods. 
Typically, CPUs1402A-1402N are one or more microprocessors, such as an INTEL CORE® processor.FIG.14illustrates an embodiment in which computer system1400is implemented as a single multi-processor computer system, in which multiple processors1402A-1402N share system resources, such as memory1408, input/output circuitry1404, and network adapter1406. However, the present communications systems and methods also include embodiments in which computer system1400is implemented as a plurality of networked computer systems, which may be single-processor computer systems, multi-processor computer systems, or a mix thereof. Input/output circuitry1404provides the capability to input data to, or output data from, computer system1400. For example, input/output circuitry may include input devices, such as keyboards, mice, touchpads, trackballs, scanners, analog to digital converters, etc., output devices, such as video adapters, monitors, printers, etc., and input/output devices, such as, modems, etc. Network adapter1406interfaces device1400with a network1410. Network1410may be any public or proprietary LAN or WAN, including, but not limited to the Internet. Memory1408stores program instructions that are executed by, and data that are used and processed by, CPU1402to perform the functions of computer system1400. Memory1408may include, for example, electronic memory devices, such as random-access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), flash memory, etc., and electro-mechanical memory, such as magnetic disk drives, tape drives, optical disk drives, etc., which may use an integrated drive electronics (IDE) interface, or a variation or enhancement thereof, such as enhanced IDE (EIDE) or ultra-direct memory access (UDMA), or a small computer system interface (SCSI) based interface, or a variation or enhancement thereof, such as fast-SCSI, wide-SCSI, fast and wide-SCSI, etc., or Serial Advanced Technology Attachment (SATA), or a variation or enhancement thereof, or a fiber channel-arbitrated loop (FC-AL) interface. The contents of memory1408may vary depending upon the function that computer system1400is programmed to perform. In the example shown inFIG.14, exemplary memory contents are shown representing routines and data for embodiments of the processes described above. However, one of skill in the art would recognize that these routines, along with the memory contents related to those routines, may not be included on one system or device, but rather may be distributed among a plurality of systems or devices, based on well-known engineering considerations. The present communications systems and methods may include any and all such arrangements. In embodiments, at least a portion of the software shown inFIG.14may be implemented on a current leader server. Likewise, in embodiments, at least a portion of the software shown inFIG.14may be implemented on a computer system other than the current leader server. In the example shown inFIG.14, memory1408may include application layer1412, cloud services layer1414, virtual object layer1416, object layer1418, and operating system1420. Application layer1412may include software routines and data to provide application services, as described above. Cloud services layer1414may include software routines and data to provide cloud services, as described above. 
Virtual object layer1416may include software routines and data to provide virtual object services to act as an intermediate between cloud services and the physical layer, which offers abstraction by creating cyber entities for physical objects in object layer1418, as described above. Virtual object layer1416may include clustered objects1422and objects1424. Clustered objects1422may include software routines and data to provide operation and applications of objects with multiple individual sensors in virtual object layer1416, as described above. Objects1424may include software routines and data to provide operation and applications of objects in virtual object layer1416, as described above. Object layer1418may include software routines and data to provide object services to represent physical clustered objects and sensors along with applications installed on them. Object layer1418may include clustered objects1426, objects1428, and apps1430. Clustered objects1426may include software routines and data to provide operation of objects1428with multiple individual sensors in object layer1418, as described above. Objects1428may include software routines and data to provide operation devices, such as sensors, in object layer1418, as described above. Apps1430may include software routines and data to provide applications of objects1428and clustered objects1426in object layer1418. Operating system1434may provide overall system functionality. As shown inFIG.14, the present communications systems and methods may include implementation on a system or systems that provide multi-processor, multi-tasking, multi-process, and/or multi-thread computing, as well as implementation on systems that provide only single processor, single thread computing. Multi-processor computing involves performing computing using more than one processor. Multi-tasking computing involves performing computing using more than one operating system task. A task is an operating system concept that refers to the combination of a program being executed and bookkeeping information used by the operating system. Whenever a program is executed, the operating system creates a new task for it. The task is like an envelope for the program in that it identifies the program with a task number and attaches other bookkeeping information to it. Many operating systems, including Linux, UNIX®, OS/2®, and Windows®, are capable of running many tasks at the same time and are called multitasking operating systems. Multi-tasking is the ability of an operating system to execute more than one executable at the same time. Each executable is running in its own address space, meaning that the executables have no way to share any of their memory. This has advantages, because it is impossible for any program to damage the execution of any of the other programs running on the system. However, the programs have no way to exchange any information except through the operating system (or by reading files stored on the file system). Multi-process computing is similar to multi-tasking computing, as the terms task and process are often used interchangeably, although some operating systems make a distinction between the two. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. 
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. 
Although specific embodiments of the present invention have been described, it will be understood by those of skill in the art that there are other embodiments that are equivalent to the described embodiments. Accordingly, it is to be understood that the invention is not to be limited by the specific illustrated embodiments, but only by the scope of the appended claims. | 55,817 |
11858518 | DETAILED DESCRIPTION Some exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings. In the following description, like reference numerals preferably designate like elements, although the elements are shown in different drawings. Further, in the following description of some embodiments, a detailed description of known functions and configurations incorporated herein will be omitted for the purpose of clarity and for brevity. Additionally, alphanumeric code such as first, second, i), ii), (a), (b), etc., in numbering components are used solely for the purpose of differentiating one component from the other but not to imply or suggest the substances, the order or sequence of the components. Throughout this specification, when a part “includes” or “comprises” a component, the part is meant to further include other components, not excluding thereof unless there is an explicit description contrary thereto. FIG.1is a block diagram of a radar device according to an embodiment of the present disclosure. Referring toFIG.1, a radar device100includes all or some of a sensor unit110, a radar control unit120, and a display unit130. The sensor unit110includes an illuminance sensor112, a rain sensor114, an image sensor116, and a radar sensor118. The plurality of sensors included in the sensor unit110measures driving environment information, identification information, and the like. Here, the driving environment information may be illuminance, a falling speed of rainwater, an amount of rainwater, and the like. On the other hand, identification information refers to information on objects around a vehicle, such as lanes, vehicles, and roads, when the vehicle is driving. The illuminance sensor112is a sensor for detecting illuminance. Here, the illuminance means a density of a light flux measured based on sensitivity of an eye. More specifically, the illuminance refers to the amount of light received by a unit area for a unit time. Although various photocells may be used as the illuminance sensor112, a phototube or the like is also used for measuring very low illuminance. The illuminance sensor112according to the embodiment of the present disclosure transmits illuminance information generated by sensing illuminance to the radar control unit120. The radar control unit120determines whether a driving environment is day or night using the received illuminance information. The rain sensor114is a sensor that detects a falling amount and a falling speed of rainwater falling on a vehicle. The rain sensor114is applied to reduce the possibility of an accident or inconvenience in driving that may occur when a driver turns his/her eyes or makes an unnecessary operation in order to control whether or not the driver operates a wiper or an operation speed of the wiper while driving. When rainwater falls on a windshield of a vehicle, the rain sensor114detects the amount and falling speed of rainwater using infrared rays by a sensor installed in an upper center of the windshield. The rain sensor114transmits rain information generated using the falling speed or the like of rainwater to the radar control unit120. The radar control unit120determines whether the driving environment is a rainy state using the received rain information. A plurality of image sensors116may be disposed on front, rear, and left and right surfaces of a vehicle. 
The image sensor116may be disposed in a vehicle to be utilized for various functions, such as a blind spot detection function, an emergency collision prevention function, and a parking collision prevention function. The image sensor116transmits image information generated by photographing views in an outward direction of the vehicle, that is, spatial information, to the radar control unit120. A plurality of radar sensors118may be disposed on the front, rear, and left and right surfaces of the vehicle. The radar sensor118may be disposed in a vehicle to be utilized for various functions, such as a forward vehicle tracking function, a blind spot detection function, an emergency collision prevention function, and a parking collision prevention function. The radar sensor118may include a plurality of transmitting units and a plurality of receiving units. The radar signals transmitted from the transmitting units of one or more radar sensors118may be received by all of the plurality of receiving units. That is, the radar sensor118may have a radar structure of a multi-input multi-output (MIMO) system. Each radar sensor118may have a field of view (FOV) of 118° to 160°. The radar sensor118may be an ultra wide band (UWB) radar sensor that transmits and receives a radar signal in an ultra wide frequency band, or a frequency modulated continuous-wave (FMCW) radar sensor that transmits and receives a radar signal including a modulated frequency signal. The radar sensor118may adjust a detection range by adjusting an output value of a transmission signal using an amplifier (not illustrated) disposed therein. The radar control unit120includes a driving environment determination unit122and a target information determination unit124. The radar control unit120obtains an input signal for control from a driver and determines whether to start a control operation. When the input signal for control is received, the radar control unit120may determine whether to start a control operation based on the input signal. The input signal may be any signal for starting a control operation. For example, the input signal may be a button input signal or a touchpad input signal for activating a control function. In addition, the input signal is not limited thereto and includes any signal for starting control. In addition, the radar control unit120also determines whether the operation of the radar sensor118is started. The driving environment determination unit122determines the driving environment using a signal transmitted by the sensor unit110, for example, driving environment information. For example, whether the driving environment is a day state or a night state is determined using illuminance information, or whether the driving environment is a rainy state is determined using the rain information. The operation of determining the rainy state and the day or night state will be described in more detail with reference toFIG.3. The radar control unit120determines which of the image information measured by the image sensor116and the radar information measured by the radar sensor118to use first according to the driving environment determined by the driving environment determination unit122. For example, when the image sensor116is able to detect and distinguish lanes with high accuracy by clearly capturing the surroundings of the vehicle, the image sensor116uses both the image information and the radar information and uses the radar information when it is not possible to accurately distinguish lanes, roads, or the like. 
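A minimal sketch of this selection logic, with assumed threshold values and field names, might look as follows; it is illustrative only and does not reproduce the determination described with reference to FIG. 3.

LUX_NIGHT_THRESHOLD = 50.0        # below this illuminance -> night (assumed value)
RAIN_RATE_THRESHOLD = 0.1         # rainfall rate above this -> rainy (assumed value)

def determine_environment(illuminance_lux, rain_rate):
    return {"night": illuminance_lux < LUX_NIGHT_THRESHOLD,
            "rainy": rain_rate > RAIN_RATE_THRESHOLD}

def select_sources(env):
    # The camera cannot reliably distinguish lanes and roads at night or in rain,
    # so fall back to radar information only in those conditions.
    if env["night"] or env["rainy"]:
        return ["radar"]
    return ["image", "radar"]

env = determine_environment(illuminance_lux=12.0, rain_rate=0.4)
print(select_sources(env))        # ['radar'] for a rainy night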
In the detailed description of the present disclosure, when the image sensor116cannot accurately distinguish lanes, roads, or the like, it means, for example, the driving environment is a rainy state and a night state, and when the image sensor116is able to accurately distinguish lanes and roads, it means, for example, the driving environment is a day state. The operation of determining the night state and the rainy state will be described in detail with reference toFIG.3. The target information determination unit124includes a signal processing unit124athat removes clutter included in a signal received by the radar sensor118and a target information calculation unit124bthat calculates target information based on the signal processed by the signal processing unit124a. Here, the clutter means unwanted noise that is reflected and received from an object that is not a radar target. The radar control unit120may control the radar sensor118to change a frequency bandwidth of the radar signal transmitted by the radar sensor118, a detection angle of the radar sensor118, or a detection range of the radar sensor118. For example, when the control operation is started, the radar control unit120may control the radar sensor118electronically or mechanically to change at least one of the detection angle and the detection range detected by the radar sensor118. The signal processing unit124aremoves signal components except for a frequency band corresponding to the radar signal transmitted by the radar sensor118using a band-pass filter. A moving average method may be used to remove the clutter. Here, the moving average method is a method of knowing the overall trend by using an average value of data on a time series in a certain period. A filtering method for the signal processing unit124ato remove the clutter is not limited thereto, and those skilled in the art may add, remove, or change other types of filters. Also, the signal processing unit124amay increase the resolution of the display unit130by extending the frequency bandwidth of the transmitted radar signal. The target information determination unit124receives the signal processed by the signal processing unit124aand determines the target. An operation of determining, by the target information determination unit124, the target and transmitting information on the determined target to the radar control unit120will be described in more detail. When the radar sensor118detects the electromagnetic waves reflected from the target and transmits the detected electromagnetic waves to the radar control unit120, the target information determination unit124determines the information on the target and a distance, azimuth, or the like from the vehicle according to the intensity of the electromagnetic waves. Among a first reference value, a second reference value, and a third reference value, the first reference value is the smallest, the second reference value is second smallest, and the third reference value is the largest. For each reference value according to an embodiment of the present disclosure, the first reference value may be −100 dBsm, the second reference value may be −10 dBsm, and the third reference value may be 10 dBsm. For example, when the intensity of the reflected electromagnetic wave is less than or equal to the first reference value or the electromagnetic wave is not reflected, the target information determination unit124determines that the target is a lane. 
On the other hand, when the intensity of the reflected electromagnetic wave exceeds the first reference value and is equal to or less than the second reference value, the target information determination unit124determines that the target is a road. Meanwhile, when the intensity of the reflected electromagnetic wave exceeds the second reference value and is equal to or less than the third reference value, the target information determination unit124determines that the target is a pedestrian or an animal. Finally, when the intensity of the reflected electromagnetic wave exceeds the third reference value, the target information determination unit124determines that the target is a vehicle or a roadside facility. The reason why the target information determination unit124determines that the target is a lane will be described in more detail. When a lane or a crosswalk sign of a general road is coated with an electromagnetic wave absorbing material, even when the radar sensor118of the vehicle transmits a radar signal, the transmitted radar signal, for example, the electromagnetic wave, is absorbed by the lane coated with the absorbing material, and the transmitted radar signal is not reflected or the intensity of the electromagnetic wave is insignificant even when the transmitted radar signal is reflected. Accordingly, the target information determination unit124may determine that the target is a lane. Meanwhile, the target information determination unit124may determine a distance and an azimuth between the vehicle and the target by using the reflected radar signal. Conventional techniques related to the calculation of the distance and azimuth are obvious to those skilled in the art, and thus, illustrations and descriptions thereof will be omitted. After the target information determination unit124determines the target, the determination result, for example, the target information, is transmitted to the radar control unit120. The radar control unit120transmits image information to the display unit130so that the target information is displayed to the driver as dots and lines through the display unit130. The display unit130displays the image signal received from the radar control unit120through the display screen. FIG.2is a flowchart illustrating an operation of determining, by a radar control unit, target information according to an embodiment of the present disclosure. In the detailed description of the present disclosure, dBsm denotes decibels relative to a radar cross section of one square meter. For example, a radar cross section of X square meters corresponds to 10·log10(X) dBsm. Meanwhile, in the detailed description of the present disclosure, a radar cross section (RCS) is a measure indicating how well an object is reflected by radar, expressed as an area. The radar control unit120may measure the RCS to determine how large a target captured by the radar appears as an object on the radar signal. The radar control unit120according to the embodiment of the present disclosure determines whether the object is one of a vehicle, a pedestrian, a road, or a lane according to the size of the RCS reflected from the object. Describing this in more detail, the radar control unit120determines whether to start radar control. Whether to start the control may be determined by acquiring an input signal for control from a driver and determining whether to start a radar control operation based on the input signal. For example, when the driver inputs a control start signal using the radar information or the image information, the radar control unit120starts the operation of the radar sensor118(S210).
The radar control unit120determines whether the RCS value measured by the radar sensor is greater than the first reference value (S220). Here, the first reference value may be, for example, −100 dBsm. When the RCS measurement value is less than or equal to the first reference value, the radar control unit120determines that the target is a lane (S222). When the RCS measurement value exceeds the first reference value, the radar control unit120determines whether the RCS value measured by the radar sensor is greater than the second reference value (S230). Here, the second reference value may be, for example, −10 dBsm. When the RCS measurement value is equal to or less than the second reference value and exceeds the first reference value, the radar control unit120determines that the target is a road. Meanwhile, when the measurement value exceeds the second reference value, the radar control unit120determines whether the RCS value measured by the radar sensor is greater than the third reference value (S240). When the RCS measurement value is equal to or less than the third reference value and exceeds the second reference value, the radar control unit120determines that the target is a pedestrian or an animal. Meanwhile, when the RCS measurement value exceeds the third reference value, the radar control unit120determines that the target is a vehicle or a roadside facility (S244). When the radar control unit120completes object determination, the present algorithm ends. FIG.3is a flowchart illustrating an operation of determining, by a radar control unit, a driving environment according to an embodiment of the present disclosure. Referring toFIG.3, the radar control unit120according to the embodiment of the present disclosure determines driving environment information using illuminance information or rain information. The radar control unit120determines whether to start the control. Whether to start the control may be determined by acquiring an input signal for control from a driver and determining whether to start a control operation based on the input signal. For example, when the driver inputs the control start signal using the radar information or the image information, the radar control unit120starts the operation of the image sensor116and the radar sensor118(S310). When the control is started, the illuminance sensor112detects the illuminance outside the vehicle and transmits the generated illuminance information to the radar control unit120. The radar control unit120performs an operation of determining whether an illuminance value in the illuminance information generated by the illuminance sensor112exceeds an illuminance reference value (S320). When the illuminance value is less than or equal to the illuminance reference value, the radar control unit120determines that the outside is a night state (S322). When it is determined that the outside is the night state, the radar control unit120controls the display unit130to display views in an outward direction of the vehicle to the driver by preferentially using the radar information over the image information (S324). The reason why is when the outside is the night state, it is difficult for the image sensor116to recognize a lane and a road with high accuracy, and thus, the accuracy in determining the driving environment information is deteriorated. 
Therefore, in the operation S324, since the control unit determines that the external state is the night state, the information measured by the radar sensor118is preferentially used over the image information measured by the image sensor116. Meanwhile, when the illuminance value exceeds the illuminance reference value, the radar control unit120performs an operation of determining whether the rain information measured by the rain sensor114, for example, the falling speed of rainwater or the amount of rainwater, exceeds the rain reference value (S330). When the amount or the falling speed of rainwater exceeds the rain reference value, the radar control unit120determines that the outside is the rainy state (S332). When it is determined that the outside is the rainy state, the radar control unit120preferentially uses the radar information over the image information (S334). This is because, when the outside is in the rainy state, it is difficult for the image sensor116to clearly recognize a lane and a road, and thus, the accuracy in determining the driving environment information is deteriorated. Therefore, in the operation S334, since the control unit determines that the external state is the rainy state, the information measured by the radar sensor118is preferentially used over the image information measured by the image sensor116. Meanwhile, in the case of determining that the illuminance value exceeds the illuminance reference value, when it is determined that the amount or falling speed of rainwater is less than or equal to the rain reference value, it is determined that the outside is a day state without rain (S340). When it is determined that the outside is the day state, the radar control unit120uses both the image information and the radar information to control the display unit130to display views in an outward direction of the vehicle to the driver. In this case, the image information is used preferentially, and the radar information is also used to increase accuracy (S342). This is because, when the outside is in the night state or in rainy weather, the image sensor may not clearly recognize the lane and the road, and thus the accuracy in determining the driving environment information is deteriorated, but when it is daytime, the image sensor116may accurately recognize the road and the lane, and thus the image information measured by the image sensor is preferentially used in operation S342. Although exemplary embodiments of the present disclosure have been described for illustrative purposes, those skilled in the art will appreciate that various modifications, additions, and substitutions are possible, without departing from the idea and scope of the claimed invention. Therefore, exemplary embodiments of the present disclosure have been described for the sake of brevity and clarity. The scope of the technical idea of the present embodiments is not limited by the illustrations. Accordingly, one of ordinary skill would understand the scope of the claimed invention is not to be limited by the above explicitly described embodiments but by the claims and equivalents thereof. REFERENCE NUMERALS
100: radar device
110: sensor unit
112: illuminance sensor
114: rain sensor
116: image sensor
118: radar sensor
120: radar control unit
122: driving environment determination unit
124: target information determination unit
124a: signal processing unit
124b: target information calculation unit
130: display unit
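The decision logic described above for FIG. 2 and FIG. 3 can be summarized in a short illustrative sketch. This is a minimal example and not the claimed implementation: the RCS reference values are the example values given in this description (−100 dBsm, −10 dBsm and 10 dBsm), the illuminance and rain reference values are left as parameters because no numeric values are specified, and the function and constant names are assumptions introduced only for illustration.

```python
# Illustrative sketch only; names and structure are not part of the disclosure.

FIRST_REF_DBSM = -100.0   # at or below: lane (e.g., absorbing-material coating)
SECOND_REF_DBSM = -10.0   # above first, at or below second: road
THIRD_REF_DBSM = 10.0     # above second, at or below third: pedestrian/animal


def classify_target(rcs_dbsm: float) -> str:
    """Map a measured RCS value (in dBsm) to a target class, following the
    threshold comparisons of FIG. 2 (S220 to S244)."""
    if rcs_dbsm <= FIRST_REF_DBSM:
        return "lane"
    if rcs_dbsm <= SECOND_REF_DBSM:
        return "road"
    if rcs_dbsm <= THIRD_REF_DBSM:
        return "pedestrian_or_animal"
    return "vehicle_or_roadside_facility"


def determine_driving_environment(illuminance: float, rain_rate: float,
                                  illuminance_ref: float, rain_ref: float):
    """Return (environment, preferred information source) following the
    decision flow of FIG. 3 (S310 to S342)."""
    if illuminance <= illuminance_ref:
        return "night", "radar information first"                   # S322, S324
    if rain_rate > rain_ref:
        return "rainy", "radar information first"                   # S332, S334
    return "day", "image information first, radar as supplement"    # S340, S342


if __name__ == "__main__":
    print(classify_target(-40.0))                                # -> road
    print(determine_driving_environment(5.0, 0.0, 50.0, 1.0))    # -> night
```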
11858519 | DETAILED DESCRIPTION Many vehicle accidents occur within a driver's community. While drivers generally would prefer their communities be safer for driving, individual drivers currently have no way to influence or improve the driving behaviors of other drivers in their communities. However, if made aware of driving trends within their communities, drivers may be motivated to improve their driving to improve the safety of the community as a whole. Accordingly, collisions that occur between drivers in the community may be reduced. Systems and methods of reducing vehicle collisions based on driver risk groups are provided herein. Drivers may be sorted into driver risk groups based on communities of which they are a part and other shared characteristics between drivers. For example, drivers may be sorted into driver risk groups based on where they live (e.g., street, neighborhood city, state, etc.), based on where they work, based on shared demographic characteristics, based on where they go to school, based on shared hobbies or interests, etc. In some instances, a driver may be part of multiple driver risk groups. Vehicle telematics data (e.g., vehicle sensor data, such as: speed data, acceleration data, braking data, cornering data, following distance data, turn signal data, seatbelt use data, etc.) associated with drivers who are part of a particular driver risk group may be captured by sensors associated with vehicles and/or sensors associated with mobile devices disposed therein. This vehicle telematics data may be analyzed to categorize the overall safe driving of the community. In some instances, categorizing the overall safe driving of the community may include associating a score or rating with a particular driver risk group. Drivers may be notified and/or updated of the overall safe driving of individuals in driver risk groups to which they belong, and may in turn be motivated to drive more safely in order to improve the safety of a group, or in order to maintain a group's existing high standards for safe driving. Furthermore, in some instances, an indication of the safe driving of a driver risk group may be provided to a third party such as, e.g., a vehicle rental service, a used car dealership, an insurance company, etc., which may in turn provide rewards, discounts, access to certain programs, or other incentives to individuals who are part of driver risk groups that drive safely. For example, drivers from a particular neighborhood may receive a reward from these third parties based on the percentage of drivers in that group who drive at a safe speed, e.g., over the course of a certain amount of time, or with a certain frequency. In some instances, the rewards may be comparative or competitive. For instance, drivers who are fans of one sports team may receive a reward based on whether they drive more safely than drivers who are fans of a rival sports team. Furthermore, the rewards may include a challenge component, e.g., rewards based on which community drives more safely over the course of a month or year. In some examples, groups may track their progress against other groups via a mobile or web application, which could include a leaderboard. Referring now toFIG.1A, an exemplary computer system100for reducing vehicle collisions based on driver risk groups is illustrated, in accordance with some embodiments. 
The high-level architecture illustrated inFIG.1Amay include both hardware and software applications, as well as various data communications channels for communicating data between the various hardware and software components, as is described below. As shown inFIG.1A, a plurality of mobile devices and/or on-board computers102A,102B,102C (shown in greater detail atFIG.1B) associated with respective vehicles104A,104B,104C (which may be, e.g., cars, trucks, boats, motorcycles, motorized scooters, or any other vehicles) may interface with respective sensors106A,106B,106C, which may capture vehicle telematics data and other suitable data associated with their respective vehicles104A,104B,104C. The operators of the vehicles104A,104B,104C may collectively be referred to as a driver risk group107. The vehicle operators of the driver risk group107may share one or more attributes with one another, or may otherwise be part of a shared community. Although only three vehicles104A,104B,104C are shown in the driver risk group107inFIG.1A, there may be any number of vehicles in a driver risk group107in various embodiments. The mobile devices and/or on-board computers102A,102B,102C may be configured to communicate the captured sensor data to a server108via a network110. By analyzing this captured sensor data, the server108may identify indications of safe driving behavior by vehicle operators associated with the driver risk group107. In some embodiments, the server108may provide a notification or update of safe driving behaviors associated with the driver risk group107(e.g., as shown inFIG.2A) to a mobile device associated with a vehicle operator in the driver risk group107, such as, for instance, mobile device102A,102B,102C (e.g., via the network110). Additionally or alternatively, the server108may provide an indication of the driver risk group107's progress in a safe driving competition against other driver risk groups (e.g., as shown inFIG.2B) to a mobile device associated with a vehicle operator in the driver risk group107, such as, for instance, mobile device102A,102B,102C (e.g., via the network110). Upon receiving a third-party query regarding the vehicle operator (e.g., via the network110), the server108may provide an indication of a safe driving behavior associated with the driver risk group107to a third-party computing device132(e.g., via the network110) for display via a user interface152, e.g., as shown inFIG.2C. As shown inFIG.1A, the server108may include a controller112that may be operatively connected to the one or more databases114via a link, which may be a local or a remote link. The one or more databases114may be adapted to store data related to, for instance, driver risk groups and associated characteristics/communities/attributes, vehicle operators in each driver risk group, vehicle telematics data associated with various vehicle operators in each driver risk group, vehicle telematics data trends and/or thresholds indicating safe driving behaviors, various scores and/or categorizations of safe driving behaviors in each driver risk group, various competitions between driver risk groups, third parties to which driver risk group information may be provided, etc. It should be noted that, while not shown, additional databases may be linked to the controller112. Additionally, separate databases may be used for various types of information, in some instances, and additional databases (not shown) may be communicatively connected to the server108via the network110. 
The controller112may include one or more program memories116, one or more processors118(which may be, e.g., microcontrollers and/or microprocessors), one or more random-access memories (RAMs)120, and an input/output (I/O) circuit122, all of which may be interconnected via an address/data bus. Although the I/O circuit122is shown as a single block, it should be appreciated that the I/O circuit122may include a number of different types of I/O circuits. The program memory116and RAM120may be implemented as semiconductor memories, magnetically readable memories, optically readable memories, or biologically readable memories, for example. Generally speaking, the program memory116and/or the RAM120may respectively include one or more non-transitory, computer-readable storage media. The controller112may also be operatively connected to the network110via a link. The server108may further include a number of various software applications124,126,128,130stored in the program memory116. Generally speaking, the applications may perform one or more functions related to, inter alia, classifying vehicle operators into driver risk groups based on attributes shared by the vehicle operators, analyzing vehicle sensor data associated with vehicle operators of a driver risk group, identifying indications of safe driving behavior associated with a driver risk group, receiving third-party queries, providing indications of safe driving behavior associated with driver risk group to third parties, generating user interface displays indicating safe driving behaviors associated with driver risk groups and results of driver risk group competitions, etc. For example, one or more of the applications124,126,128,130may perform at least a portion of any of the method300shown inFIG.3. The various software applications124,126,128,130may be executed on the same processor126or on different processors. Although four software applications124,126,128,130are shown inFIG.1A, it will be understood that there may be any number of software applications124,126,128,130. Further, two or more of the various applications124,126,128,130may be integrated as an integral application, if desired. It should be appreciated that although the server108is illustrated as a single device inFIG.1A, one or more portions of the server108may be implemented as one or more storage devices that are physically co-located with the server108, or as one or more storage devices utilizing different storage locations as a shared database structure (e.g. cloud storage). In some embodiments, the server108may be configured to perform any suitable portion of the processing functions remotely that have been outsourced by the on-board computers and/or mobile devices102A,102B,102C. Turning now to the third-party computing device132, this computing device may include a user interface152, as well as controller134, which may include one or more program memories136, one or more processors138(which may be, e.g., microcontrollers and/or microprocessors), one or more random-access memories (RAMs)140, and an input/output (I/O) circuit142, all of which may be interconnected via an address/data bus. Although the I/O circuit142is shown as a single block, it should be appreciated that the I/O circuit142may include a number of different types of I/O circuits. The program memory136and RAM140may be implemented as semiconductor memories, magnetically readable memories, optically readable memories, or biologically readable memories, for example. 
Generally speaking, the program memory136and/or the RAM140may respectively include one or more non-transitory, computer-readable storage media. The controller134may also be operatively connected to the network110via a link. The third-party computing device132may further include a number of various software applications144,146,148,150stored in the program memory136. Generally speaking, the applications may perform one or more functions related to, inter alia, receiving queries regarding vehicle operators from a third-party user, transmitting queries regarding vehicle operators to the server108(e.g., via the network110), receiving indications of safe driving behaviors associated with vehicle operators and/or their driving risk groups from the server108(e.g., via the network110), displaying indications of safe driving behaviors associated with vehicle operators and/or their driving risk groups (e.g., via the user interface152), etc. For example, one or more of the applications144,146,148,150may perform at least a portion of any of the method300shown inFIG.3. The various software applications144,146,148,150may be executed on the same processor138or on different processors138. Although four software applications144,146,148,150are shown inFIG.1A, it will be understood that there may be any number of software applications144,146,148,150. Further, two or more of the various applications144,146,148,150may be integrated as an integral application, if desired. Referring now toFIG.1B, an exemplary mobile device and/or onboard computer102A,102B,102C associated with respective vehicles104A,104B,104C is illustrated in greater detail, in accordance with some embodiments. The mobile device and/or onboard computer102A,102B,102C may include one or more of a GPS unit154, an accelerometer156, one or more other sensors158, a communication unit160, and/or a controller162. The GPS unit154may be disposed at the mobile device and/or onboard computer102A,102B,102C and may collect data indicating the location of the mobile device and/or onboard computer102A,102B,102C, and/or (e.g., by proxy) the respective vehicle104A,104B,104C. Moreover, in some embodiments the GPS unit154may be a separate device disposed within or external to the respective vehicle104A,104B,104C (e.g., one of the sensors106A,106B,106C), and interfacing with the mobile device and/or onboard computer102A,102B,102C. The accelerometer156may be disposed at the mobile device and/or onboard computer102A,102B,102C and may collect data indicating the acceleration of the mobile device and/or onboard computer102A,102B,102C and/or (e.g., by proxy) the respective vehicle104A,104B,104C. Moreover, in some embodiments the accelerometer156may be a separate device disposed within or external to the vehicle104A,104B,104C (e.g., one of the sensors106A,106B,106C), and interfacing with the mobile device and/or onboard computer102A,102B,102C. In general, the GPS unit154, the accelerometer156, the one or more other sensors158, and the sensors106A,106B,106C may be configured to capture vehicle sensor data associated with the vehicle104A,104B,104C, e.g., one or more of speed data, acceleration data, braking data, cornering data, following distance data, turn signal data, seatbelt use data, location data, date/time data, or any other suitable vehicle sensor data.
The communication unit160may be disposed at the mobile device and/or onboard computer102A,102B,102C and may, e.g., transmit and receive information from external sources such as, e.g., the server108and/or the third-party computing device132, e.g., via the network110. As shown inFIG.1B, the mobile device and/or onboard computer102A,102B,102C may include a controller162, which may include one or more program memories164, one or more processors166(which may be, e.g., microcontrollers and/or microprocessors), one or more random-access memories (RAMs)168, and an input/output (I/O) circuit170, all of which may be interconnected via an address/data bus. Although the I/O circuit170is shown as a single block, it should be appreciated that the I/O circuit170may include a number of different types of I/O circuits. The program memory164and RAM168may be implemented as semiconductor memories, magnetically readable memories, optically readable memories, or biologically readable memories, for example. Generally speaking, the program memory164and/or the RAM168may respectively include one or more non-transitory, computer-readable storage media. The controller162may also be operatively connected to the network110via a link. The mobile device and/or onboard computer102A,102B,102C may further include a number of various software applications172,174,176,178stored in the program memory164. Generally speaking, the applications may perform one or more functions related to, inter alia, capturing vehicle sensor data associated with vehicle operators; transmitting the vehicle sensor data to the server108, etc. In some instances, one or more of the applications172,174,176,178may perform at least a portion of any of the method300shown inFIG.3. The various software applications172,174,176,178may be executed on the same processor166or on different processors. Although four software applications172,174,176,178are shown inFIG.1B, it will be understood that there may be any number of software applications172,174,176,178. Further, two or more of the various applications172,174,176,178may be integrated as an integral application, if desired. Additionally, it should be appreciated that in some embodiments, the mobile device and/or onboard computer102A,102B,102C may be configured to perform any suitable portion of the processing functions described as being performed by the server108. Turning now toFIGS.2A,2B, and2C, several exemplary user interface displays are illustrated, in accordance with some embodiments. As shown inFIG.2A, a vehicle operator may be notified and/or updated of safe driving behaviors associated with his or her driver risk group via a mobile device notification displayed on a user interface. The mobile device notification may indicate, for instance, a score or rating for the safe driving behavior associated with the driver risk group community. In some instances, receiving notifications as shown inFIG.2Amay inspire or motivate a vehicle operator to drive more safely, or continue to drive safely, to improve or maintain the safe driving behaviors associated with his or her driver risk group. As shown inFIG.2B, a leaderboard indicating the safe driving behaviors associated with several driver risk groups may be displayed on a user interface.
The leaderboard indicates which driver risk groups (in this case, cities) have scored the highest in a challenge, “Which community will be the best at following speed limits this summer?” Participating in a challenge as shown inFIG.2Bmay encourage friendly competition between driver risk groups, motivating vehicle operators associated with each group to drive more safely. As shown inFIG.2C, in response to a third-party query regarding a vehicle operator in the driver risk group, one or more indicia of safe driving behavior associated with the driver risk group may be provided to the third party via a user interface. For example, as shown inFIG.2C, a third party may search for a particular vehicle operator (“John A. Operator”), and search results indicating one or more driver risk groups (e.g., Chicago) with which the vehicle operator is associated may be provided. The search results further include safe driving behaviors associated with the vehicle operator's driver risk group. In some examples, the third party may in turn provide rewards, incentives, discounts, and/or access to certain selective programs or events to the vehicle operator based on the safe driving behavior associated with the vehicle operator's driver risk group. Turning now toFIG.3, a flow diagram of an exemplary computer-implemented method of reducing vehicle collisions based on driver risk groups is illustrated, in accordance with some embodiments. The method300can be implemented as a set of instructions stored on a computer-readable memory and executable on one or more processors. A plurality of vehicle operators may be classified (block302) into a driver risk group based on attributes shared by the plurality of vehicle operators. These attributes, or characteristics, may indicate communities of which the vehicle operators are a part. These characteristics may include, for instance, location-related characteristics, such as where the vehicle operator lives (e.g., street, neighborhood, city, state, country, etc.) These characteristics may also include workplace-related characteristics, such as, e.g., the vehicle operator's career or field, the employer of the vehicle operator, etc. As another example, these characteristics may include school-related characteristics, such as, e.g., which school the vehicle operator currently attends, the vehicle operator's grade level in school, alumni groups of which the vehicle operator is a part, etc. In some instances, these characteristics may include or hobby-related and/or interest-related characteristics, such as, e.g., sports teams of which the vehicle operators are fans. In some instances, of course, vehicle operators may be grouped into the driver risk group based on any other suitable demographic characteristics. Vehicle sensor data associated with each of the plurality of vehicle operators of the driver risk group may be analyzed (block304). The vehicle sensor data associated with the plurality of vehicle operators may include, for instance, speed data, acceleration data, braking data, cornering data, following distance data, turn signal data, seatbelt use data, location data, date/time data, or any other suitable vehicle sensor data. This vehicle sensor data may be analyzed to determine instances in which the vehicle operators exhibit safe driving behaviors (as opposed to unsafe driving behaviors), and these instances of safe driving behavior may be recorded. 
For instance, vehicle sensor data indicating that the speed of the vehicle is above a certain threshold speed may indicate an unsafe driving behavior, while vehicle sensor data indicating that the speed of the vehicle is below that speed may indicate a safe driving behavior. Similarly, for example, acceleration at a rate above a certain threshold rate may indicate an unsafe driving behavior, while acceleration below that threshold rate may indicate a safe driving behavior. As another example, braking data may be analyzed to determine instances of “hard” versus “soft” braking, with hard braking indicating an unsafe driving behavior while soft braking indicates a safe driving behavior. In some instances, multiple types of vehicle sensor data may be combined to determine indications of safe and unsafe driving behavior. For example, location data may be combined with speed data to determine whether a vehicle operator is exceeding local speed limits (an unsafe driving behavior) or following them (a safe driving behavior). As another example, seatbelt use data may be combined with speed data to determine whether the vehicle operator is using a seatbelt while the vehicle is in motion (a safe driving behavior). Based on the analysis of the vehicle sensor data, one or more indicia of safe driving behavior associated with driver risk group may be identified (block306). In some examples, the indication of the safe driving behavior of the driver risk group may be a score or rating associated with the driver risk group (e.g., a score of 70 out of 100, A+, four out of five stars, etc.) The score or rating may be based on an overall assessment of the safe driving behaviors associated with the individual vehicle operators of the group, which may be, for instance, averaged or weighted in a number of different ways. Additionally or alternatively, the indication of the safe driving behavior of the driver risk group may describe specific safe driving behaviors at which the driver risk group excels (e.g., great at following speed limits, always wearing seat belts, etc.) Moreover, the indication of safe driving behavior associated with the driver risk group may be a combination of a score and a description, e.g., Neighborhood X receives an A+ in safe braking, an A− at safe cornering, B+ at following speed limits, etc. The identified indication of safe driving behavior associated with the driver risk group may be provided to vehicle operators who are part of the driver risk group, e.g., via a user interface display (as shown inFIG.2A). Accordingly, when notified and/or updated of the driver risk group's score, rating, or description, vehicle operators may be motivated to drive more safely in order to improve the safety of the community, or in order to maintain the community's existing high standards for safe driving. In some instances, the indication of safe driving behavior associated with the driver risk group as a whole may be achieved based on a number or a percentage of vehicle operators of the driver risk group who are associated with vehicle sensor data indicative of a safe driving behavior (e.g., 90% of drivers from a certain school have exhibited safe cornering behaviors within the past 10 days) and/or a frequency with which vehicle operators of the driver risk group are associated with vehicle sensor data indicative of safe driving behavior (e.g., drivers from a certain workplace use their seat belts every time they operate a vehicle). 
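As a rough illustration of blocks 304 and 306, the following sketch labels individual telematics samples as safe or unsafe and aggregates the labels into a group-level score. The field names, threshold values, and the 80% aggregation rule are assumptions chosen to mirror the examples above; they are not values prescribed by this description.

```python
# Hedged sketch only: thresholds and the scoring rule are illustrative assumptions.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TelematicsSample:
    speed_mph: float
    speed_limit_mph: float
    accel_mps2: float        # longitudinal acceleration magnitude
    braking_mps2: float      # deceleration magnitude (hard vs. soft braking)
    seatbelt_on: bool
    vehicle_moving: bool


def is_safe(sample: TelematicsSample,
            max_accel: float = 3.0,
            max_braking: float = 4.0) -> bool:
    """Combine several telematics signals into a single safe/unsafe label,
    e.g., speed vs. local limit, smooth acceleration, soft braking, seatbelt use."""
    within_limit = sample.speed_mph <= sample.speed_limit_mph
    smooth_accel = sample.accel_mps2 <= max_accel
    soft_braking = sample.braking_mps2 <= max_braking
    belted = sample.seatbelt_on or not sample.vehicle_moving
    return within_limit and smooth_accel and soft_braking and belted


def group_safety_score(samples_by_driver: Dict[str, List[TelematicsSample]]) -> float:
    """One possible aggregation (block 306): the percentage of drivers in the
    risk group whose samples are predominantly (>= 80%) labeled safe."""
    safe_drivers = 0
    for samples in samples_by_driver.values():
        labels = [is_safe(s) for s in samples]
        if labels and sum(labels) / len(labels) >= 0.8:
            safe_drivers += 1
    return 100.0 * safe_drivers / max(len(samples_by_driver), 1)
```

The resulting percentage could then be mapped onto a score, letter rating, or leaderboard position as described above.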
In some instances, the indication of safe driving behavior associated with the driver risk group may be comparative. For example, the vehicle sensor data associated with a first driver risk group may be compared to vehicle sensor data associated with a second driver risk group. For instance, a driver risk group including drivers from one city may be compared to a driver risk group including drivers from another city to determine which group more consistently maintains speed limits. In some examples, the indication of safe driving behavior associated with the driver risk group may be based on the driver risk group's completion of a challenge. The challenge may include criteria for completion (e.g., 90% of drivers in the driver risk group must wear their seat belts every day in May). In some instances, the challenge may be a comparative challenge between vehicle operators of one driver risk group and vehicle operators of another driver risk group (e.g., which driver risk group can reach 1000 trips with safe cornering first, the White Sox fan community or the Cubs fan community?). Results from a comparative and/or challenge-based determination of safe driving behavior may be displayed on a leaderboard or otherwise presented to vehicle operators, e.g., via a user interface display as shown inFIG.2B, to encourage safe driving via friendly competition between communities. In response to a third-party query regarding a vehicle operator in the driver risk group, one or more indicia of safe driving behavior associated with the driver risk group may be provided to the third party (block308), e.g., via a user interface display as shown inFIG.2C. The third-party query may originate from a business of which the vehicle operator is a customer or a potential customer. For instance, a vehicle rental service from which the vehicle operator wishes to rent a vehicle may request access to indications of safe driving behavior associated with the vehicle operator's driver risk group. As another example, an insurance company may request to access to indications of safe driving behavior associated with the vehicle operator's driver risk group. As still another example, the third-party query may originate from an employer or potential employer of the vehicle operator. For example, a potential employer considering the vehicle operator for a position involving driving, e.g., in the field of trucking, delivery, taxi or limo services, ridesharing services, etc., may request access to indications of safe driving behavior associated with the vehicle operator's driver risk group. In some examples, the third party may provide rewards, incentives, discounts, and/or access to certain selective programs or events to the vehicle operator based on the safe driving behavior associated with the vehicle operator's driver risk group. For example, the vehicle operator may gain access to a selective program, e.g., an opportunity to rent new or rare vehicles, based on the safe driving behavior associated with the vehicle operator's driver risk group. With the foregoing, an insurance customer may opt-in to a rewards, insurance discount, or other type of program. After the insurance customer provides their affirmative consent, an insurance provider remote server may collect data from the customer's mobile device, smart home controller, or other smart devices—such as with the customer's permission or affirmative consent. 
The data collected may be related to insured assets before (and/or after) an insurance-related event, including those events discussed elsewhere herein. In return, risk averse insureds may receive discounts or insurance cost savings related to home, renters, personal articles, auto, and other types of insurance from the insurance provider. In one aspect, data, including the types of data discussed elsewhere herein, may be collected or received by an insurance provider remote server, such as via direct or indirect wireless communication or data transmission from a smart home controller, mobile device, or other customer computing device, after a customer affirmatively consents or otherwise opts-in to an insurance discount, reward, or other program. The insurance provider may then analyze the data received with the customer's permission to provide benefits to the customer. As a result, risk averse customers may receive insurance discounts or other insurance cost savings based upon data that reflects low risk behavior and/or technology that mitigates or prevents risk to (i) insured assets, such as homes, personal belongings, or vehicles, and/or (ii) home or apartment occupants. Although the foregoing text sets forth a detailed description of numerous different embodiments, it should be understood that the legal scope of the invention may be defined by the words of the claims set forth at the end of this patent. The detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. One could implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (e.g., code embodied on a non-transitory, machine-readable medium) or hardware. In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. In various embodiments, a hardware module may be implemented mechanically or electronically. 
For example, a hardware module may comprise dedicated circuitry or logic that may be permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that may be temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. Hardware modules may provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it may be communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and may operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods or routines described herein may be at least partially processor-implemented. 
For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within an office environment, or as a server farm), while in other embodiments the processors may be distributed across a number of locations. Unless specifically stated otherwise, discussions herein using words such as "processing," "computing," "calculating," "determining," "presenting," "displaying," or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information. As used herein, any reference to "one embodiment" or "an embodiment" means that a particular element, feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. As used herein, the terms "comprises," "comprising," "may include," "including," "has," "having" or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). In addition, the terms "a" and "an" are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. This description, and the claims that follow, should be read to include one or at least one, and the singular also may include the plural unless it is obvious that it is meant otherwise. This detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. One could implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this application. The patent claims at the end of this patent application are not intended to be construed under 35 U.S.C. § 112(f) unless traditional means-plus-function language is expressly recited, such as "means for" or "step for" language being explicitly recited in the claim(s). The systems and methods described herein are directed to an improvement to computer functionality, and improve the functioning of conventional computers.
11858520 | The drawings show diagrammatic exemplifying embodiments of the present invention and are thus not necessarily drawn to scale. It shall be understood that the embodiments shown and described are exemplifying and that the invention is not limited to these embodiments. It shall also be noted that some details in the drawings may be exaggerated in order to better describe and illustrate the invention. Like reference characters refer to like elements throughout the description, unless expressed otherwise. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS OF THE INVENTION FIG.1shows a flowchart of an example embodiment of the method of the present invention andFIGS.2and3show a vehicle combination100from above where the connected trailer10is positioned in two different positions. The vehicle combination100is in this example a so-called Nordic combination comprising a towing vehicle1, a trailer10and a dolly unit9connecting the trailer10and the towing vehicle1. The vehicle combination comprises two articulation joints, A1and A2. The towing vehicle1, which here is a heavy-duty truck, comprises a cab6and a load space7. On one of the sides is a sensor2positioned for identifying wheels of the trailer10and the dolly unit9. The side sensor2is here a RADAR sensor. This sensor may also advantageously be used for other tasks, such as for blind spot detection. As already mentioned in the above, the dolly unit9may be regarded as a separate trailer or as an integrated part of the trailer10. The towing vehicle comprises front wheels3and rear wheels4and5, and the trailer10comprises two rear wheels12,13on its left-hand side, as seen in the vehicle combination's forward driving direction. Further, the dolly unit9comprises one wheel11on its left-hand side. The flow chart inFIG.1shows a method for estimating a wheel base length D of at least one trailer10of a vehicle combination100comprising a towing vehicle1, such as the vehicle combination100shown inFIGS.2and3. The method comprises the following steps:
a step S1of performing a plurality of wheel identification measurements on at least one side of the at least one trailer10, by means of the at least one sensor2during use of the vehicle combination100,
a step S2of determining a number of identifiable active wheels on the at least one side of the at least one trailer10in each one of the plurality of wheel identification measurements,
a step S3of determining a total number of active wheels on the at least one side of the at least one trailer10, wherein the total number of active wheels is determined based on at least one of the plurality of wheel identification measurements in which a maximum number of identifiable active wheels was determined,
a step S4of determining a position of each identifiable active wheel at least from the at least one of the plurality of wheel identification measurements in which the maximum number of active wheels was determined, and
a step S5of estimating the wheel base length D based on the determined position of each active wheel.
With reference to especiallyFIGS.2and3, the towing vehicle1further comprises a control unit8which is connected to the sensor2. The control unit8may be any kind of control unit of the towing vehicle1, such as for example an ECU (Electronic Control Unit) which also may be configured for performing other control functions.
The control unit8may comprise a processing unit and a memory unit which carries a computer program which comprises program code means for performing the steps of any of the embodiments of the first aspect of the invention. The sensor2may be connected to the control unit by an electrical wire and/or by a wireless connection. The communication between the sensor2and the control unit may for example be performed by a CAN bus system, Bluetooth, WiFi or by any other known communication system. The vehicle combination100inFIG.2is positioned such that the sensor2is able to identify one active wheel11during forward driving, whilst the wheels12and13of the trailer10are occluded for the sensor2. Thereby, if the sensor performs a wheel identification measurement on the left-hand side of the trailer10at this occasion, it will be able to identify the wheel11only. This specific position of the vehicle combination100may be called a “z-configuration”, where the letter “z” refers to the relative orientation of the different parts (towing vehicle, dolly unit and trailer) of the vehicle combination100relative each other. The performed wheel identification measurement may be regarded as one separate frame or sample which is based on the information generated from the sensor2. This frame or sample will thus provide information that there is one wheel on the side of the trailer. When the vehicle combination100continues to move forward, it may eventually end up in the position as shown inFIG.3. In this position, all wheels,11,12and13on the left-hand side of the trailer10and the dolly unit9can be identified by the sensor2. The sensor2may then perform a second wheel identification measurement at this occasion, thereby providing a second frame or sample, in which three active wheels,11,12and13, are identified. From the above two measurements, i.e. the two frames or samples, which also may be referred to as step S1and S2inFIG.1, a total number of active wheels on the at least one side of the at least one trailer can be determined. This is done in that the total number of active wheels is determined based on the wheel identification measurement in which three identifiable active wheels was identified, i.e. a maximum number from the two measurements. This step may be referred to as step S3inFIG.1. From the frame or sample where the three wheels were identified, a position of each active wheel,11,12and13, can be determined. The position may for example be provided by identifying each wheel's Doppler profile, as will be further described with reference toFIG.4. This part of the method may be referred to as step S4inFIG.1. Further, each position is preferably determined with respect to a reference point, preferably a reference point on the towing vehicle1. The position of each wheel may for example be defined in a coordinate system, such as a Cartesian coordinate system, for example in a two or three-dimensional space by an x-y plane or x-y-z space. Based on the determined position of each active wheel,11,12and13, the trailer's wheel base length D may be estimated. This part of the method may be referred to as step S5inFIG.1. The estimated wheel base D is here the effective wheel base of the trailer10. As can be seen, the effective wheel base length D extends from the articulation joint A1to the position located midway between the wheels12and13. To further improve the wheel base estimation, at least one sensor may be provided for detecting an angle of at least one of the articulation joints of the vehicle combination. 
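A minimal sketch of steps S2 to S5, under stated assumptions, is given below. Each wheel identification measurement (frame) is assumed to be available as a list of detected wheel-centre positions in a towing-vehicle-fixed coordinate system, and the position of the coupling point (articulation joint A1) is assumed to be known in the same coordinates; the function names and numeric values are illustrative only and are not part of this description.

```python
# Hedged sketch of steps S2-S5 for the situation of FIGS. 2 and 3.
# Data representation, names and numbers are illustrative assumptions.

import math
from typing import List, Sequence, Tuple

Point = Tuple[float, float]  # (x, y) in a towing-vehicle-fixed frame


def select_reference_frame(frames: Sequence[List[Point]]) -> List[Point]:
    """Steps S2-S3: pick the measurement in which the maximum number of
    identifiable active wheels was determined; its length gives the total
    number of active wheels on that side of the combination."""
    if not frames:
        raise ValueError("no wheel identification measurements available")
    return max(frames, key=len)


def effective_wheel_base(coupling_point: Point,
                         trailer_wheels: Sequence[Point]) -> float:
    """Steps S4-S5: distance from the coupling point to the mean position of
    the trailer's active wheels (e.g., midway between wheels 12 and 13)."""
    cx = sum(p[0] for p in trailer_wheels) / len(trailer_wheels)
    cy = sum(p[1] for p in trailer_wheels) / len(trailer_wheels)
    return math.dist(coupling_point, (cx, cy))


# Example with invented positions (metres): one frame sees only wheel 11
# (the z-configuration of FIG. 2); a later frame sees wheels 11, 12 and 13.
frames = [[(-3.1, 1.2)], [(-3.0, 0.2), (-7.0, 0.1), (-8.4, 0.1)]]
best = select_reference_frame(frames)
# Trailer wheel group (wheels 12 and 13), here separated by hand for the
# example; grouping by relative distances is discussed further below.
print(effective_wheel_base((-2.9, 0.0), best[1:]))
```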
In a preferred embodiment, all articulation angles at the joints A1and A2are known by information provided from one or more sensors. The one or more sensors may for example be additional sensors provided at the rear end of the towing vehicle, such as an ultrasound sensor or the like. Further, there may be sensor(s) provided at the articulation joints which are adapted to measure the current articulation angle. Still optionally, a dynamic vehicle model may be used to further improve the wheel base estimation. Just as a matter of example, at least one of a dynamic vehicle model, measured articulation angles, yaw rate, GNSS (Global Navigation Satellite System) position and heading and wheel position, and the wheel base length estimation, as estimated herein, can be combined in a standard Kalman filter-type calculation to provide a further improved wheel base estimate. Kalman filters are well-known for the skilled person, and are for example explained in the book “Beyond the Kalman filter, particle filters for tracking applications”, [Branco Ristic, Sanjeev Arulampalam, and Neil Gordon, Artech House, Boston, London 2004]. Furthermore, the estimation may be further improved by also knowing the position of each coupling point, in this embodiment the articulation joints A1and A2. For example, in a Nordic combination vehicle with no communication connection between truck and dolly, the second coupling point is often well approximated as positioned close to the centre of the dolly wheel axles. For an A double combination with only the first and the second trailers connected to the truck via a communication link, the coupling points may be communicated and known at the truck. FIG.4shows a side view of one active wheel, here exemplified with the wheel11fromFIGS.2and3. The wheel11is active, i.e. it is rotating around a wheel axle (not shown) about a center point C. A wheel identification sensor, preferably a RADAR or LIDAR sensor, is able to detect the wheel's velocity profile, which may be defined as a Doppler profile which identifies a velocity v at an outer peripheral end of the active wheel and a velocity −v at a diametrically opposite outer end of the active wheel. The velocity varies linearly between the two outer ends. The two outer ends are in this embodiment located substantially at the top and bottom position of the wheel11. By the Doppler profile, the position of the wheel11, which is defined as the wheel's center point C may be determined in a reliable manner. The position C is the point in the Doppler profile where the velocity is zero. Hence, each active wheel may be identified by the sensor in that the wheels are rotating. Furthermore, wheels which are not in use, and which also are not in contact with ground, may not be identified. These wheels are preferably not identified since they will not affect the trailer's effective wheel base. Once such a wheel is activated, i.e. in contact with ground, it may be identified by its Doppler profile. Therefore, in view of the above, the invention provides an efficient and flexible method for identifying an effective wheel base length for the trailer, which may change over time. FIG.5shows a side view of a vehicle combination100′ comprising a towing truck1and a trailer10′ connected thereto via a dolly unit9′. The towing truck1is similar to the truck as shown inFIGS.2and3, i.e. it comprises three wheels3,4and5on its left-hand side, a sensor2positioned on the same side, a control unit8and an articulation joint (coupling point) A2. 
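The Doppler-profile idea of FIG.4 lends itself to a compact numerical illustration: the Doppler velocity measured over the height of a rolling wheel varies roughly linearly from +v at one peripheral end to −v at the other, so the wheel centre C can be estimated as the zero crossing of a straight-line fit. The sketch below uses synthetic radar detections and an ordinary least-squares fit; the coordinates, noise level and minimum-slope threshold are assumed values, not figures from this embodiment.

```python
# Illustrative sketch of locating a wheel centre from its Doppler profile
# (cf. FIG. 4). Detection coordinates, noise level and the minimum-slope
# threshold are assumed example values.
import numpy as np

def wheel_centre_height(heights, doppler_velocities, min_slope=1.0):
    """Fit v = a*z + b to the detections of one wheel and return the height
    z at which the fitted Doppler velocity is zero (the centre C). A wheel
    with a near-zero velocity gradient is treated as not rotating."""
    a, b = np.polyfit(heights, doppler_velocities, 1)
    if abs(a) < min_slope:
        raise ValueError("no significant velocity gradient: wheel not active")
    return -b / a

# Synthetic example: wheel radius 0.5 m, centre at z = 0.5 m, peripheral
# speed 10 m/s, with measurement noise.
rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 15)                    # detection heights [m]
v = 10.0 * (z - 0.5) / 0.5 + rng.normal(0.0, 0.2, z.size)
print(wheel_centre_height(z, v))                 # close to 0.5
```

The same zero-gradient test reflects why wheels that are lifted off the ground and not rotating are not identified, and why they start contributing to the estimate once they are activated.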
It also has a cab6for the driver and a load space7. The method for identifying the wheel base length D′ of the trailer10′ may be performed in a similar manner as explained above. The dolly unit9′ in this embodiment has two wheels14and15on its left-hand side, with respect to the forward driving direction of the vehicle combination100′. Further, an articulation joint A1is placed substantially midway between the two wheels, as seen from the side of the vehicle combination1. Hence, the location of the articulation joint A1may be determined by knowing the position of the two wheels14and15and estimating that the articulation point is located therebetween, such as midway between the wheels. The wheels14,15,16,17and18may also be grouped into different wheel groups. In this example the wheel groups are preferably a first wheel group of the dolly9′, including the wheels14and15, and a second wheel group of the trailer10′, including the wheels16,17and18. The grouping may be performed by determining the relative distance between the different wheels. This is preferably done by use of the determined positions of each wheel, which have been determined by use of the sensor2. For example, from the determined positions, a distance d1between the wheels15and16and a distance d2between the wheels16and17may be determined. Therefrom, it may be concluded that the wheels16and17belong to one wheel group and the wheel15to another wheel group, since the distance d1is substantially larger than the distance d2. This procedure may be performed for all the wheels, where relative distances between the different wheels are determined based on the determined wheel positions. Furthermore, it may also be determined which wheel belongs to which trailer (or dolly). For example, this may be determined by using one or more of the plurality of measurements made by the sensor2and by determining if the wheel positions can be located along one or several imaginary axles. For example, it may be determined from the determined wheel positions, from one or several measurements, that the wheels14and15can be placed along a first imaginary axle and the wheels16,17and18can be placed along a second imaginary axle, which is pivoting and/or angled with respect to the first axle. This may be an indication that the wheels14and15are part of one unit, the dolly9′, and the wheels16,17and18of another unit, the trailer10′. This is preferably determined by using the wheel positions from more than one of the measurements where the maximum number of wheels was determined. The embodiments as shown inFIGS.2,3and5show one sensor2on the left-hand side of the towing vehicle1. It shall however be understood that the towing vehicle1preferably comprises two such sensors, one located on each side thereof, and that the method preferably makes use of measurements from both these sensors, which may further improve the reliability of the measurement and also the time for obtaining a reliable estimation. It is to be understood that the present invention is not limited to the embodiments described above and illustrated in the drawings; rather, the skilled person will recognize that many changes and modifications may be made within the scope of the appended claims. | 13,694
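The grouping of identified wheels into dolly and trailer wheel groups by comparing relative distances (such as d1 and d2 above) can be sketched as a simple gap-based clustering of the determined longitudinal positions. The gap threshold and the example positions below are assumed values chosen only to mirror the five-wheel example of FIG.5; the patent itself leaves the decision criterion ("substantially larger") open.

```python
# Illustrative sketch of grouping identified wheels into axle groups
# (e.g. dolly wheels 14, 15 versus trailer wheels 16, 17, 18) from their
# determined longitudinal positions. The gap threshold is an assumed
# tuning value, not a figure from the patent.

def group_wheels(positions, gap_threshold=1.5):
    """Split sorted wheel positions into groups wherever the distance to
    the next wheel exceeds gap_threshold (metres)."""
    ordered = sorted(positions)
    groups, current = [], [ordered[0]]
    for previous, nxt in zip(ordered, ordered[1:]):
        if nxt - previous > gap_threshold:   # large gap such as d1
            groups.append(current)
            current = []
        current.append(nxt)
    groups.append(current)
    return groups

# Example: dolly wheels at 0.0 m and 1.3 m, trailer wheels at 6.0, 7.3, 8.6 m.
print(group_wheels([0.0, 1.3, 6.0, 7.3, 8.6]))
# -> [[0.0, 1.3], [6.0, 7.3, 8.6]]
```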
11858521 | DETAILED DESCRIPTION The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. Referring toFIG.1, there is generally illustrated one non-limiting example of a vehicle100(e.g., a fully autonomous vehicle, a semi-autonomous vehicle, a fully manual vehicle, etc.) having a vehicle motion control system102with a computer104that identifies in real-time a degradation of the tire states and the tire prediction models (e.g., inaccurate tire state calculations such as wrong magnitude and/or wrong direction and inaccurate prediction models). These degradations may cause the vehicle100to skid, such that the vehicle100travels along an unintended path106. As described in detail below, the real-time detection of degradation permits online mitigation strategies to adjust tire states and associated degraded estimations. As further described below, the adjusted tire states and prediction models permit the system102to coordinate limit handling of one or more actuators108(e.g., an Electric All Wheel Drive110and/or an Electric Limited Slip Differential112, etc.) to adjust motion of the vehicle100, such that the vehicle100travels along an intended path114without skidding. The system102coordinates limit handling by the actuators108to adjust torque output between front and rear axles116,118of the vehicle100, which in turn provides a maximum lateral grip for the tires120of the vehicle100. The system102includes one or more input devices122for generating one or more input signals associated with data indicative of a motion of the vehicle100. In this non-limiting example, the input devices122include an Inertial Measurement Unit124, a Wheel Angle Sensor126, a Suspension Height Sensor128, a Global Positioning System130, and a Wheel Speed Sensor132. The Inertial Measurement Unit124may be an electronic device that measures and reports a specific force, an angular rate, and/or an orientation of the vehicle100, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. In certain non-limiting examples, the Global Positioning System may be an Inertial-Measurement-Unit-enabled device. The Inertial Measurement Unit device may allow a Global Positioning System receiver to function when GPS-signals are unavailable, when the vehicle100travels within tunnels, inside buildings, or when electronic interference is present. In other non-limiting examples, the system can include any combination of one or more of these input devices or any other suitable input devices. The system102further includes one or more actuators108for adjusting the motion of the vehicle100. As described in detail below, the actuator108in real-time adjusts the motion of the vehicle100, in response to the actuator108receiving an actuation signal from a processor134. In this non-limiting example, the actuators108include a first plane actuator140transmitting a first torque to a front axle116and a second plane actuator142transmitting a second torque to a rear axle of the vehicle118. Also, in this non-limiting example, the actuators108may include the Electric All Wheel Drive110and the Electric Limited Slip Differential112. It is contemplated the system can include other suitable actuators for adjusting the motion of the vehicle. 
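As a rough illustration of the interfaces implied by FIG.1, the sketch below groups the listed sensor signals into one input structure and the front/rear torque split commanded to the two plane actuators into another. All field names and units are assumptions; the patent does not prescribe a particular data layout.

```python
# Minimal sketch of the signal interfaces suggested by FIG. 1. All field
# names and units are illustrative assumptions.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MotionInputs:
    longitudinal_accel: float        # m/s^2, Inertial Measurement Unit 124
    lateral_accel: float             # m/s^2, Inertial Measurement Unit 124
    yaw_rate: float                  # rad/s, Inertial Measurement Unit 124
    wheel_angle: float               # rad, Wheel Angle Sensor 126
    suspension_height: float         # m, Suspension Height Sensor 128
    position: Tuple[float, float]    # lat/lon, Global Positioning System 130
    wheel_speeds: Tuple[float, ...]  # rad/s, Wheel Speed Sensor 132

@dataclass
class TorqueCommand:
    front_axle_torque: float  # Nm, first plane actuator 140 (e.g. eAWD 110)
    rear_axle_torque: float   # Nm, second plane actuator 142 (e.g. eLSD 112)

command = TorqueCommand(front_axle_torque=800.0, rear_axle_torque=600.0)
```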
The system102further includes the computer104attached to the vehicle100, with the computer104having one or more processors134communicating with the input devices122(e.g., the Inertial Measurement Unit124, the Wheel Angle Sensor126, the Suspension Height Sensor128, the Global Positioning System130, the Wheel Speed Sensor132, etc.) and the actuators108(e.g., the Electric All Wheel Drive110and/or the Electric Limited Slip Differential112, etc.). The computer104further includes a non-transitory computer readable storage medium136for storing instructions, such that the processor134is programmed to receive the input signals from the input devices122and determine a current tire state and a current tire prediction model. In this non-limiting example, the system102may further include a remote server138wirelessly communicating with the computer104, with the remote server138performing certain functions of the system and/or including software for updating the computer104. The processor134is programmed to compare the current tire state and the current tire prediction model to the data indicative of the motion of the vehicle, in response to the processor134receiving the input signal. More specifically, in this non-limiting example, the processor134is programmed to compare a first sign of the current tire state and a second sign of the current tire lateral force to one another. The processor134is further programmed to determine a degradation in the current tire state and/or the current tire prediction model, in response to the processor134determining that the first and second signs are opposite to one another. The processor134is programmed to calculate in real-time an adjusted tire state and an adjusted tire prediction model based on the data indicative of the motion of the vehicle, in response to the processor134determining that the current tire state and the current tire prediction model are not verified against the data indicative of the motion of the vehicle100(i.e., determining a degradation in the current tire state and/or the current tire prediction model). In this non-limiting example, the processor134is programmed to use an arbitration logic to calculate the adjusted tire state and the adjusted tire prediction model, in response to the processor134determining that the current tire state and the current tire prediction model are not verified against the data indicative of the motion of the vehicle. Continuing with the previous non-limiting example, the processor134is programmed to determine an increase in a tire slip ratio of the vehicle100, in response to the processor134receiving the input signal from the input device122. The processor134is programmed to determine an increase in a tire lateral force capacity, in response to the processor determining the increase in the tire slip ratio. The processor134is further programmed to determine the degradation in the current tire state and/or the current tire prediction model, in response to the processor determining the increase in the tire lateral force capacity. The processor134is programmed to calculate in real-time the adjusted tire state and the adjusted tire prediction model, in response to the processor134determining the degradation in the current tire state and/or the current tire prediction model. The processor134is programmed to determine an increase in a current tire normal force, in response to the processor134receiving the input signal from the input device122.
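The two plausibility checks described so far (the sign comparison between the estimated tire state and the measured lateral force, and the implausible combination of a rising slip ratio with a rising lateral force capacity) can be expressed compactly; the normal-force check that follows in the next paragraph works analogously. The sketch below is a bare-bones illustration with assumed names and no filtering, hysteresis or arbitration logic.

```python
# Illustrative plausibility checks for detecting a degraded tire state or
# tire prediction model. Function names are assumptions for illustration.

def signs_opposite(tire_state: float, tire_lateral_force: float) -> bool:
    """First check: the estimated tire state and the measured tire lateral
    force should not point in opposite directions."""
    return tire_state * tire_lateral_force < 0.0

def capacity_trend_implausible(slip_ratio_delta: float,
                               lateral_capacity_delta: float) -> bool:
    """Second check: an increasing slip ratio should not coincide with an
    increasing estimated lateral force capacity."""
    return slip_ratio_delta > 0.0 and lateral_capacity_delta > 0.0

def tire_model_degraded(tire_state, tire_lateral_force,
                        slip_ratio_delta, lateral_capacity_delta) -> bool:
    return (signs_opposite(tire_state, tire_lateral_force)
            or capacity_trend_implausible(slip_ratio_delta,
                                          lateral_capacity_delta))

# Example: the estimate implies a force to the left while the measured
# lateral force points to the right -> degradation is flagged.
print(tire_model_degraded(+0.05, -900.0, 0.0, 0.0))  # True
```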
The processor134is further programmed to determine the decrease in the tire lateral force capacity, in response to the processor134determining the increase in the current tire normal force. The processor134is further programmed to determine the degradation in the current tire state and/or the current tire prediction model, in response to the processor134determining the decrease in the tire lateral force capacity. The processor134is further programmed to calculate in real-time the adjusted tire state and the adjusted tire prediction model, in response to the processor134determining the degradation in the current tire state and/or the current tire prediction model. The processor134is programmed to determine offline a decrease in the first torque in the front axle116for a predetermined period of time below a time threshold. The processor134is further programmed to determine offline the degradation in the current tire state and the associated data indicative of the motion of the vehicle100, in response to the processor134determining the decrease in the first torque in the front axle116. The processor134is programmed to calculate in real-time the adjusted tire state and the adjusted tire prediction model, in response to the processor134determining the degradation in the current tire state and/or the current tire prediction model. The processor134is programmed to generate in real-time an actuation signal based on the adjusted tire state and the adjusted tire prediction model. In response to the actuators108(e.g., the first and second plane actuators140,142) receiving the actuation signal from the processor134(e.g., where the processor determines that the tire state and prediction model have degraded), the first plane actuator140increases the first torque by a predetermined front torque increment and the second plane actuator142decreases the second torque by a predetermined rear torque increment, such that the system102provides the vehicle100with a maximum lateral grip to permit the vehicle to travel along an intended path114without skidding. Without the system providing the real-time correction of degraded tire state and tire prediction models, the vehicle100may skid and travel along the unintended path106. Referring toFIG.2, one non-limiting example of a method200is provided for operating the computer of the system102for the vehicle ofFIG.1. The method200begins at block202with receiving, using the processor134of the computer104, the input signal from the input devices122(e.g., the Inertial Measurement Unit124, the Wheel Angle Sensor126, the Suspension Height Sensor128, the Global Positioning System130, the Wheel Speed Sensor132, etc.). The method200then proceeds to block204. At block204, the method200further includes determining, using the processor134, the current tire state and the current tire prediction model in response to the processor134receiving the input signal from the input devices122. More specifically, in this non-limiting example, the method200includes comparing, using the processor134, a first sign of the current tire state and a second sign of the current tire lateral force to one another. If the processor134does not determine that the first and second signs are opposite to one another, the method200proceeds to block206. If the processor134determines that the first and second signs are opposite to one another, the method returns to block202.
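When degradation is flagged, the mitigation described above shifts torque towards the front axle by fixed increments. A minimal sketch of that adjustment follows; the increment values and saturation limits are assumed, since the patent only speaks of predetermined increments.

```python
# Illustrative sketch of the mitigation step: when degradation is detected,
# shift torque towards the front axle by fixed increments. The increment
# values and saturation limits are assumed, not specified in the patent.

FRONT_TORQUE_INCREMENT = 50.0   # Nm, assumed calibration value
REAR_TORQUE_INCREMENT = 50.0    # Nm, assumed calibration value

def redistribute_torque(front_torque, rear_torque, degraded,
                        front_max=2500.0, rear_min=0.0):
    """Return adjusted (front, rear) axle torques."""
    if not degraded:
        return front_torque, rear_torque
    front = min(front_torque + FRONT_TORQUE_INCREMENT, front_max)
    rear = max(rear_torque - REAR_TORQUE_INCREMENT, rear_min)
    return front, rear

print(redistribute_torque(800.0, 600.0, degraded=True))  # (850.0, 550.0)
```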
At block206, the method200further includes determining, using the processor134, the tire slip ratio of the vehicle100and the tire lateral force capacity, in response to the processor134receiving the input signal from the input device122. If the processor134determines an increase in the tire slip ratio and an increase in the tire lateral force capacity the method200proceeds to block210. If the processor134does not determine the increase in the tire slip ratio and tire lateral force capacity, the method200proceeds to block208. At block208, the method200further includes determining, using the processor134, the tire normal force and the tire force capacity in response to the processor134receiving the input signal from the input device122. If the processor134determines the increase in current tire normal force and decrease in tire force capacity, the method200proceeds to block210. If the processor134does not determine the increase in current tire normal force and decrease in tire force capacity, the processor134returns to block202. At block210, the method200further includes determining, using the processor134, the degradation in the current tire state and/or the current tire prediction model, in response to the processor134determining that the current tire state and the current tire prediction model are not verified against the data indicative of the motion of the vehicle100(i.e., determining the degradation in the current tire state and/or the current tire prediction model), in response to the processor134determining the increase in the tire lateral force capacity. The method200further includes determining, using the processor134, the degradation in the current tire state and/or the current tire prediction model, in response to the processor134determining that the first and second signs are opposite to one another and the processor134further determining the increase in the tire lateral force capacity. The method200then proceeds to block212. At block212, the method200further includes calculating in real-time, using the processor134, the adjusted tire state and the adjusted tire prediction model based on the data indicative of the motion of the vehicle, in response to the processor134determining that the current tire state and the current tire prediction model are not verified against the data indicative of the motion of the vehicle100(i.e., determining a degradation in the current tire state and/or the current tire prediction model). In this non-limiting example, the processor134is programmed to use an Arbitration logic to calculate the adjusted tire state and the adjusted tire prediction model, in response to the processor134determining that the current tire state and the current tire prediction model are not verified against the data indicative of the motion of the vehicle. The method200then proceeds to block214. At block214, the method200further includes generating in real-time, using the processor134, the actuation signal based on the adjusted tire state and the adjusted tire prediction model. 
In response to the actuators108(e.g., the first and second plane actuators140,142) receiving the associated actuation signals from the processor134(e.g., where the processor134determines that the tire state and prediction model have degraded), the first plane actuator140increases the first torque by a predetermined front torque increment, and the second plane actuator142decreases the second torque by a predetermined rear torque increment, such that the method200provides the vehicle100with a maximum lateral grip to permit the vehicle100to travel along an intended path114(e.g., without skidding). Without the method providing the real-time correction of degraded tire state and tire prediction models, the vehicle100may skid and travel along an unintended path106. The method200then proceeds to block216. At block216, the method200further includes adjusting in real-time, using the actuator108, the motion of the vehicle in response to the actuator108receiving the actuation signal from the processor134. In this non-limiting example, in response to the actuators108(e.g., the first and second plane actuators140,142) receiving the actuation signal from the processor134(e.g., where the processor determines that the tire state and prediction model have degraded), the first plane actuator140increases the first torque by a predetermined front torque increment and the second plane actuator142decreases the second torque by a predetermined rear torque increment, such that the method200provides the vehicle100with a maximum lateral grip to permit the vehicle to travel along the intended path114spaced from the unintended path106. The flow charts provided in the present disclosure illustrate operations implemented by the system according to some exemplary embodiments of the present disclosure. It should be understood that the operations shown in the flow charts may be performed in a different order or performed simultaneously. In addition, one or more other operations can be added to the flow charts, and one or more operations can be removed from the flow charts. In the present disclosure, the term "autonomous driving vehicle" may refer to a vehicle that has the ability to perceive its environment, and automatically perceive, judge and make decisions based on the external environment without human (e.g., a driver, a pilot, etc.) input and/or intervention. The terms "autonomous driving vehicle" and "vehicle" can be used interchangeably herein. Moreover, although the system and method provided in the present disclosure mainly describe the vehicle motion control system and method that can be used for autonomous driving, it should be understood that these are only some exemplary embodiments. The system and method of the present disclosure can be applied to any other type of transportation system. For example, the system and method of the present disclosure may be applied to various transportation systems in different environments, including land, sea, aerospace, etc., or any combination thereof. The autonomous driving vehicles of a transportation system may include, but are not limited to, taxis, private cars, trailers, buses, trains, bullet trains, high-speed railways, subways, ships, airplanes, spacecraft, etc., or any combination thereof. In some exemplary embodiments, the system and method of the present disclosure can find applications in logistics warehouses and military affairs, for example.
In general, the computing systems and/or devices described may employ any of a number of computer operating systems, including, but by no means limited to, versions and/or varieties of the ANDROID AUTOMOTIVE OS developed by GOOGLE INC., the MICROSOFT WINDOWS operating system, the UNIX operating system (e.g., the SOLARIS operating system distributed by ORACLE Corporation of Redwood Shores, California), the AIX UNIX operating system distributed by INTERNATIONAL BUSINESS MACHINES of Armonk, New York, the LINUX operating system, the MAC OSX and iOS operating systems distributed by APPLE INC. of Cupertino, California, the BLACKBERRY OS distributed by BLACKBERRY LTD. of Waterloo, Canada, and the OPEN HANDSET ALLIANCE, or the QNX CAR Platform for Infotainment offered by QNX Software Systems. Examples of computing devices include, without limitation, an on board vehicle computer, a computer workstation, a server, a desktop, notebook, laptop, or handheld computer, or some other computing system and/or device. Computers and computing devices generally include computer executable instructions, where the instructions may be executable by one or more computing devices such as those listed above. Computer executable instructions may be compiled or interpreted from computer programs created using a variety of programming languages and/or technologies. Some of these applications may be compiled and executed on a virtual machine. In general, a processor (e.g., a microprocessor) receives instructions, e.g., from a memory, a computer readable medium, etc., and executes these instructions, thereby performing one or more processes, including one or more of the processes described herein. Such instructions and other data may be stored and transmitted using a variety of computer readable media. A file in a computing device is generally a collection of data stored on a computer readable medium, such as a storage medium, a random-access memory, etc. The non-transitory computer readable medium that participates in providing data (e.g., instructions) may be read by the computer (e.g., by a processor of a computer and may take many forms, including, but not limited to, non-volatile media and volatile media. Non-volatile media may include, for example, optical or magnetic disks and other persistent memory. Volatile media may include, for example, dynamic random-access memory, which typically constitutes a main memory. Such instructions may be transmitted by one or more transmission media, including coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to a processor of an Engine Control Unit. Databases, data repositories or other data stores described herein may include various kinds of mechanisms for storing, accessing, and retrieving various kinds of data, including a hierarchical database, a set of files in a file system, an application database in a proprietary format, a relational database management system, etc. Each such data store is generally included within a computing device employing a computer operating system, such as one of those mentioned above, and are accessed via a network in any one or more of a variety of manners. A file system may be accessible from a computer operating system and may include files stored in various formats. 
In some examples, system elements may be implemented as computer readable instructions (e.g., software) on one or more computing devices (e.g., servers, personal computers, etc.), stored on computer readable media associated therewith (e.g., disks, memories, etc.). A computer program product may comprise such instructions stored on computer readable media for carrying out the functions described herein. With regard to the media, processes, systems, methods, heuristics, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes may be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps may be performed simultaneously, that other steps may be added, or that certain steps described herein may be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain embodiments and should in no way be construed so as to limit the claims. Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent to those of skill in the art upon reading the above description. The scope of the invention should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the arts discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the invention is capable of modification and variation and is limited only by the following claims. All terms used in the claims are intended to be given their plain and ordinary meanings as understood by those skilled in the art unless an explicit indication to the contrary in made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary. The description of the present disclosure is merely exemplary in nature and variations that do not depart from the gist of the present disclosure are intended to be within the scope of the present disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the present disclosure. | 22,733 |
11858522 | DETAILED DESCRIPTION Hereinafter, some embodiments of the present disclosure will be described in detail with reference to the exemplary drawings. In adding the reference numerals to the components of each drawing, it should be noted that the identical or equivalent component is designated by the identical numeral even when they are displayed on other drawings. Further, in describing the embodiment of the present disclosure, a detailed description of the related known configuration or function will be omitted when it is determined that it interferes with the understanding of the embodiment of the present disclosure. In describing the components of the embodiment according to the present disclosure, terms such as first, second, A, B, (a), (b), and the like may be used. These terms are merely intended to distinguish the components from other components, and the terms do not limit the nature, order or sequence of the components. Unless otherwise defined, all terms including technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. FIG.1is a block diagram of a device for detecting a failure of an actuator of a vehicle according to an embodiment of the present disclosure. As shown inFIG.1, a device100for detecting a failure of an actuator of a vehicle according to an embodiment of the present disclosure may include storage10, an input device20, a training device30, and a controller40. In this connection, components may be combined with each other to be implemented as one component, or some components may be omitted based on a scheme for implementing the device100for detecting the failure of the actuator of the vehicle according to an embodiment of the present disclosure. In particular, the device100may be implemented such that a function of the training device30is performed by the controller40. Looking at each of the components, first, the storage10may store various required logic, algorithms, and programs required in a process of training a first model (an inference model) using first training data composed of behavior data of the vehicle and a steering compensation angle, training a second model (an inference model) using second training data composed of the steering compensation angle, which is an output value of the first model, lateral data, and a failure probability value of each actuator, and determining whether each actuator has failed based on the first model and the second model. In general, deep learning is a process of creating a computer model to identify, e.g., faces in CCTV footage, or product defects on a production line. Inference is the process of taking that model, deploying it onto a device, which will then process incoming data (usually images or video) to look for and identify whatever it has been trained to recognize. The storage10may store the first model (a pre-processing model) and the second model (a main model) whose learning has been completed by the training device30. 
Such storage10may include at least one type of recording media (storage media) of a memory of a flash memory type, a hard disk type, a micro type, a card type (e.g., a secure digital card (SD card) or an eXtream digital card (XD card)), and the like, and a memory of a random access memory (RAM), a static RAM (SRAM), a read-only memory (ROM), a programmable ROM (PROM), an electrically erasable PROM (EEPROM), a magnetic RAM (MRAM), a magnetic disk, and an optical disk type. The input device20may input behavior data (training data or test data) of the vehicle into a first model31, and input lateral data (training data or test data) into a second model32. In this connection, the behavior data of the vehicle may include a steering angle, a speed (a longitudinal speed) of the vehicle, and a longitudinal acceleration, and may further include a tractor yaw rate and a hitch angle when the vehicle is a tractor trailer. In this connection, the hitch angle means an angle between the tractor and the trailer. In addition, the lateral data may include a steering compensation angle δaffect, a lateral acceleration αlateral, and data lateralerroron a lateral error compared to a travel route of the vehicle. The training device30may train the first model (the inference model) using the first training data composed of the behavior data of the vehicle and the steering compensation angle, and may train the second model (the inference model) using the second training data composed of the steering compensation angle, which is the output value of the first model, the lateral data, and the failure probability value of each actuator. The controller40performs overall control such that the respective components may normally perform functions thereof. Such controller40may be implemented in a form of hardware, software, or a combination of the hardware and the software. The controller40may be implemented as a microprocessor or an electronic control unit, but may not be limited thereto. In particular, the controller40may control the training device30to train the first model31using the first training data composed of the behavior data of the vehicle and the steering compensation angle, and train the second model32using the second training data composed of the steering compensation angle, which is the output value of the first model, the lateral data, and the failure probability value of each actuator. The controller40may determine whether each actuator in the vehicle has failed based on the first model31and the second model32. That is, the controller40may detect the failure of each actuator in the vehicle. When the failure occurs in at least one actuator in the vehicle, the controller40may alert a driver. In this connection, when the vehicle is an autonomous vehicle, the controller40may request an autonomous driving system to perform redundancy travel. The controller40may acquire travel route information of the vehicle in association with a navigation system (not shown) included in the vehicle. The controller40may detect the lateral error compared to the travel route of the vehicle based on information acquired from various sensors (a lidar sensor, a radar sensor, a camera, and the like) included in the vehicle. That is, the controller40may generate the data on the lateral error compared to the travel route of the vehicle. The controller40may acquire the behavior data and the lateral data of the vehicle from the various sensors included in the vehicle. 
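For clarity, the two input groups described above can be collected into simple structures: the behavior data fed to the first model and the lateral data fed to the second model. Field names and units below are assumptions; at inference time the steering compensation angle field is filled with the first model's output.

```python
# Minimal sketch of the two input groups described above. Field names are
# illustrative; units are assumed (SI).
from dataclasses import dataclass

@dataclass
class BehaviorData:            # input to the first (pre-processing) model
    steering_angle: float      # rad
    longitudinal_speed: float  # m/s
    longitudinal_accel: float  # m/s^2
    tractor_yaw_rate: float    # rad/s, tractor-trailer case
    hitch_angle: float         # rad, angle between tractor and trailer

@dataclass
class LateralData:             # input to the second (main) model
    steering_compensation_angle: float  # output of the first model
    lateral_accel: float                # m/s^2
    lateral_error: float                # m, deviation from the travel route
```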
The controller40may acquire the behavior data of the vehicle through a vehicle network. In this connection, the vehicle network may include a controller area network (CAN), a controller area network with flexible data-rate (CAN FD), a local interconnect network (LIN), a FlexRay, a media oriented systems transport (MOST), an Ethernet, and the like. FIG.2is a detailed block diagram of a training device included in a device for detecting a failure of an actuator of a vehicle according to an embodiment of the present disclosure. As shown inFIG.2, the training device30included in the device for detecting the failure of the actuator of the vehicle according to an embodiment of the present disclosure may include the first model31and the second model32. The first model31may be implemented as a fully connected neural network (FCNN) as the preprocessing model, but is also able to be implemented as a convolution neural network (CNN) or a GoogleNet. Such first model31is the inference model, which may perform the learning by receiving the first training data composed of the behavior data of the vehicle and the steering compensation angle corresponding thereto from the input device20. In this connection, the first model31may perform the learning in a supervised learning scheme. In addition, when the learning is completed and applied to the vehicle, the first model31may receive the behavior data of the vehicle from the input device20and output an optimal steering compensation angle. For reference, because the tractor trailer has a form in which a towing vehicle (the tractor) and a towed vehicle (the trailer) are connected to each other, a change in dynamics of the towing vehicle affects the towed vehicle. Therefore, as shown inFIG.3, the first model31to which all neurons are connected is suitable for analyzing dynamics elements of a target vehicle and finding a correct value. The second model32may be implemented as a recurrent neural network (RNN) as the main model, but is also able to be implemented as a long short-term memory (LSTM). Such second model32is the inference model, which may perform the learning based on second training data composed of the steering compensation angle, which is the output value of the first model31, the lateral data, and the failure probability value of each actuator. In this connection, the second model32may perform learning in an unsupervised learning scheme. In addition, when the learning is completed and applied to the vehicle, the second model32may receive the steering compensation angle, which is the output value of the first model31, and the lateral data from the input device20, and output the failure probability value of each actuator. For reference, because data used to detect the failure of the actuator is sequence data, the second model32capable of processing the sequence data as shown inFIG.4is suitable. FIG.3is a detailed structural diagram of a first model included in a training device of a device for detecting a failure of an actuator of a vehicle according to an embodiment of the present disclosure. 
As shown inFIG.3, the first model31included in the training device30of the device for detecting the failure of the actuator of the vehicle according to an embodiment of the present disclosure may include an input layer that receives at least one of the steering angle, the speed (the longitudinal speed) of the vehicle, and the longitudinal acceleration, the tractor yaw rate, and/or the hitch angle, a hidden layer that processes a linear combination of variable values transmitted from the input layer as a nonlinear function, and an output layer that outputs the steering compensation angle δoffsetas the result of processing of the hidden layer. FIG.4is a detailed structural diagram of a second model included in a training device of a device for detecting a failure of an actuator of a vehicle according to an embodiment of the present disclosure. As shown inFIG.4, the second model32included in the training device30of the device for detecting the failure of the actuator of the vehicle according to an embodiment of the present disclosure may include an input layer that receives at least one of the steering compensation angle δoffset, which is the output of the first model31, the lateral acceleration αlateralof the vehicle as the lateral data, and/or the data lateralerroron the lateral error compared to the travel route of the vehicle, a hidden layer that processes a linear combination of variable values transmitted from the input layer as a nonlinear function, and an output layer that outputs the failure probability value of each actuator of the vehicle as the result of processing of the hidden layer. InFIG.4, each actuator may include at least one of a failure probability value Psteering errorof a steering actuator, a failure probability value Pacc errorof a driving actuator, and/or a failure probability value Pbrake errorof a braking actuator. In this connection, the steering actuator may include a steering device, the driving actuator may include an engine, a motor, and the like, and the braking actuator may include an anti lock brake system (ABS), an emergency braking system, a pneumatic braking device (an air brake system), and the like. FIG.5is a flowchart for a method for detecting a failure of an actuator of a vehicle according to an embodiment of the present disclosure. First, the training device30trains the first model using the first training data composed of the behavior data of the vehicle and the steering compensation angle (501). Thereafter, the training device30trains the second model using the second training data composed of the steering compensation angle, which is the output value of the first model, the lateral data, and the failure probability value of the actuator (502). Thereafter, the controller40detects the failure of the actuator in the vehicle based on the first model and the second model (503). FIG.6is a block diagram illustrating a computing system for executing a method for detecting a failure of an actuator of a vehicle according to an embodiment of the present disclosure. Referring toFIG.6, the method for detecting the failure of the actuator of the vehicle according to an embodiment of the present disclosure described above may also be implemented through a computing system. A computing system1000may include at least one processor1100, a memory1300, a user interface input device1400, a user interface output device1500, storage1600, and a network interface1700connected via a bus1200. 
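One possible concrete reading of FIGS.3 and 4 is sketched below in PyTorch: a small fully connected network mapping the five behavior inputs to the steering compensation angle, and an LSTM-based recurrent network mapping a sequence of the three lateral inputs to the three actuator failure probabilities. The layer sizes, the choice of an LSTM as the recurrent cell, and all hyperparameters are assumptions; the patent only fixes the inputs, the outputs and the general model families (FCNN and RNN).

```python
# Illustrative PyTorch sketch of the two inference models of FIGS. 3 and 4.
# Layer sizes, the LSTM cell and all hyperparameters are assumed values.
import torch
import torch.nn as nn

class SteeringCompensationModel(nn.Module):
    """First model 31: behavior data -> steering compensation angle."""
    def __init__(self, hidden=32):
        super().__init__()
        # 5 inputs: steering angle, vehicle speed, longitudinal acceleration,
        # tractor yaw rate and hitch angle.
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, behavior):          # behavior: (batch, 5)
        return self.net(behavior)

class ActuatorFailureModel(nn.Module):
    """Second model 32: sequence of lateral data -> failure probabilities."""
    def __init__(self, hidden=32):
        super().__init__()
        # 3 inputs per time step: steering compensation angle, lateral
        # acceleration and lateral error; 3 outputs: steering, driving and
        # braking actuator failure probabilities.
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, lateral_seq):       # lateral_seq: (batch, time, 3)
        out, _ = self.rnn(lateral_seq)
        return self.head(out[:, -1, :])

behavior = torch.randn(8, 5)
lateral_seq = torch.randn(8, 20, 3)
steering_offset = SteeringCompensationModel()(behavior)   # shape (8, 1)
print(ActuatorFailureModel()(lateral_seq).shape)           # torch.Size([8, 3])
```

Training would then follow steps 501 to 503: fit the first model in a supervised manner on recorded behavior data and steering compensation angles, feed its outputs together with the lateral data to the second model, and use the second model's outputs as the per-actuator failure probabilities.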
The processor1100may be a central processing unit (CPU) or a semiconductor device that performs processing on commands stored in the memory1300and/or the storage1600. The memory1300and the storage1600may include various types of volatile or non-volatile storage media. For example, the memory1300may include a ROM (Read Only Memory)1310and a RAM (Random Access Memory)1320. Thus, the operations of the method or the algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware or a software module executed by the processor1100, or in a combination thereof. The software module may reside on a storage medium (that is, the memory1300and/or the storage1600) such as a RAM, a flash memory, a ROM, an EPROM, an EEPROM, a register, a hard disk, a solid state driver (SSD), a removable disk, and a CD-ROM. The exemplary storage medium is coupled to the processor1100, which may read information from, and write information to, the storage medium. In another method, the storage medium may be integral with the processor1100. The processor and the storage medium may reside within an application specific integrated circuit (ASIC). The ASIC may reside within the user terminal. In another method, the processor and the storage medium may reside as individual components in the user terminal. The description above is merely illustrative of the technical idea of the present disclosure, and various modifications and changes may be made by those skilled in the art without departing from the essential characteristics of the present disclosure. Therefore, the embodiments disclosed in the present disclosure are not intended to limit the technical idea of the present disclosure but to illustrate the present disclosure, and the scope of the technical idea of the present disclosure is not limited by the embodiments. The scope of the present disclosure should be construed as being covered by the scope of the appended claims, and all technical ideas falling within the scope of the claims should be construed as being included in the scope of the present disclosure. The device and the method for detecting the failure of the actuator of the vehicle according to an embodiment of the present disclosure as described above may detect the failure of each actuator in the vehicle rapidly and accurately without the complicated calculation process by training the first model using the first training data composed of the behavior data of the vehicle and the steering compensation angle, training the second model using the second training data composed of the steering compensation angle, which is the output value of the first model, the lateral data, and the failure probability value of each actuator, and determining whether each actuator has failed based on the first model and the second model. Hereinabove, although the present disclosure has been described with reference to exemplary embodiments and the accompanying drawings, the present disclosure is not limited thereto, but may be variously modified and altered by those skilled in the art to which the present disclosure pertains without departing from the spirit and scope of the present disclosure claimed in the following claims. | 16,682 |
11858523 | DESCRIPTION OF EMBODIMENTS An exemplary embodiment will be described hereinafter in detail with reference to the drawings. A “traveling device” described later in this embodiment refers to devices such as actuators and sensors that are controlled when a vehicle1is travelling. FIG.1schematically illustrates a configuration of the vehicle1that is controlled by a vehicle travel control device100(hereinafter referred to as a travel control device100, as illustrated inFIG.4and the travelling device ofFIG.5) according to this embodiment. The vehicle1is an automobile capable of operating in manual driving of traveling in accordance with an operation of, for example, an accelerator by a driver, assisted driving of traveling while assisting an operation by the driver, and automated driving of traveling without an operation by the driver. The vehicle1includes an engine10serving as a driving source and including a plurality of (four in this embodiment) cylinders11, a transmission20coupled to the engine10, a braking device30that brakes rotation of front wheels50as driving wheel, and a steering device40that steers the front wheels50as steered wheels. The engine10is, for example, a gasoline engine. As illustrated inFIG.2, each of the cylinders11of the engine10is provided with an injector12for supplying fuel into the cylinder11and an ignition plug13for igniting an air-fuel mixture of fuel and intake air supplied into the cylinder11. The engine10includes, for each of the cylinders11, an intake valve14, an exhaust valve15, and a valve mechanism16for adjusting opening/closing operations of the intake valve14and the exhaust valve15. The engine10also includes a piston17that reciprocates in the cylinders11, and a crankshaft18coupled to the piston17through a connecting rod. The engine10may be a diesel engine. In the case where the engine10is the diesel engine, the ignition plug13may not be provided. The injector12, the ignition plug13, and the valve mechanism16are examples of power train-related devices. The transmission20is, for example, a multistep automatic transmission. The transmission20is disposed at one side of the cylinder line of the engine10. The transmission20includes an input shaft (not shown) coupled to the crankshaft18of the engine10and an output shaft (not shown) coupled to the input shaft through a plurality of speed-reducing gears (not shown). The output shaft is coupled to an axle51of the front wheels50. Rotation of the crankshaft18is subjected to a gear shift by the transmission20, and is transferred to the front wheels50. The transmission20is an example of a power train-related device. The engine10and the transmission20are power train devices that generate a driving force for enabling the vehicle1to travel. Actuation of the engine10and the transmission20is controlled by a power train electric control unit (ECU)200, which includes programmable circuitry to execute power train related calculations and output control signals that control an operation of the power train. As used herein, the term “circuitry” may be one or more circuits that optionally include programmable circuitry. 
For example, while the vehicle1is in the manual driving, the power train ECU200controls, for example, a fuel injection amount and a fuel injection timing by the injector12, an ignition timing by the ignition plug13, and valve open timings and valve open periods of the intake and exhaust valves14and15by the valve mechanism16, based on detection values of, for example, an accelerator opening sensor SW1for detecting an accelerator opening corresponding to a manipulated variable of an accelerator pedal by the driver. While the vehicle1is in the manual driving, the power train ECU200adjusts a gear stage of the transmission20based on a detection result of a shift sensor SW2for detecting an operation of a shift lever by the driver and a required driving force calculated from an accelerator opening. While the vehicle1is in the assisted driving or the automated driving, the power train ECU200basically calculates controlled variables of traveling devices (e.g., the injector12in this embodiment) such that a target driving force calculated by a computation device110described later can be obtained, and outputs a control signal to the traveling devices. The power train ECU200is an example of a device controller, or device control circuitry. The braking device30includes a brake pedal31, a brake actuator33, a booster34connected to the brake actuator33, a master cylinder35connected to the booster34, a dynamic stability control (DSC) device36(or DSC circuitry) for adjusting a braking force, and brake pads37for actually braking rotation of the front wheels50. The axle51of the front wheels50is provided with disc rotors52. The braking device30is an electric brake, and actuates the brake actuator33in accordance with a manipulated variable of the brake pedal31detected by a brake sensor SW3, and actuates the brake pads37through the booster34and the master cylinder35. The braking device30causes the disc rotors52to be sandwiched by the brake pads37and brakes rotation of the front wheels50by a friction force occurring between the brake pads37and the disc rotors52. The brake actuator33and the DSC device36are examples of brake-related devices. Actuation of the braking device30is controlled by a brake microcomputer300and a DSC microcomputer400, also referred to as brake control circuitry and DSC circuitry, for example. For example, while the vehicle1is in the manual driving, the brake microcomputer300controls a manipulated variable of the brake actuator33based on detection values of, for example, the brake sensor SW3for detecting a manipulated variable of the brake pedal31by the driver. The DSC microcomputer400controls actuation of the DSC device36irrespective of operation of the brake pedal31by the driver, and applies a braking force to the front wheels50. While the vehicle1is in the assisted driving or the automated driving, the brake microcomputer300basically calculates controlled variables of traveling devices (e.g., the brake actuator33in this embodiment) such that a target braking force calculated by the computation device110described later can be obtained, and outputs control signals to the traveling devices. The brake microcomputer300and the DSC microcomputer400are examples of a device controller. The brake microcomputer300and the DSC microcomputer400may be constituted by one microcomputer. 
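As a simplified illustration of what calculating controlled variables "such that a target driving force can be obtained" involves, the sketch below converts a target driving force at the wheels into an engine torque request via the driveline ratio; the further mapping to injection amount, ignition timing and valve timing is engine-specific and omitted. Wheel radius, ratios and efficiency are assumed example values, not figures from this embodiment.

```python
# Very simplified sketch of the kind of conversion a power train controller
# performs when a target driving force is requested. All constants are
# assumed example values.

WHEEL_RADIUS = 0.33          # m, assumed
FINAL_DRIVE_RATIO = 3.9      # assumed
DRIVELINE_EFFICIENCY = 0.92  # assumed

def engine_torque_request(target_driving_force, gear_ratio):
    """Engine torque [Nm] needed to realise the target driving force [N]."""
    wheel_torque = target_driving_force * WHEEL_RADIUS
    return wheel_torque / (gear_ratio * FINAL_DRIVE_RATIO * DRIVELINE_EFFICIENCY)

# Example: 3 kN of driving force in a gear with ratio 1.5.
print(round(engine_torque_request(3000.0, gear_ratio=1.5), 1))  # ~184 Nm
```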
The steering device40includes a steering wheel41that is operated by the driver, an electronic power assist steering (EPAS) device42(or EPAS circuitry, such as a microcomputer) for assisting a steering operation by the driver, and a pinion shaft43coupled to the EPAS device42. The EPAS device42includes an electric motor42aand a speed reducer42bthat reduces the speed of a driving force of the electric motor42aand transfers the resulting driving force to the pinion shaft43. The steering device40is a steer-by-wire steering device, and actuates the EPAS device42in accordance with a manipulated variable of the steering wheel41detected by a steering angle sensor SW4, and operates the front wheels50by rotating the pinion shaft43. The pinion shaft43and the front wheels50are coupled to each other by an unillustrated rack bar, and rotation of the pinion shaft43is transferred to the front wheels through the rack bar. The EPAS device42is an example of a steering-related device. Actuation of the steering device40is controlled by the EPAS microcomputer500. For example, while the vehicle1is in the manual driving, the EPAS microcomputer500controls a manipulated variable of the electric motor42abased on detection values of, for example, the steering angle sensor SW4. While the vehicle1is in the assisted driving or the automated driving, the EPAS microcomputer500basically calculates controlled variables of traveling devices (e.g., the EPAS device42in this embodiment) such that a target steering variable calculated by the computation device110described later can be obtained, and outputs control signals to the traveling devices. The EPAS microcomputer500is an example of a device controller. Although specifically described later, in this embodiment, the power train ECU200, the brake microcomputer300, the DSC microcomputer400, and the EPAS microcomputer500are configured to be communicable with one another. In the following description, the power train ECU200, the brake microcomputer300, the DSC microcomputer400, and the EPAS microcomputer500will be simply referred to as device controllers, or device control circuitry. In this embodiment, to enable the assisted driving and the automated driving, the travel control device100includes a computation device110(FIGS.3and4) that calculates a route on which the vehicle1is to travel and that determines a motion of the vehicle1for following the route. The computation device110is a computation hardware including one or more chips. Specifically, as illustrated inFIG.3, the computation device110includes a memory and a processor including a CPU and a plurality of memory modules (compartmentalized memory that holds different computer code that is readably and executable by the processor). FIG.4illustrates a configuration for implementing a function (a route generating function described later) according to this embodiment in further detail.FIG.4does not illustrate all the functions of the computation device110. The computation device110determines a target motion of the vehicle1and controls actuation of a device base on an output from, for example, a plurality of sensors. 
The sensors for outputting information to the computation device110, for example, include: a plurality of cameras70disposed on, for example, the body of the vehicle1and used for taking images of vehicle outdoor environments; a plurality of radars71disposed on, for example, the body of the vehicle1and used for detecting an object outside the vehicle and other objects; a position sensor SW5for detecting a position of the vehicle1(vehicle position information) by utilizing global positioning system (GPS); a vehicle state sensor SW6constituted by outputs of sensors for detecting a vehicle behavior, such as a vehicle speed sensor, an acceleration sensor, and a yaw rate sensor, and used for acquiring a state of the vehicle1; and an occupant state sensor SW7constituted by, for example, an in-car camera and used for acquiring a state of an occupant of the vehicle1. The computation device110receives communication information received by a vehicle outside communicator72and sent from another vehicle around the own vehicle, and traffic information received by the vehicle outside communicator72and sent from a navigation system. Each of the cameras70is disposed to capture an image around the vehicle1by 360° horizontally. Each camera70captures an optical image representing vehicle outdoor environments and generates image data. Each camera70outputs the generated image data to the computation device110. The cameras70are examples of an image acquirer that acquires information on vehicle outdoor environments. The image data acquired by the cameras70is input to a human machine interface (HMI) unit700as well as the computation device110. The HMI unit700displays information based on the acquired image data on, for example, a display device. In a manner similar to the cameras70, each of the radars71is disposed to detect an image in a range around the vehicle1by 360° horizontally. The radars71are not limited to a specific type, and a millimeter wave radar or an infrared ray radar may be employed. The radars71are an example of the image acquirer that acquires vehicle outdoor environments. While the vehicle1is in the assisted driving or the automated driving, the computation device110sets a travel route of the vehicle1and sets a target motion of the vehicle1such that the vehicle1follows the travel route. 
The computation device110includes: a vehicle outdoor environment identifier111(or vehicle outdoor environment identifier circuitry) that identifies a vehicle outdoor environment based on an output from, for example, the cameras70in order to set a target motion of the vehicle1; a candidate route generator112(or candidate route generation circuitry112) that calculates one or more candidate routes on which the vehicle1is capable of traveling, in accordance with the vehicle outdoor environment determined by the vehicle outdoor environment identifier111(or vehicle outdoor environment identification circuitry); a vehicle behavior estimator113(or vehicle behavior estimation circuitry) that estimates a behavior of the vehicle1based on an output from the vehicle state sensor SW6; an occupant behavior estimator114(or occupant behavior estimation circuitry) that estimates a behavior of an occupant of the vehicle1based on an output from the occupant state sensor SW7; a route determiner115(or route determination circuitry) that determines a route on which the vehicle1is to travel; a vehicle motion determiner116(vehicle motion determination circuitry) that determines a target motion of the vehicle1in order to allow the vehicle1to follow the route set by the route determiner115; and a driving force calculator117(driving force calculation circuitry), a braking force calculator118(brake force calculation circuitry), and a steering variable calculator119(or steering variable calculation circuitry) that calculate target physical quantities (e.g., a driving force, a braking force, and a steering angle) to be generated by the traveling devices in order to obtain the target motion determined by the vehicle motion determiner116. The candidate route generator112, the vehicle behavior estimator113, the occupant behavior estimator114, and the route determiner115constitute a route setter that sets a route on which the vehicle1is to travel, in accordance with the vehicle outdoor environment identified by the vehicle outdoor environment identifier111. As will be described in more detail below, the computation device110includes the vehicle outdoor environment identifier111(aspects of which are further described in U.S. application Ser. No. 17/120,292 filed Dec. 14, 2020, and U.S. application Ser. No. 17/160,426 filed Jan. 28, 2021, the entire contents of each of which being incorporated herein by reference), an occupant behavior estimator114(aspects of which are further described in U.S. application Ser. No. 17/103,990 filed Nov. 25, 2020, the entire contents of which being incorporated herein by reference), a route determiner115(aspects of which are further described in more detail in U.S. application Ser. No. 17/161,691, filed 29 Jan. 2021, U.S. application Ser. No. 17/161,686, filed 29 Jan. 2021, and U.S. application Ser. No. 17/161,683, the entire contents of each of which being incorporated herein by reference), a vehicle motion determiner116(aspects of which are further described in more detail in U.S. application Ser. No. 17/159,178, filed Jan. 27, 2021, the entire contents of which being incorporated herein by reference), and a candidate route generator112(aspects of which are further described in more detail in U.S. application Ser. No. 17/159,178, supra). The computation device110includes other features as well, as will be discussed herein. 
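The modules enumerated above amount to a single dataflow from sensing to target physical quantities: identify the vehicle outdoor environment, set a route, determine a target motion, and calculate the target physical quantities that are handed to the device controllers. The following is a minimal sketch of that ordering only; every function body, name, and numeric value is a hypothetical placeholder rather than the embodiment's actual processing.

```python
# Sketch of the dataflow through the computation device described above, with
# trivially stubbed stages: identify environment -> set route -> determine
# target motion -> calculate target physical quantities. Only the ordering and
# the names of the stages mirror the text; the stub bodies are assumptions.

def identify_outdoor_environment(camera_frames, radar_returns):   # identifier 111
    return {"road": "straight", "obstacles": radar_returns}

def set_route(environment):                                       # route setter 112-115
    return {"waypoints": [(0.0, 0.0), (10.0, 0.2)],
            "avoids": environment["obstacles"]}

def determine_target_motion(route):                               # motion determiner 116
    return {"target_speed_mps": 15.0, "target_yaw_rate": 0.01}

def calculate_target_physical_quantities(target_motion):          # calculators 117-119
    return {
        "target_driving_force_n": 800.0,
        "target_braking_force_n": 0.0,
        "target_steering_variable_deg": 1.5,
    }

def run_computation_cycle(camera_frames, radar_returns):
    env = identify_outdoor_environment(camera_frames, radar_returns)
    route = set_route(env)
    motion = determine_target_motion(route)
    return calculate_target_physical_quantities(motion)

if __name__ == "__main__":
    print(run_computation_cycle(camera_frames=[], radar_returns=[{"x": 30.0, "v": -2.0}]))
```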
The computation device110also includes, as safety functions, a rule-based route generator120(or rule-based route generation circuitry) that identifies an object outside the vehicle according to a predetermined rule and generates a travel route that avoids the object, and a backup130that generates a travel route for guiding the vehicle1to a safe area such as a road shoulder. The vehicle outdoor environment identifier111, the candidate route generator112, the vehicle behavior estimator113, the occupant behavior estimator114, the route determiner115, the vehicle motion determiner116, the driving force calculator117, the braking force calculator118, the steering variable calculator119, the rule-based route generator120, and the backup130are examples of modules stored in a memory102, and include the computer readable code that, when executed by a processor, provides the processor with the structure to perform the relevant functions. FIG.7illustrates a block diagram of a computer that may implement the various embodiments described herein. The present disclosure may be embodied as a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium on which computer readable program instructions are recorded that may cause one or more processors to carry out aspects of the embodiment. The computer readable storage medium may be a tangible device that can store instructions for use by an instruction execution device (processor). The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any appropriate combination of these devices. A non-exhaustive list of more specific examples of the computer readable storage medium includes each of the following (and appropriate combinations): flexible disk, hard disk, solid-state drive (SSD), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or Flash), static random access memory (SRAM), compact disc (CD or CD-ROM), digital versatile disk (DVD) and memory card or stick. A computer readable storage medium, as used in this disclosure, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described in this disclosure can be downloaded to an appropriate computing or processing device from a computer readable storage medium or to an external computer or external storage device via a global network (i.e., the Internet), a local area network, a wide area network and/or a wireless network. The network may include copper transmission wires, optical communication fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing or processing device may receive computer readable program instructions from the network and forward the computer readable program instructions for storage in a computer readable storage medium within the computing or processing device. 
Computer readable program instructions for carrying out operations of the present disclosure may include machine language instructions and/or microcode, which may be compiled or interpreted from source code written in any combination of one or more programming languages, including assembly language, Basic, Fortran, Java, Python, R, C, C++, C# or similar programming languages. The computer readable program instructions may execute entirely on a user's personal computer, notebook computer, tablet, or smartphone, entirely on a remote computer or computer server, or any combination of these computing devices. The remote computer or computer server may be connected to the user's device or devices through a computer network, including a local area network or a wide area network, or a global network (i.e., the Internet). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by using information from the computer readable program instructions to configure or customize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flow diagrams and block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood by those skilled in the art that each block of the flow diagrams and block diagrams, and combinations of blocks in the flow diagrams and block diagrams, can be implemented by computer readable program instructions. The computer readable program instructions that may implement the systems and methods described in this disclosure may be provided to one or more processors (and/or one or more cores within a processor) of a general purpose computer, special purpose computer, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create a system for implementing the functions specified in the flow diagrams and block diagrams in the present disclosure. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having stored instructions is an article of manufacture including instructions which implement aspects of the functions specified in the flow diagrams and block diagrams in the present disclosure. The computer readable program instructions may also be loaded onto a computer, other programmable apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions specified in the flow diagrams and block diagrams in the present disclosure. FIG.7is a functional block diagram illustrating a networked system800of one or more networked computers and servers. In an embodiment, the hardware and software environment illustrated inFIG.7may provide an exemplary platform for implementation of the software and/or methods according to the present disclosure. 
Referring toFIG.7, a networked system800may include, but is not limited to, computer805, network810, remote computer815, web server820, cloud storage server825and computer server830. In some embodiments, multiple instances of one or more of the functional blocks illustrated inFIG.7may be employed. Additional detail of computer805is shown inFIG.7. The functional blocks illustrated within computer805are provided only to establish exemplary functionality and are not intended to be exhaustive. And while details are not provided for remote computer815, web server820, cloud storage server825and computer server830, these other computers and devices may include similar functionality to that shown for computer805. Computer805may be a personal computer (PC), a desktop computer, laptop computer, tablet computer, netbook computer, a personal digital assistant (PDA), a smart phone, or any other programmable electronic device capable of communicating with other devices on network810. Computer805may include processor835, bus837, memory840, non-volatile storage845, network interface850, peripheral interface855and display interface865. Each of these functions may be implemented, in some embodiments, as individual electronic subsystems (integrated circuit chip or combination of chips and associated devices), or, in other embodiments, some combination of functions may be implemented on a single chip (sometimes called a system on chip or SoC). Processor835may be one or more single or multi-chip microprocessors, such as those designed and/or manufactured by Intel Corporation, Advanced Micro Devices, Inc. (AMD), Arm Holdings (Arm), Apple Computer, etc. Examples of microprocessors include Celeron, Pentium, Core i3, Core i5 and Core i7 from Intel Corporation; Opteron, Phenom, Athlon, Turion and Ryzen from AMD; and Cortex-A, Cortex-R and Cortex-M from Arm.Bus837may be a proprietary or industry standard high-speed parallel or serial peripheral interconnect bus, such as ISA, PCI, PCI Express (PCI-e), AGP, and the like.Memory840and non-volatile storage845may be computer-readable storage media. Memory840may include any suitable volatile storage devices such as Dynamic Random Access Memory (DRAM) and Static Random Access Memory (SRAM). Non-volatile storage845may include one or more of the following: flexible disk, hard disk, solid-state drive (SSD), read-only memory (ROM), erasable programmable read-only memory (EPROM or Flash), compact disc (CD or CD-ROM), digital versatile disk (DVD) and memory card or stick. Program848may be a collection of machine readable instructions and/or data that is stored in non-volatile storage845and is used to create, manage and control certain software functions that are discussed in detail elsewhere in the present disclosure and illustrated in the drawings. In some embodiments, memory840may be considerably faster than non-volatile storage845. In such embodiments, program848may be transferred from non-volatile storage845to memory840prior to execution by processor835. Computer805may be capable of communicating and interacting with other computers via network810through network interface850. Network810may be, for example, a local area network (LAN), a wide area network (WAN) such as the Internet, or a combination of the two, and may include wired, wireless, or fiber optic connections. In general, network810can be any combination of connections and protocols that support communications between two or more computers and related devices. 
Peripheral interface855may allow for input and output of data with other devices that may be connected locally with computer805. For example, peripheral interface855may provide a connection to external devices860. External devices860may include devices such as a keyboard, a mouse, a keypad, a touch screen, and/or other suitable input devices. External devices860may also include portable computer-readable storage media such as, for example, thumb drives, portable optical or magnetic disks, and memory cards. Software and data used to practice embodiments of the present disclosure, for example, program848, may be stored on such portable computer-readable storage media. In such embodiments, software may be loaded onto non-volatile storage845or, alternatively, directly into memory840via peripheral interface855. Peripheral interface855may use an industry standard connection, such as RS-232 or Universal Serial Bus (USB), to connect with external devices860. Display interface865may connect computer805to display870. Display870may be used, in some embodiments, to present a command line or graphical user interface to a user of computer805. Display interface865may connect to display870using one or more proprietary or industry standard connections, such as VGA, DVI, DisplayPort and HDMI. As described above, network interface850, provides for communications with other computing and storage systems or devices external to computer805. Software programs and data discussed herein may be downloaded from, for example, remote computer815, web server820, cloud storage server825and computer server830to non-volatile storage845through network interface850and network810. Furthermore, the systems and methods described in this disclosure may be executed by one or more computers connected to computer805through network interface850and network810. For example, in some embodiments the systems and methods described in this disclosure may be executed by remote computer815, computer server830, or a combination of the interconnected computers on network810. Data, datasets and/or databases employed in embodiments of the systems and methods described in this disclosure may be stored and or downloaded from remote computer815, web server820, cloud storage server825and computer server830. <Vehicle Outdoor Environment Identifier> The vehicle outdoor environment identifier111receives outputs from, for example, the cameras70and the radars71mounted on the vehicle1, and identifies a vehicle outdoor environment. The identified vehicle outdoor environment includes at least a road and an obstacle. In this embodiment, the vehicle outdoor environment identifier111compares three-dimensional information on surroundings of the vehicle1with a vehicle outdoor environment model, based on data of the cameras70and the radars71to thereby estimate a vehicle environment including a road and an obstacle. The vehicle outdoor environment model is, for example, a learned model generated by deep learning, and is capable of recognizing a road, an obstacle, and other objects with respect to three-dimensional information on surroundings of the vehicle. In a non-limiting example, a process is described about how a learned model is trained, according to the present teachings. The example will be in the context of a vehicle external environment estimation circuitry (e.g., a trained model saved in a memory and applied by a computer). 
However, other aspects of the trained model for object detection/avoidance, route generation, controlling steering, braking, etc., are implemented via similar processes to acquire the learned models used in the components of the computational device110. Hereinafter, a part of a process for determining how a computing device1000calculates a route path (R2, R13, R12, or R11, for example, on a road5) in the presence of an obstacle3(another vehicle) surrounded by a protection zone (see the dashed line that encloses the unshaded area) will be explained. In this example, the obstacle3is a physical vehicle that has been captured by a forward looking camera from the trailing vehicle1. The model is hosted in a single information processing unit (or single information processing circuitry). First, by referring toFIG.8, a configuration of the computing device1000will be explained. The computing device1000may include a data extraction network2000and a data analysis network3000. Further, as illustrated inFIG.10, the data extraction network2000may include at least one first feature extracting layer2100, at least one Region-Of-Interest (ROI) pooling layer2200, at least one first outputting layer2300and at least one data vectorizing layer2400. And, as also illustrated inFIG.8, the data analysis network3000may include at least one second feature extracting layer3100and at least one second outputting layer3200. Below, an aspect of calculating a safe route (e.g., R13) around a protection zone that surrounds the obstacle will be explained. Moreover, the specific aspect is to learn a model to detect obstacles (e.g., vehicle3) on a roadway, and also estimate relative distance to a superimposed protection range that has been electronically superimposed about the vehicle3in the image. To begin with, a first embodiment of the present disclosure will be presented. First, the computing device1000may acquire at least one subject image that includes a superimposed protection zone about the subject vehicle3. By referring toFIG.9, the subject image may correspond to a scene of a highway, photographed from a vehicle1that is approaching another vehicle3from behind on a three-lane highway. After the subject image is acquired, in order to generate a source vector to be inputted to the data analysis network3000, the computing device1000may instruct the data extraction network2000to generate the source vector including (i) an apparent distance, which is a distance from a front of vehicle1to a back of the protection zone surrounding vehicle3, and (ii) an apparent size, which is a size of the protection zone. In order to generate the source vector, the computing device1000may instruct at least part of the data extraction network2000to detect the obstacle3(vehicle) and protection zone. Specifically, the computing device1000may instruct the first feature extracting layer2100to apply at least one first convolutional operation to the subject image, to thereby generate at least one subject feature map. Thereafter, the computing device1000may instruct the ROI pooling layer2200to generate one or more ROI-Pooled feature maps by pooling regions on the subject feature map, corresponding to ROIs on the subject image which have been acquired from a Region Proposal Network (RPN) interworking with the data extraction network2000. And, the computing device1000may instruct the first outputting layer2300to generate at least one estimated obstacle location and one estimated protection zone region. 
That is, the first outputting layer2300may perform a classification and a regression on the subject image, by applying at least one first Fully-Connected (FC) operation to the ROI-Pooled feature maps, to generate each of the estimated obstacle location and protection zone region, including information on coordinates of each of bounding boxes. Herein, the bounding boxes may include the obstacle and a region around the obstacle (protection zone). After such detecting processes are completed, by using the estimated obstacle location and the estimated protection zone location, the computing device1000may instruct the data vectorizing layer2400to subtract a y-axis coordinate (distance in this case) of an upper bound of the obstacle from a y-axis coordinate of the closer boundary of the protection zone to generate the apparent distance, and multiply a distance of the protection zone and a horizontal width of the protection zone to generate the apparent size of the protection zone. After the apparent distance and the apparent size are acquired, the computing device1000may instruct the data vectorizing layer2400to generate at least one source vector including the apparent distance and the apparent size as its at least part of components. Then, the computing device1000may instruct the data analysis network3000to calculate an estimated actual protection zone by using the source vector. Herein, the second feature extracting layer3100of the data analysis network3000may apply second convolutional operation to the source vector to generate at least one source feature map, and the second outputting layer3200of the data analysis network3000may perform a regression, by applying at least one FC operation to the source feature map, to thereby calculate the estimated protection zone. As shown above, the computing device1000may include two neural networks, i.e., the data extraction network2000and the data analysis network3000. The two neural networks should be trained to perform the processes properly, and thus below it is described how to train the two neural networks by referring toFIG.10andFIG.11. First, by referring toFIG.10, the data extraction network2000may have been trained by using (i) a plurality of training images corresponding to scenes of subject roadway conditions for training, photographed from fronts of the subject vehicles for training, including images of their corresponding projected protection zones (protection zones superimposed around a forward vehicle, which is an “obstacle” on a roadway) for training and images of their corresponding grounds for training, and (ii) a plurality of their corresponding GT obstacle locations and GT protection zone regions. The protection zones do not occur naturally, but are previously superimposed about the vehicle3via another process, perhaps a bounding box by the camera. More specifically, the data extraction network2000may have applied aforementioned operations to the training images, and have generated their corresponding estimated obstacle locations and estimated protection zone regions. 
Then, (i) each of obstacle pairs of each of the estimated obstacle locations and each of their corresponding GT obstacle locations and (ii) each of obstacle pairs of each of the estimated protection zone locations associated with the obstacles and each of the GT protection zone locations may have been referred to, in order to generate at least one vehicle path loss and at least one distance loss, by using any of loss generating algorithms, e.g., a smooth-L1 loss algorithm and a cross-entropy loss algorithm. Thereafter, by referring to the distance loss and the path loss, backpropagation may have been performed to learn at least part of parameters of the data extraction network2000. Parameters of the RPN can be trained also, but a usage of the RPN is a well-known prior art, thus further explanation is omitted. Herein, the data vectorizing layer2400may have been implemented by using a rule-based algorithm, not a neural network algorithm. In this case, the data vectorizing layer2400may not need to be trained, and may just be able to perform properly by using its settings inputted by a manager. As an example, the first feature extracting layer2100, the ROI pooling layer2200and the first outputting layer2300may be acquired by applying a transfer learning, which is a well-known prior art, to an existing object detection network such as VGG or ResNet, etc. Second, by referring toFIG.11, the data analysis network3000may have been trained by using (i) a plurality of source vectors for training, including apparent distances for training and apparent sizes for training as their components, and (ii) a plurality of their corresponding GT protection zones. More specifically, the data analysis network3000may have applied aforementioned operations to the source vectors for training, to thereby calculate their corresponding estimated protection zones for training. Then each of distance pairs of each of the estimated protection zones and each of their corresponding GT protection zones may have been referred to, in order to generate at least one distance loss, by using any of said loss algorithms. Thereafter, by referring to the distance loss, backpropagation can be performed to learn at least part of parameters of the data analysis network3000. After performing such training processes, the computing device1000can properly calculate the estimated protection zone by using the subject image including the scene photographed from the front of the subject roadway. Hereafter, another embodiment will be presented. A second embodiment is similar to the first embodiment, but different from the first embodiment in that the source vector thereof further includes a tilt angle, which is an angle between an optical axis of a camera which has been used for photographing the subject image (e.g., the subject obstacle) and a distance to the obstacle. Also, in order to calculate the tilt angle to be included in the source vector, the data extraction network of the second embodiment may be slightly different from that of the first one. In order to use the second embodiment, it should be assumed that information on a principal point and focal lengths of the camera are provided. Specifically, in the second embodiment, the data extraction network2000may have been trained to further detect lines of a road in the subject image, to thereby detect at least one vanishing point of the subject image. 
Herein, the lines of the road may denote lines representing boundaries of the road on which the obstacle is located in the subject image, and the vanishing point may denote where extended lines generated by extending the lines of the road, which are parallel in the real world, are gathered. As an example, through processes performed by the first feature extracting layer2100, the ROI pooling layer2200and the first outputting layer2300, the lines of the road may be detected. After the lines of the road are detected, the data vectorizing layer2400may find at least one point where the most extended lines are gathered, and determine it as the vanishing point. Thereafter, the data vectorizing layer2400may calculate the tilt angle by referring to information on the vanishing point, the principal point and the focal lengths of the camera by using the following formula:

θtilt = atan2(vy − cy, fy)

In the formula, vy may denote a y-axis (distance direction) coordinate of the vanishing point, cy may denote a y-axis coordinate of the principal point, and fy may denote a y-axis focal length. Using such formula to calculate the tilt angle is a well-known prior art, thus more specific explanation is omitted. After the tilt angle is calculated, the data vectorizing layer2400may set the tilt angle as a component of the source vector, and the data analysis network3000may use such source vector to calculate the estimated protection zone. In this case, the data analysis network3000may have been trained by using the source vectors for training additionally including tilt angles for training. For a third embodiment which is mostly similar to the first one, some information acquired from a subject obstacle DB storing information on subject obstacles, including the subject obstacle, can be used for generating the source vector. That is, the computing device1000may acquire structure information on a structure of the subject vehicle, e.g., 4 doors, vehicle base length of a certain number of feet, from the subject obstacle DB. Or, the computing device1000may acquire topography information on a topography of a region around the subject vehicle, e.g., hill, flat, bridge, etc., from location information for the particular roadway. Herein, at least one of the structure information and the topography information can be added to the source vector by the data vectorizing layer2400, and the data analysis network3000, which has been trained by using the source vectors for training additionally including corresponding information, i.e., at least one of the structure information and the topography information, may use such source vector to calculate the estimated protection zone. As a fourth embodiment, the source vector, generated by using any of the first to the third embodiments, can be concatenated channel-wise to the subject image or its corresponding subject segmented feature map, which has been generated by applying an image segmentation operation thereto, to thereby generate a concatenated source feature map, and the data analysis network3000may use the concatenated source feature map to calculate the estimated protection zone. An example configuration of the concatenated source feature map may be shown inFIG.12. In this case, the data analysis network3000may have been trained by using a plurality of concatenated source feature maps for training including the source vectors for training, other than using only the source vectors for training. 
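As a recap of the rule-based data vectorizing layer2400described above, the following is a minimal sketch of how the apparent distance, the apparent size, and the tilt angle could be assembled into a source vector. The bounding-box convention (x_min, y_min, x_max, y_max), the choice of which edge counts as the closer boundary of the protection zone, and all numeric values are assumptions made only for illustration; the tilt-angle line simply applies the atan2 formula quoted above.

```python
import math

# Sketch of the rule-based data vectorizing layer 2400 described above.
# Bounding boxes are assumed to be (x_min, y_min, x_max, y_max) in image
# coordinates with y increasing downward; treating the lower edge of the
# protection zone as its "closer boundary" is an assumption for illustration.

def apparent_distance(obstacle_box, zone_box):
    # Subtract the y-coordinate of the upper bound of the obstacle from the
    # y-coordinate of the closer boundary of the protection zone.
    _, obstacle_y_min, _, _ = obstacle_box
    _, _, _, zone_y_max = zone_box          # assumed closer boundary
    return zone_y_max - obstacle_y_min

def apparent_size(zone_box):
    # Multiply the protection zone's extent in the distance direction by its width.
    x_min, y_min, x_max, y_max = zone_box
    return (y_max - y_min) * (x_max - x_min)

def tilt_angle(vanishing_point_y, principal_point_y, focal_length_y):
    # Tilt-angle formula quoted in the text: theta_tilt = atan2(vy - cy, fy).
    return math.atan2(vanishing_point_y - principal_point_y, focal_length_y)

def source_vector(obstacle_box, zone_box, vy=None, cy=None, fy=None):
    vec = [apparent_distance(obstacle_box, zone_box), apparent_size(zone_box)]
    if None not in (vy, cy, fy):            # second embodiment: append the tilt angle
        vec.append(tilt_angle(vy, cy, fy))
    return vec

if __name__ == "__main__":
    print(source_vector((400, 300, 520, 420), (380, 280, 540, 460), vy=310, cy=360, fy=1400))
```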
By using the fourth embodiment, much more information can be inputted to processes of calculating the estimated protection zone, thus it can be more accurate. Herein, if the subject image is used directly for generating the concatenated source feature map, it may require too much computing resources, thus the subject segmented feature map may be used for reducing a usage of the computing resources. Descriptions above are explained under an assumption that the subject image has been photographed from the back of the subject vehicle, however, embodiments stated above may be adjusted to be applied to the subject image photographed from other sides of the subject vehicle. And such adjustment will be easy for a person in the art, referring to the descriptions. The vehicle outdoor environment identifier111specifies free space or a region where no objects are present through image processing, from images captured by the cameras70. In the image processing herein, a learned model generated by, for example, deep learning is used, such as according to the processes discussed above with respect toFIG.8throughFIG.12. Then, a two-dimensional map representing a free space is generated. The vehicle outdoor environment identifier111acquires information on a target around the vehicle1, from outputs of the radars71. This information is positioning information including a position and a speed, for example, of the target. Thereafter, the vehicle outdoor environment identifier111combines the generated two-dimensional map with the positioning information of the target, and generates a three-dimensional map representing surroundings around the vehicle1. In this embodiment, information on locations and image-capturing directions of the cameras70, and information on locations and transmission directions of the radars71are used. The vehicle outdoor environment identifier111compares the generated three-dimensional map with the vehicle outdoor environment model to thereby estimate a vehicle environment including a road and an obstacle. In deep learning, a deep neural network (DNN) is used. Examples of the DNN include convolutional neural network (CNN). <Candidate Route Generator> The candidate route generator112generates candidate routes on which the vehicle1can travel, based on, for example, an output of the vehicle outdoor environment identifier111, an output of the position sensor SW5, and information transmitted from the vehicle outside communicator72. For example, the candidate route generator112generates a travel route that avoids an obstacle identified by the vehicle outdoor environment identifier111on a road identified by the vehicle outdoor environment identifier111. The output of the vehicle outdoor environment identifier111includes travel route information on a travel route on which the vehicle1travels, for example. The travel route information includes information on a shape of the travel route itself and information on an object on the travel route. The information on the shape of the travel route includes, for example, a shape (linear, curve, or curve curvature), a travel route width, the number of lanes, and a lane width of the travel route. The information on the object includes, for example, a relative position and a relative speed of the object with respect to the vehicle, and attributes (e.g., types and direction of movement) of the object. Examples of the type of the object include a vehicle, a pedestrian, a road, and a mark line. 
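The travel route information just described can be pictured as a small data structure grouping route-shape information and object information. The following sketch is illustrative only; the field names and units are hypothetical and are not prescribed by the embodiment.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Sketch of the travel route information described above. Only the grouping
# into route-shape information and object information mirrors the text.

@dataclass
class ObjectInfo:
    relative_position_m: Tuple[float, float]   # relative to the own vehicle
    relative_speed_mps: Tuple[float, float]
    object_type: str                           # e.g. "vehicle", "pedestrian", "mark line"
    moving_direction_deg: float

@dataclass
class TravelRouteInfo:
    shape: str                                 # "linear" or "curve"
    curve_curvature: float
    route_width_m: float
    number_of_lanes: int
    lane_width_m: float
    objects: List[ObjectInfo] = field(default_factory=list)

if __name__ == "__main__":
    info = TravelRouteInfo("curve", 0.002, 10.5, 3, 3.5,
                           [ObjectInfo((40.0, -1.2), (-3.0, 0.0), "vehicle", 0.0)])
    print(info.number_of_lanes, len(info.objects))
```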
In this embodiment, the candidate route generator112calculates a plurality of candidate routes by a state lattice method, and based on a route cost of each of the calculated candidate routes, selects one or more candidate routes. The routes may be calculated by other methods. The candidate route generator112sets an imaginary grid area on a travel route based on travel route information. This grid area includes a plurality of grid points. With each of the grid points, a position on the travel route is specified. The candidate route generator112sets a predetermined grid point as a target arrival position. Then, the candidate route generator112computes a plurality of candidate routes by a route search using the plurality of grid points in the grid area. In the state lattice method, a route is branched from a given grid point to another grid point located ahead of the given grid point in the traveling direction of the vehicle. Thus, the candidate routes are set so as to pass the plurality of grid points sequentially. The candidate routes include, for example, time information representing times when the vehicle passes the grid points, speed information on, for example, speeds and accelerations at the grid points, and information on other vehicle motions. The candidate route generator112selects one or more travel routes based on a route cost from the plurality of candidate routes. Examples of the route cost include the degree of lane centering, an acceleration of the vehicle, a steering angle, and possibility of collision. In a case where the candidate route generator112selects a plurality of travel routes, the route determiner115selects one travel route. <Vehicle Behavior Estimator> The vehicle behavior estimator113measures a state of the vehicle from outputs of sensors for detecting a behavior of the vehicle, such as a vehicle speed sensor, an acceleration sensor, and a yaw rate sensor. The vehicle behavior estimator113generates a vehicle 6-axis model showing a behavior of the vehicle. The vehicle 6-axis model here is a model of accelerations in 3-axis directions of "front and rear," "left and right," and "upward and downward" of the traveling vehicle and angular velocities in 3-axis directions of "pitch," "roll," and "yaw." That is, the vehicle 6-axis model is a numerical model obtained by capturing a motion of the vehicle not only in a plane in terms of classical vehicle motion engineering, but also by reproducing a behavior of the vehicle by using a total of six axes of pitching (Y axis), roll (X axis) motion, and movement along a Z axis (upward and downward motion of the vehicle body) of the vehicle body on which an occupant is seated on four wheels with suspensions interposed therebetween. The vehicle behavior estimator113applies the vehicle 6-axis model to the travel route generated by the candidate route generator112, and estimates a behavior of the vehicle1in following the travel route. <Occupant Behavior Estimator> The occupant behavior estimator114estimates, in particular, physical conditions and feelings of a driver, from a detection result of the occupant state sensor SW7. Examples of the physical conditions include good health, mild fatigue, ill health, and loss of consciousness. Examples of the feelings include pleasant, normal, bored, frustrated, and unpleasant. For example, the occupant behavior estimator114extracts a face image of a driver from images captured by, for example, a camera placed in a cabin, and specifies the driver. 
The extracted face image and information on the specified driver are applied to a human model as inputs. The human model is a learned model generated by deep learning, for example, and physical conditions and feelings are output for each person that can be a driver of the vehicle1. The occupant behavior estimator114outputs the physical conditions and the feelings of the driver output from the human model. In a case where biometric sensors such as a skin temperature sensor, a heart rate sensor, a blood flow sensor, and a sweat sensor, are used for the occupant state sensor SW7for acquiring information on a driver, the occupant behavior estimator114measures biometrics of the driver from outputs of the biometric sensors. In this case, the human model receives biometrics of each person who can be a driver of the vehicle1, and outputs physical conditions and feelings of the person. The occupant behavior estimator114outputs the physical conditions and the feelings of a driver output from the human model. As the human model, a model that estimates feelings of a human in response to a behavior of the vehicle1may be used with respect to a person who can be a driver of the vehicle1. In this case, an output of the vehicle behavior estimator113, biometrics of the driver, and estimated feelings are managed chronologically to constitute a model. This model enables, for example, a relationship between a heightened emotion (consciousness) of a driver and a behavior of the vehicle to be predicted. The occupant behavior estimator114may include a human body model as a human model. The human body model specifies, for example, a neck muscle strength supporting a head mass (e.g., 5 kg) and front, rear, left, and right G. When receiving a motion (acceleration G and jerk) of the vehicle body, the human body model outputs predicted physical feelings and subjective feelings of an occupant. Examples of the physical feelings of the occupant include comfortable/moderate/uncomfortable, and examples of subjective feelings include unpredicted/predictable. Since a vehicle body behavior in which the head of the occupant is bent over backward even slightly, for example, is uncomfortable to the occupant, a travel route causing such a behavior is not selected by referring to the human body model. On the other hand, a vehicle body behavior with which the head moves forward as if the occupant makes a bow allows the occupant to take a posture against this vehicle body behavior easily, and thus, does not make the occupant feel uncomfortable immediately. Thus, a travel route causing such a behavior can be selected. Alternatively, a target motion may be determined to avoid shaking of the head of the occupant or may be dynamically determined to make the head active, by referring to the human body model. The occupant behavior estimator114applies the human model to a vehicle behavior estimated by the vehicle behavior estimator113, and estimates a change of physical conditions and a change of feelings of the current driver with respect to the vehicle behavior. <Route Determiner> The route determiner115determines a route on which the vehicle1is to travel, based on an output of the occupant behavior estimator114. In a case where the candidate route generator112generates one generated route, the route determiner115sets this route as a route on which the vehicle1is to travel. 
In a case where the candidate route generator112generates a plurality of generated routes, in consideration of an output of the occupant behavior estimator114, the route determiner115selects, from among the plurality of candidate routes, a route on which an occupant (especially a driver) feels most comfortable, that is, a route on which a driver does not feel redundancy such as excessive caution in avoiding an obstacle. <Rule-Based Route Generator> The rule-based route generator120identifies an object outside the vehicle according to a predetermined rule and generates a travel route avoiding the object, based on outputs from the cameras70and the radars71, without using deep learning. In a manner similar to the candidate route generator112, the rule-based route generator120calculates a plurality of candidate routes by a state lattice method, and based on a route cost of each of the candidate routes, selects one or more candidate routes. The rule-based route generator120calculates a route cost based on, for example, a rule in which the vehicle does not enter within a few or several meters around the object. The rule-based route generator120may also calculate a route by other methods. Information of routes generated by the rule-based route generator120is input to the vehicle motion determiner116. <Backup> The backup130generates a route for guiding the vehicle1to a safe area such as a road shoulder based on outputs from the cameras70and the radars71, in a case where a sensor, for example, is out of order or an occupant is not in good physical condition. For example, the backup130sets a safe region in which the vehicle1can be brought to an emergency stop from information of the position sensor SW5, and generates a travel route to the safe area. In a manner similar to the candidate route generator112, the backup130calculates a plurality of candidate routes by a state lattice method, and based on a route cost of each of the candidate routes, selects one or more candidate routes. This backup130may also calculate a route by other methods. Information on the routes generated by the backup130is input to the vehicle motion determiner116. <Vehicle Motion Determiner> The vehicle motion determiner116sets a target motion for a travel route determined by the route determiner115. The target motion refers to steering, acceleration, and speed reduction that allow the vehicle to follow the travel route. The vehicle motion determiner116computes a motion of the vehicle body by referring to the vehicle 6-axis model, with respect to the travel route selected by the route determiner115. The vehicle motion determiner116determines a target motion that allows the vehicle to follow the travel route generated by the rule-based route generator120. The vehicle motion determiner116determines a target motion that allows the vehicle to follow the travel route generated by the backup130. If the travel route determined by the route determiner115significantly deviates from the travel route generated by the rule-based route generator120, the vehicle motion determiner116selects the travel route generated by the rule-based route generator120as a route on which the vehicle1is to travel. If sensors (especially the cameras70or the radars71), for example, are out of order or a poor physical condition of an occupant is estimated, the vehicle motion determiner116selects the travel route generated by the backup130as a route on which the vehicle1is to travel. 
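The selection logic just described, preferring the learned route by default, falling back to the rule-based route when the learned route deviates too far from it, and falling back to the backup route when a sensor fails or the occupant is in poor condition, can be sketched as follows. The cost terms, the deviation metric, and the threshold are hypothetical assumptions; only the ordering of the checks mirrors the description above.

```python
# Sketch of cost-based candidate selection and the fallback rules described
# above. All cost weights, the deviation metric, and the threshold are
# illustrative assumptions, not values prescribed by the embodiment.

def route_cost(route):
    # Example cost terms: lane centering, acceleration, steering angle, collision risk.
    return (route["lane_offset_m"] ** 2
            + 0.5 * route["max_accel_mps2"] ** 2
            + 0.1 * route["max_steering_deg"] ** 2
            + 100.0 * route["collision_risk"])

def select_candidate(candidates):
    return min(candidates, key=route_cost)

def deviation(route_a, route_b):
    # Assumed metric: mean L1 distance between corresponding waypoints
    # (both routes are assumed to have the same number of waypoints).
    pairs = zip(route_a["waypoints"], route_b["waypoints"])
    return sum(abs(ax - bx) + abs(ay - by)
               for (ax, ay), (bx, by) in pairs) / len(route_a["waypoints"])

def choose_travel_route(learned_route, rule_based_route, backup_route,
                        sensors_ok, occupant_ok, deviation_limit_m=2.0):
    if not sensors_ok or not occupant_ok:
        return backup_route                  # guide the vehicle to a safe area
    if deviation(learned_route, rule_based_route) > deviation_limit_m:
        return rule_based_route              # learned route deviates too much
    return learned_route

if __name__ == "__main__":
    learned = {"waypoints": [(0, 0), (10, 0.3)], "lane_offset_m": 0.3,
               "max_accel_mps2": 1.0, "max_steering_deg": 2.0, "collision_risk": 0.0}
    rule_based = {"waypoints": [(0, 0), (10, 0.1)], "lane_offset_m": 0.1,
                  "max_accel_mps2": 1.2, "max_steering_deg": 2.5, "collision_risk": 0.0}
    backup = {"waypoints": [(0, 0), (5, 3.0)], "lane_offset_m": 3.0,
              "max_accel_mps2": 0.5, "max_steering_deg": 5.0, "collision_risk": 0.0}
    best = select_candidate([learned, rule_based])
    print(choose_travel_route(best, rule_based, backup, sensors_ok=True, occupant_ok=True))
```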
<Physical Quantity Calculator> The physical quantity calculator is constituted by the driving force calculator117, the braking force calculator118, and the steering variable calculator119. To achieve a target motion, the driving force calculator117calculates a target driving force to be generated by the power train devices (the engine10and the transmission20). To achieve the target motion, the braking force calculator118calculates a target braking force to be generated by the braking device30. To achieve the target motion, the steering variable calculator119calculates a target steering variable to be generated by the steering device40. <Peripheral Equipment Operation Setter> The peripheral equipment operation setter140sets operations of devices related to the body of the vehicle1, such as a lamp and doors, based on an output of the vehicle motion determiner116. For example, the peripheral equipment operation setter140sets a direction of the lamp when the vehicle1follows the travel route determined by the route determiner115, for example. In the case of guiding the vehicle1to a safe area set by the backup130, for example, the peripheral equipment operation setter140turns hazard flashers on or unlocks the doors, after the vehicle1has reached the safe area. <Output Destination of Computation Device> A computation result in the computation device110is output to the power train ECU200, the brake microcomputer300, the EPAS microcomputer500, and a body-related microcomputer600. Specifically, the power train ECU200receives information on a target driving force calculated by the driving force calculator117, the brake microcomputer300receives information on a target braking force calculated by the braking force calculator118, the EPAS microcomputer500receives information on a target steering variable calculated by the steering variable calculator119, and the body-related microcomputer600receives information on operations of devices related to the body as set by the peripheral equipment operation setter140. As described above, the power train ECU200basically calculates a fuel injection timing of the injector12and an ignition timing of the ignition plug13such that a target driving force is achieved, and outputs control signals to these traveling devices. The brake microcomputer300basically calculates a controlled variable of the brake actuator33such that a target braking force is achieved, and outputs a control signal to the brake actuator33. The EPAS microcomputer500basically calculates the amount of current to be supplied to the EPAS device42such that a target steering variable is achieved, and outputs a control signal to the EPAS device42. As described above, in this embodiment, the computation device110only calculates target physical quantities to be output from the traveling devices, and controlled variables of the traveling devices are calculated by the device controllers200to500. Accordingly, the amount of calculation of the computation device110decreases so that the calculation speed of the computation device110can be increased. The device controllers200to500only need to calculate actual controlled variables and output control signals to the traveling devices (e.g., the injector12), and thus, processing speeds thereof are high. Consequently, responsiveness of the traveling device to vehicle outdoor environments can be increased. Since the device controllers200to500calculate the controlled variables, the computation device110only needs to calculate rough physical quantities. 
Thus, computation speeds may be lower than those of the device controllers200to500. As a result, computation accuracy of the computation device110can be enhanced. As illustrated inFIG.4, in this embodiment, the power train ECU200, the brake microcomputer300, the DSC microcomputer400, and the EPAS microcomputer500are configured to be communicable with one another. The power train ECU200, the brake microcomputer300, the DSC microcomputer400, and the EPAS microcomputer500are configured to share information on controlled variables of the traveling devices and to be capable of executing control for using the information in cooperation with one another. For example, in a state where a road is slippery, for example, it is required to reduce the rotation speed of the wheels (i.e., so-called traction control) so as not to rotate the wheels idly. To reduce idle rotation of the wheels, an output of the power train is reduced or a braking force of the braking device30is used. Since the power train ECU200and the brake microcomputer300are communicable with each other, an optimum measure using both the power train and the braking device30can be taken. In cornering of the vehicle1, for example, the controlled variables of the power train and the braking device30(including the DSC device36) are finely adjusted in accordance with a target steering variable so that rolling and pitching in which a front portion of the vehicle1sinks are caused to occur in synchronization to cause a diagonal roll position. By causing the diagonal roll position, loads on the outer front wheels50increase so that the vehicle1is allowed to turn with a small steering angle. Thus, it is possible to reduce a rolling resistance on the vehicle1. As another example, in vehicle stability control (dynamic vehicle stability), based on a current steering angle and a current vehicle speed, if a difference occurs between a target yaw rate and a target lateral acceleration calculated as ideal turning state of the vehicle1and a current yaw rate and a current lateral acceleration, the braking devices30for the four wheels are individually actuated or an output of the power train is increased or reduced so as to cause the current yaw rate and the current lateral acceleration to return to the target values. In techniques employed to date, the DSC microcomputer400has to comply with a communication protocol, information on instability of the vehicle is acquired from yaw rate sensors and wheel speed sensors through a relatively low-speed CAN, and actuation is instructed to the power train ECU200and the brake microcomputer300also through the CAN. These techniques take time, disadvantageously. In this embodiment, information on controlled variables can be directly transmitted among these microcomputers. Thus, brake actuation of the wheels and start of output increase/decrease, which are stability control, can be performed significantly early from detection of a vehicle instability state. Reduction of stability control in a case where a driver performs counter steering can also be conducted in real time with reference to a steering angle speed and other information of the EPAS microcomputer500. As yet another example, a front wheel driving vehicle with high power can employ steering angle-linked output control that reduces an output of the power train to avoid an instable state of the vehicle when an accelerator is pressed with a large steering angle. 
In this control, the power train ECU200refers to a steering angle and a steering angle signal of the EPAS microcomputer500, and reduces an output immediately. Thus, a driving feel preferable for the driver, without a sense of sudden intervention, can be achieved. <Control at Occurrence of Abnormality> Here, during traveling of the vehicle1, abnormalities concerning traveling of the vehicle1, such as knocking in the engine10or slipping of the front wheels50, occur in some cases. At occurrence of such abnormalities, traveling devices need to be controlled quickly in order to eliminate or reduce these abnormalities. As described above, the computation device110identifies vehicle outdoor environment using deep learning, and performs a huge amount of computation in order to calculate routes of the vehicle1. Thus, when computation for eliminating or reducing the abnormalities is performed through the computation device110, measures might be taken with a delay. In view of this, in this embodiment, when an abnormality concerning traveling of the vehicle1is detected, the device controllers200to500calculate the controlled variables of the traveling devices in order to eliminate or reduce the abnormality and cause the traveling devices to output control signals, without using the computation device110. FIG.5shows, as an example, a relationship between sensors SW5, SW8, and SW9for detecting abnormalities in traveling of the vehicle1and the device controllers200,300, and500. InFIG.5, sensors for detecting abnormalities in traveling of the vehicle1are the position sensor SW5, a knocking sensor SW8, and a slipping sensor SW9, but other sensors may be provided. The knocking sensor SW8and the slipping sensor SW9may be known sensors. The position sensor SW5, the knocking sensor SW8, and the slipping sensor SW9correspond to abnormality detectors, and the sensors themselves detect an abnormality in traveling of the vehicle1. For example, when the knocking sensor SW8detects knocking, a detection signal is input to each of the device controllers200to500(especially the power train ECU200). After the detection signal has been input, the power train ECU200reduces knocking by adjusting a fuel injection timing of the injector12and an ignition timing of the ignition plug13. At this time, the power train ECU200calculates controlled variables of the traveling device while allowing a shift of a driving force output from the power train from a target driving force. FIG.6illustrates an example of a behavior of the vehicle1when slipping occurs. InFIG.6, the solid line is an actual travel route of the vehicle1, and a dotted line is a travel route set by the computation device110(hereinafter referred to as a theoretical travel route R). InFIG.6, the solid line and the dotted line partially overlap each other. InFIG.6, a black circle indicates a goal of the vehicle1. As illustrated inFIG.6, suppose that a puddle W is present in the middle of the travel route of the vehicle1and the front wheels of the vehicle1go into the puddle W, causing slipping. At this time, as illustrated inFIG.6, the vehicle1temporarily deviates from the theoretical travel route R. Slipping of the front wheels of the vehicle1is detected by the slipping sensor SW9(seeFIG.5), and deviation from the theoretical travel route R is detected by the position sensor SW5(seeFIG.5). These detection signals are input to the device controllers200to500. 
Thereafter, for example, the brake microcomputer300actuates the brake actuator33so as to increase a braking force of the front wheels. The EPAS microcomputer500actuates the EPAS device42so as to cause the vehicle1to return to the theoretical travel route R. At this time, communication between the brake microcomputer300and the EPAS microcomputer500can optimize a controlled variable of the EPAS device42in consideration of a braking force by the braking device30. In the manner described above, as illustrated inFIG.6, the vehicle1can return to the theoretical travel route R smoothly and quickly so that traveling of the vehicle1can be stabilized. As described above, when an abnormality in traveling of the vehicle1is detected, the device controllers200to500calculate controlled variables of the traveling devices in order to eliminate or reduce the abnormality without using the computation device110, and output control signals to the traveling devices. Accordingly, responsiveness of the traveling devices to vehicle outdoor environments can be enhanced. Therefore, in this embodiment, the vehicle travel control device includes: the computation device110; and the device controllers200to500configured to control actuation of the traveling devices (e.g., the injector12) mounted on the vehicle1based on a computation result of the computation device110. The computation device110includes: the vehicle outdoor environment identifier111configured to identify a vehicle outdoor environment based on outputs from the cameras70and the radars71configured to acquire information on the vehicle outdoor environment; the route setter (e.g., the route calculator112) configured to set a route on which the vehicle1is to travel in accordance with the vehicle outdoor environment identified by the vehicle outdoor environment identifier111; the vehicle motion determiner116configured to determine a target motion of the vehicle1in order to follow the route set by the route setter; and the physical quantity calculators117to119configured to calculate target physical quantities in order to achieve the target motion determined by the vehicle motion determiner116. The device controllers200to500calculate controlled variables of the traveling devices such that the target physical quantities calculated by the physical quantity calculators117to119are achieved, and output control signals to the traveling devices. In this manner, the computation device110only calculates physical quantities to be achieved, and actual controlled variables of the traveling devices are calculated by the device controllers200to500. Accordingly, the amount of calculation of the computation device110decreases so that the calculation speed of the computation device110can be increased. The device controllers200to500only need to calculate actual controlled variables and output control signals to the traveling devices, and thus, processing speeds thereof are high. Consequently, responsiveness of the traveling devices to the vehicle outdoor environment can be increased. In particular, in this embodiment, the vehicle outdoor environment identifier111identifies a vehicle outdoor environment by using deep learning, and thus, especially the computation device110performs a large amount of calculation. 
Thus, the controlled variables of the traveling devices are calculated by the device controllers200to500, rather than by the computation device110, so that the advantage of further enhancing responsiveness of the traveling devices to the vehicle outdoor environment can be more appropriately obtained. <Other Control> During assisted driving of the vehicle1, the driving force calculator117, the braking force calculator118, and the steering variable calculator119may change a target driving force, for example, in accordance with the state of a driver of the vehicle1. For example, while the driver enjoys driving (e.g., feeling of the driver is "enjoy"), the target driving force, for example, may be reduced so that the driving state approaches manual driving as closely as possible. On the other hand, if the driver is in a poor physical condition, the target driving force, for example, is increased so that the driving state approaches automated driving as closely as possible. Other Embodiments The technique disclosed here is not limited to the embodiment described above, and can be changed without departing from the gist of the claims. For example, in the embodiment described above, the route determiner115determines a route on which the vehicle1is to travel. However, the technique is not limited to this example, and the route determiner115may be omitted, and the vehicle motion determiner116may determine a route on which the vehicle1is to travel. That is, the vehicle motion determiner116may serve as both a part of the route setter and the target motion determiner. In the embodiment described above, the driving force calculator117, the braking force calculator118, and the steering variable calculator119calculate target physical quantities such as a target driving force. However, the technique is not limited to this example, and the driving force calculator117, the braking force calculator118, and the steering variable calculator119may be omitted, and the vehicle motion determiner116may calculate a target physical quantity. That is, the vehicle motion determiner116may serve as both the target motion determiner and the physical quantity calculator. The embodiment described above is merely an example and should not be construed as limiting the scope of the present disclosure. The scope of the present disclosure is defined by the claims, and all changes and modifications that come within the meaning and range of equivalency of the claims are intended to be embraced within the scope of the present disclosure. INDUSTRIAL APPLICABILITY The technique disclosed here is useful as a vehicle travel control device for controlling traveling of a vehicle. 
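As a rough, purely editorial illustration of the control split described above, the following Python sketch shows a device controller converting target physical quantities into controlled variables and handling a detected abnormality locally, without routing the computation through the computation device110. All class, function, and parameter names, as well as the placeholder mappings, are assumptions for illustration and are not part of the disclosed device.

```python
# Illustrative sketch (not from the patent) of the control split: the computation
# device supplies only target physical quantities, while a device controller turns
# them into actuator controlled variables and also handles a detected abnormality
# locally, without waiting on the computation device.
from dataclasses import dataclass


@dataclass
class TargetPhysicalQuantities:
    driving_force_n: float      # from the driving force calculator
    braking_force_n: float      # from the braking force calculator
    steering_angle_deg: float   # from the steering variable calculator


class PowerTrainController:
    """Stand-in for a device controller such as the power train ECU."""

    def controlled_variables(self, target: TargetPhysicalQuantities,
                             knocking_detected: bool) -> dict:
        # Nominal path: convert the target driving force into actuator commands.
        fuel_injection = 0.01 * target.driving_force_n       # placeholder mapping
        ignition_advance_deg = 12.0                           # placeholder nominal value
        if knocking_detected:
            # Abnormality path: adjust injection and ignition timing immediately,
            # allowing the output to shift away from the target driving force.
            fuel_injection *= 0.9
            ignition_advance_deg -= 3.0
        return {"fuel_injection": fuel_injection,
                "ignition_advance_deg": ignition_advance_deg}
```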
DESCRIPTION OF REFERENCE CHARACTERS 1 vehicle; 12 injector (traveling device, power train-related device); 13 ignition plug (traveling device, power train-related device); 16 valve mechanism (traveling device, power train-related device); 20 transmission (traveling device, power train-related device); 33 brake actuator (traveling device, brake-related device); 42 EPAS device (traveling device, steering-related device); 100 vehicle travel control device; 110 computation device; 111 vehicle outdoor environment identifier; 112 route calculator (route setter); 113 vehicle behavior estimator (route setter); 114 occupant behavior estimator (route setter); 115 route determiner (route setter); 116 vehicle motion determiner (target motion determiner); 117 driving force calculator (physical quantity calculator); 118 braking force calculator (physical quantity calculator); 119 steering variable calculator (physical quantity calculator); 200 power train ECU (device controller); 300 brake microcomputer (device controller); 400 DSC microcomputer (device controller); 500 EPAS microcomputer (device controller); SW5 position sensor (abnormality detector); SW6 knocking sensor (abnormality detector); SW7 slipping sensor (abnormality detector) | 74,092 |
11858524 | DETAILED DESCRIPTION As discussed above, sensor data captured by sensors on a vehicle can be used to assist in vehicle navigation, object detection, and object avoidance as the vehicle navigates through an environment. However, the quality of the sensor data collected by the sensors may become degraded in certain circumstances, including based on environment factors, such as weather conditions (e.g., rain, snow, etc.). In such cases, the sensor data collected by the sensors may be suboptimal or even unsuitable for use. This may potentially impact the vehicle navigation, obstacle detection, object avoidance, and/or other vehicle functions that rely on the sensor data. As such, the present application is directed to techniques for performing vehicle sensor degradation testing. For instance, and for a given sensor on the vehicle, a system may initially test the vehicle using testing, such as a wind tunnel test, to determine which portion(s) of the sensor surface are obstructed, such as by the accumulation of water droplets. A control surface may then be created for the sensor, where the control surface includes obstruction(s) located at the portion(s) of the control surface. Using the control surface, the vehicle may navigate around an environment and collect sensor data using the sensor, where the sensor data is analyzed to determine the drivability of the vehicle (e.g., the performance or accuracy of the sensor or perception system) with the control surface attached to the sensor. In some instances, the system may further use the results from the wind tunnel testing to determine an accuracy of computer simulation results (e.g., results of computational fluid dynamics and/or particle flow simulation software), where the computer simulation results indicate where water is predicted to accumulate on the surfaces of the sensor. When the accuracy of the results satisfies a threshold, the system may use simulations to determine drivability of the vehicle when the sensor surface is obstructed. For example, the vehicle may initially be tested using a test. In some instances, the test includes a wind tunnel test. To perform the test, a sensor on the vehicle may be replaced with a device that includes at least a camera and a test surface that is configured to replicate the actual surface of the sensor. For instance, the test surface may include the same shape and/or material properties of the actual surface of the sensor. The lens of the camera may be configured to focus the camera on the test surface, such that the camera monitors the test surface during the test. Additionally, or alternatively, in some instances, to perform the test, an external camera may capture images of the surface (e.g., which may also be referred to as the “test surface” during the test) of the sensor. In either instance, the wind tunnel test may apply a substance, such as water droplets, to the vehicle using various testing parameters. The testing parameters may include, but are not limited to, wind speeds, droplet sizes, and/or yaw angles of the vehicle. In some instances, the wind speed used during the test may correspond to a driving speed of the vehicle navigating in an environment. For instance, a wind speed of 55 miles per hour during the test may correspond to a driving speed of 55 miles per hour and, in at least some instances, may be augmented by average storm wind speeds. 
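As a purely illustrative aside, the testing parameters described above (wind speed, droplet size, yaw angle) might be bundled as in the short Python sketch below, with the wind speed derived from a driving speed and optionally augmented by a storm wind component. The names, units, and defaults are assumptions and are not part of the disclosure.

```python
# Hypothetical representation of test parameters; names and defaults are assumed.
from dataclasses import dataclass


@dataclass(frozen=True)
class TestParameters:
    wind_speed_mph: float    # corresponds to a driving speed, optionally plus storm wind
    droplet_size_mm: float
    yaw_angle_deg: float


def parameters_for_drive(driving_speed_mph: float,
                         storm_wind_mph: float = 0.0,
                         droplet_size_mm: float = 1.0,
                         yaw_angle_deg: float = 0.0) -> TestParameters:
    # e.g., a 55 mph driving speed tested at a 55 mph wind, augmented by storm wind.
    return TestParameters(wind_speed_mph=driving_speed_mph + storm_wind_mph,
                          droplet_size_mm=droplet_size_mm,
                          yaw_angle_deg=yaw_angle_deg)
```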
During the test, the camera may be configured to generate image data representing one or more images depicting characteristics for how the substance interacts with the test surface. The characteristics may include, but are not limited to, how the substance moves across the test surface, the portion(s) of the test surface where the substance accumulates, contact angles for the substance on the test surface, a distribution of the rain droplets on the test surface, and/or any other characteristics. The system may then analyze the images represented by the image data to identify at least the portion(s) of the test surface where the substance accumulates. In some instances, the system identifies the portion(s) of the test surface where the substance accumulates for the various wind speeds, the various droplet sizes, and/or the various yaw angles. In at least some examples, these determinations may be statistical in nature (e.g., mean, median, and/or mode spot size, location, etc., duration of a spot (which may be based on size), histograms indicating, for portions of the control surface, the times that each portion of the surface had an accumulation, etc.). The system may then use these determinations to create control surfaces for testing the vehicle when navigating around an environment. For example, the system may use the portion(s) of the test surface that are associated with a given wind speed, given droplet size, and/or given yaw angle to determine which portion(s) of a control surface should include obstruction(s) that resemble the accumulation(s) of the substance. In some instances, the obstruction(s) may include a material that has a refractive index that is approximately equal to the refractive index of the substance. For example, if the substance includes water, the refractive index of the material may be between 1.0 and 1.5 (e.g., approximately 1.3). The control surface(s) may then be attached to the corresponding sensor(s) of the vehicle. With the control surface(s) attached, the vehicle may navigate around an environment and, while navigating, the sensor(s) of the vehicle may generate sensor data. The system can then analyze the sensor data to determine a drivability of the vehicle when the sensor(s) are degraded. In some examples, the drivability of the vehicle may correspond to the performance of the sensor(s) of the vehicle when the sensors are degraded. For example, the drivability of the vehicle may correspond to the difference between the accuracy of the sensor(s) when the control surface(s) are not attached and the accuracy of the sensor(s) when the control surface(s) are attached. In some examples, to determine the drivability, and for a sensor, the system may analyze, using a perception component of the vehicle, the sensor data generated by the sensor to determine first statistic(s). In some instances, the first statistic(s) may include, but are not limited to, which object(s) (e.g., pedestrians, vehicles, bicycles, etc.) the perception component detects and/or the location(s) of the detected object(s) when the sensor is degraded. The system may also analyze, using the perception component, sensor data generated by the sensor and/or another sensor to determine second statistic(s). In some instances, the second statistic(s) may include, but are not limited to, which object(s) the perception component detects and/or the location(s) of the detected object(s) when a sensor does not include the control surface (e.g., when the sensor is not degraded). 
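Returning briefly to the accumulation statistics mentioned above (mean, median, and/or mode spot size and per-portion histograms), the sketch below shows one way such statistics could be computed from per-frame accumulation masks. The array layout, function name, and the use of total covered area as a coarse spot-size proxy are assumptions made only for illustration.

```python
# Minimal sketch of per-portion accumulation statistics from successive frames.
import numpy as np


def accumulation_statistics(masks: list[np.ndarray]) -> dict:
    """masks: per-frame boolean arrays, True where the substance covers the surface."""
    stack = np.stack(masks)                  # shape (num_frames, H, W)
    coverage_histogram = stack.sum(axis=0)   # frames with accumulation at each pixel
    spot_sizes = [int(m.sum()) for m in masks if m.any()]  # covered area per frame
    return {
        "coverage_histogram": coverage_histogram,
        "mean_spot_size_px": float(np.mean(spot_sizes)) if spot_sizes else 0.0,
        "median_spot_size_px": float(np.median(spot_sizes)) if spot_sizes else 0.0,
    }
```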
The system may then determine the drivability of the vehicle based on differences between the first statistic(s) and the second statistic(s). For example, the system may determine the drivability of the vehicle based on differences between the identified objects and/or differences on the identified locations of the objects. For a first example, the system may determine a first number of objects that the perception component detects using the sensor data associated with the sensor that was degraded. The system may also determine a second number of objects that the perception component detects using the sensor data associated with the sensor that was not degraded. The system may then determine the drivability based at least in part on the first number of objects and the second number of objects. For instance, the closer the first number of objects is to the second number of objects, the better the drivability of the vehicle. In this example, the first number of objects may include false positives (e.g., detected objects that were actually not located within the environment) and/or false negatives (e.g., not detecting objects that were actually located within the environment). For instance, the first number of objects could be greater than, equal to, or less than the second number of objects. For a second example, the system may determine first location(s) of object(s) that the perception component detects using the sensor data that is associated with the sensor that is degraded. The system may also determine second location(s) of object(s) that the perception component detects using the sensor data associated with the sensor that was not degraded. The system may then determine the drivability based at least in part on the first location(s) and the second location(s). For instance, the closer the first location(s) are to the second location(s), the better the drivability of the vehicle. While these are just a couple example processes for determining the drivability of the vehicle using the generated sensor data, in other examples, the system may perform one or more additional and/or alternative processes to determine the drivability. Examples of determining degradation of a sensor can be found, for example, in U.S. patent application Ser. No. 16/728,910 titled “Sensor Degradation Monitor” and filed Dec. 27, 2019, the entirety of which is herein incorporated by reference. In some instances, the system may further perform sensor degradation testing using vehicle simulations. For example, the system may use a particle-based simulator (and/or other type of simulator) to analyze how a substance, such as water, interacts with the surfaces of the vehicle and, more specifically, how the substance interacts with the surfaces of the sensors. For example, based on performing a simulation, and for at least a sensor, the system may receive data predicting characteristics for how the substance interacts with the surfaces of the sensors at various wind speeds, various droplet sizes, and/or various yaw angles. The characteristics may include, but are not limited to, how the substance droplets move across the surfaces, the portion(s) of the surfaces where the substance droplets accumulate, contact angles for the substance droplets onto the surfaces, distributions of the substance droplets on the surfaces, and/or any other characteristics, including, but not limited to, those characteristics measured during the test(s) described above. 
For instance, the data may represent at least an image depicting the portion(s) of the surface of the sensor where the substance accumulates, similar to the testing described above. In some instances, the image may correspond to a mesh of the surface, where the mesh indicates the outer surfaces of the substance droplets located on the surface of the sensor. In some instances, the system may then determine an accuracy of the simulation using the images captured during the test and the images generated during the simulation. For example, the system may compare an image representing the test surface for a sensor, which was generated using the test, to an image representing a simulated surface for the same sensor. The image representing the simulated surface may include a simulated representation of the surface of the same sensor. In some instances, the system uses images that were generated using the same wind speed, particle size, and/or yaw angle to ensure that the simulation was similar to the test. Of course, though discussed above for illustrative purposes as a single image, any other temporal determinations and/or statistics over a set of images (as discussed above) are contemplated to form the basis of comparison. Based on the comparison, the system may determine a similarity between the portion(s) of the substance that accumulated on the test surface and the portion(s) of the substance that accumulated on the simulated surface. The system may then quantitatively determine the accuracy based at least in part on the similarity and/or use the results to further improve the simulation (e.g., by modifying one or more parameters) in order to create more realistic simulations. For example, the similarity may correspond to the amount of overlap between the portion(s) of the substance that accumulated on the test surface and the portion(s) of the substance that accumulated on the simulated surface. As described herein, the amount of overlap may correspond to what percentage of the portion(s) of the substance accumulated on the simulated surface correlate to the portion(s) of the substance accumulated on the test surface. The system may determine that there is a greater level of quantitative connection between the test and the simulation when the amount of overlap is high (e.g., 75%, 85%, 95%, etc.). The system may also determine that there is a lower level of quantitative connection when the amount of overlap is low (e.g., 5%, 10%, 15%, 20%, etc.). Furthermore, the system may determine that there is a medium level of quantitative connection when the amount of overlap is in the middle (e.g., 45%, 50%, 55%, etc.). While this is just one example of analyzing data generated by the test and data generated by the simulation to determine an accuracy of the simulation, in other examples, the system may perform one or more additional and/or alternative analyses. In other examples, the similarity may correspond to other types of characteristics of the substance. For a first example, the similarity may correspond to how the average substance coverage on the simulated surface correlates to the average substance coverage on the test surface from the test. For a second example, the similarity may correspond to how the size and/or shape of simulated droplets on the simulated surface correlates to the size and/or shape of substance droplets on the test surface from the test. 
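A hedged sketch of the overlap measure described above is given below, assuming the test image and the simulated surface have each been reduced to boolean accumulation masks defined on a shared coordinate grid; the function name and mask convention are illustrative assumptions rather than the actual implementation.

```python
# Fraction of simulated accumulation that coincides with measured accumulation.
import numpy as np


def accumulation_overlap(test_mask: np.ndarray, sim_mask: np.ndarray) -> float:
    """test_mask, sim_mask: HxW boolean arrays, True where the substance accumulated."""
    sim_area = sim_mask.sum()
    if sim_area == 0:
        return 0.0
    return float(np.logical_and(test_mask, sim_mask).sum() / sim_area)
```

An overlap near 1.0 would then correspond to the high level of quantitative connection discussed above, and an overlap near 0.0 to the low level.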
In some instances, the system may perform simulations to test the drivability of the vehicle using the results from the test and/or the simulator. For example, the system may store a library of image data representing non-degraded images (real images generated by the vehicle, simulated images, etc.). To perform a simulation, the system may modify the images, generated by a sensor of the vehicle, to include synthetically generated obstruction(s) that are based on the portion(s) of the surface of the sensor determined using the test and/or the simulator. In some instances, the images are modified using a transformation or filter to overlay at least one separate image layer that contains the synthetically generated obstruction(s). Thus, a real image generated by the sensor of the vehicle may be overlaid with one or more layers of synthetically generated obstructions representing the accumulation of the substance on the sensor surface. The system may then analyze both the non-degraded images and the degraded images and compare the results to determine the drivability, similar to the analysis described above. For example, the system may analyze, using the perception component of the vehicle, the degraded images to determine which object(s) (e.g., pedestrians, vehicles, bicycles, etc.) the perception component detects and/or the location(s) of the detected object(s) when the sensor is degraded. The system may also analyze, using the perception component, non-degraded images to determine which object(s) the perception component detects and/or the location(s) of the detected object(s) when the sensor is not degraded. The system may then determine the drivability of the vehicle based on a comparison of the detected objects. For a first example, the system may determine a first number of objects that the perception component detects using the degraded images. The system may also determine a second number of objects that the perception component detects using the non-degraded images. The system may then determine the drivability based at least in part on the first number of objects and the second number of objects. For instance, the closer the first number of objects is to the second number of objects, the better the drivability of the vehicle. In this example, the first number of objects may include false positives (e.g., detected objects that were actually not located within the environment) and/or false negatives (e.g., not detecting objects that were actually located within the environment). For instance, the first number of objects could be greater than, equal to, or less than the second number of objects. For a second example, the system may determine first location(s) of object(s) that the perception component detects using the degraded images. The system may also determine second location(s) of object(s) that the perception component detects using the non-degraded images. The system may then determine the drivability based at least in part on the first location(s) and the second location(s). For instance, the closer the first location(s) are to the second location(s), the better the drivability of the vehicle. While these are just a couple example processes for determining the drivability of the vehicle using the simulations, in other examples, the system may perform one or more additional and/or alternative processes to determine the drivability. 
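One possible way to realize the image modification described above is sketched below: a real image is overlaid with a blurred layer only at the portions where the substance was observed or predicted to accumulate. The blending scheme, blur, and parameter values are assumptions for illustration, not the actual pipeline.

```python
# Overlay a synthetically generated obstruction layer onto a real camera image.
import numpy as np
from scipy.ndimage import gaussian_filter


def degrade_image(image: np.ndarray, obstruction_mask: np.ndarray,
                  blur_sigma: float = 5.0, alpha: float = 0.85) -> np.ndarray:
    """image: HxWx3 float array; obstruction_mask: HxW bool array of droplet regions."""
    # Blur only spatially (not across color channels) to mimic refraction through water.
    blurred = gaussian_filter(image, sigma=(blur_sigma, blur_sigma, 0))
    mask = obstruction_mask[..., None].astype(image.dtype)
    # Blend the blurred layer over the original only where obstructions are placed.
    return (1.0 - alpha * mask) * image + (alpha * mask) * blurred
```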
In some instances, the system may use the data generated using the test, using the environmental driving test that includes the control surface(s), using the simulator, and/or using the simulations to train one or more models associated with the vehicle. For example, the system may use the data to train the one or more models to detect objects, detect locations of objects, and/or classify objects. For a first example, the system may determine a first number of objects that the perception component detects using the degraded images. The system may also determine a second number of objects that the perception component detects using the non-degraded images. The system may then determine the drivability based at least in part on the first number of objects and the second number of objects. For instance, the closer the first number of objects is to the second number of objects, the better the drivability of the vehicle. While this is just one example of determining drivability of the vehicle using the generated sensor data, in other examples, the system may perform one or more additional and/or alternative analyses to determine the drivability. Examples of performing simulations on sensors can be found, for example, in U.S. patent application Ser. No. 16/708,019 titled "Perception Error Models" and filed Dec. 9, 2019, the entirety of which is herein incorporated by reference. By performing the processes described herein, the system is able to better test the drivability of the vehicle when sensors of the vehicle are degraded, such as when it is raining. For example, rather than actually driving the vehicle around an environment when it is raining, where various parameters of the weather (e.g., wind speeds, amount of rain, etc.) can change instantly, the system is able to test the vehicle using a more controlled test. For instance, the system is able to generate various control surfaces for the sensors, where each control surface is associated with given parameters (e.g., wind speed, droplet size, yaw angles, etc.). The system can then use the control surfaces to determine the drivability of the vehicle at set parameters. Similar techniques can be used to test the vehicle in other degraded conditions such as snow, mist, fog, or the like. The techniques described herein can be implemented in a number of ways. Example implementations are provided below with reference to the following figures. Although discussed in the context of sensors for a vehicle, the methods, apparatuses, and systems described herein can be applied to a variety of sensor systems. Additionally, while the above examples describe testing the vehicle for sensor degradation that is caused by a substance, in other examples, the methods, apparatuses, and systems described herein can be applied to other types of substances, such as mud, snow, and/or any other substance that may obstruct a sensor. FIG.1Ais an example environment100that includes performing a physical test102, such as a wind tunnel test, on a vehicle104in order to determine where a substance, such as water, accumulates on sensors of the vehicle104, in accordance with embodiments of the disclosure. For example, during the physical test102, a substance106may be sprayed onto the vehicle104. In some instances, the substance106is sprayed using various nozzles that are associated with parameters for the physical test102. For example, a system108may be configured to set the parameters for the physical test102, which are represented by parameter data110. 
The parameters may include, but are not limited to, a wind speed112(e.g., between 15 miles per hour and 60 miles per hour, etc.) at which the substance106is sprayed towards the vehicle104, a size of the substance106(e.g., water droplet size), and/or an angle at which the substance106is sprayed with respect to the vehicle104(e.g., a yaw angle, such as between 0 and 15 degrees). During the physical test102, cameras114(1)-(2) (also referred to as "cameras114") may be configured to capture images of the substance106accumulating on surfaces of the sensors. In some instances, a camera114(1) may be external to the vehicle104and positioned such that the focal point of the camera114(1) includes a surface of a sensor116(1). For instance, and as shown in the example ofFIG.1A, the camera114(1) may be capturing images118(1)-(M) (also referred to as "images118") representing at least the surface of the sensor116(1). In some instances, the surface of the sensor116(1) may be referred to as a "test surface" during the physical test102. Additionally, or alternatively, in some instances, a sensor116(2) (which is illustrated in the example ofFIG.1B) of the vehicle104may be replaced by a testing device120. The testing device120may include a camera114(2) and a lens122(which, in some examples, may be part of the camera114(2)) that focuses the camera114(2) on a test surface124of the testing device120. For instance, and as shown in the example ofFIG.1A, the camera114(2) may be capturing images126(1)-(N) (also referred to as "images126") representing at least the test surface124of the testing device120. In some instances, the test surface124of the testing device120may include a similar shape and/or similar material properties as the actual surface of the sensor116(2). This way, the substance106will accumulate on the test surface124of the testing device120similarly to how the substance106would accumulate on the actual surface of the sensor116(2). In some instances, the test surface124of the testing device120may include a similar material as the actual surface of the sensor116(2). However, in other examples, the test surface124of the testing device120may include a different material than the material of the actual surface of the sensor116(2). The system108may receive, over network(s)128, image data130representing the images118captured by the camera114(1) and/or the images126captured by the camera114(2). The system108may then analyze the image data130in order to determine the surface fluid flow on the surfaces of the sensors116and/or determine where the substance106accumulates on the surfaces of the sensors116. For example, the system108may initially use a frame component132that grabs images (e.g., frames) represented by the image data130. In some instances, the frame component132includes a frame grabber that is configured to grab the images at a given frequency. For example, if the cameras114have a frame rate of 60 frames per second, the frame component132may grab every image, every other image, one image per second, one image per minute, and/or the like. In some instances, the system108may associate the images with various parameters of the physical test102. For example, the system108may determine that the image126(1) was captured by the camera114(2) when the physical test102was operating with first parameters (e.g., the wind speed was a first velocity, the substance size included a first size, and/or the yaw angle included a first angle). 
As such, the system108may associate the image126(1) with the first parameters. The system108may then determine that the image126(N) was captured by the camera114(2) when the physical test102was operating with second parameters (e.g., the wind speed was a second velocity, the substance size included a second size, and/or the yaw angle included a second angle). As such, the system108may associate the image126(N) with the second parameters. The system108may then analyze the images using an accumulation component134to determine the characteristics associated with the substance106, such as where the substance106accumulates on the surfaces of the sensors116. For a first example, the accumulation component134may analyze the image data representing the image118(1) and, based on the analysis, determine that the image118(1) represents the substance106at a location136of the image118(1), where the location136is associated with outer surfaces of the substance106. The accumulation component134may then determine that the location136of the image118(1) corresponds to a specific portion of the surface of the sensor116(1). For example, based on the configuration of the camera114(1), the accumulation component134may associate various locations of the images118with various portions on the surface of the sensor116(1). As such, the accumulation component134may use the associations to determine the portion of the sensor116(1). For a second example, the accumulation component134may analyze the image data representing the image126(1) and, based on the analysis, determine that the image126(1) represents the substance106at locations138(1)-(2) of the image126(1), where the locations138(1)-(2) correspond to outer surfaces of the substance106. The accumulation component134may then determine that the locations138(1)-(2) of the image126(1) correspond to specific portions of the test surface124of the testing device120. For example, based on the configuration of the camera114(2), the accumulation component134may associate various locations of the images126with various portions on the test surface124of the testing device120. As such, the accumulation component134may use the associations to determine the portions of the test surface124of the testing device120. In some instances, the accumulation component134may perform similar processes in order to analyze additional images118and126that are associated with other parameters for the physical test102. This way, the system108is able to determine how the substance106accumulates on the surfaces of the sensors116during different weather and/or driving conditions. For example, the system108may be able to determine how the substance106accumulates on the surfaces of the sensors116for different wind speeds, different levels of output (e.g., different levels of rain), different speeds of the vehicle104, and/or so forth. FIG.1Bis an example environment that includes performing a driving test140on the vehicle104using control surfaces142(1)-(2) (also referred to as "control surfaces142") for the sensors116of the vehicle104, in accordance with embodiments of the disclosure. For example, the system108may use a control surface component144to generate control data146representing the portions of the surfaces of the sensors116where the substance106accumulated during the physical test102. The control data146may then be used to create the control surfaces142for the sensors116. 
In some instances, a control surface may include a filter (e.g., plastic) that includes obstruction(s) located at the portion(s) of the filter, where the portion(s) of the filter correspond to the portion(s) of a surface of a sensor where the substance106accumulated during the physical test102. Additionally, or alternatively, in some instances, a control surface may include the actual surface of a sensor. However, obstruction(s) may be attached to the portion(s) of the surface of the sensor where the substance106accumulated during the physical test102. In some instances, the obstruction(s) may include a material that has a refractive index that is approximately equal to the refractive index of water. For example, if the substance includes water, the refractive index of the material may be between 1.0 and 1.5 (e.g., 1.333). This way, the obstruction(s) on the control surface can mimic how the substance accumulation would affect the sensor. For a first example, control data146associated with the sensor116(1) may be used to create the control surface142(1) for the sensor116(1). As shown, the control surface142(1) includes an obstruction148located on a portion150of the control surface142(1), where the control data146indicates that a substance, such as water, would accumulate on the portion150of the surface of the sensor116(1). In some instances, the control surface142(1) includes a filter that is placed onto the surface of the sensor116(1). In other instances, the control surface142(1) includes the surface of the sensor116(1) with the obstruction148attached to the surface of the sensor116(1). For a second example, control data146associated with the sensor116(2) may be used to create the control surface142(2) for the sensor116(2). As shown, the control surface142(2) includes obstructions152(1)-(2) located on portions154(1)-(2) of the control surface142(2), where the control data146indicates that a substance, such as water, would accumulate on the portions154(1)-(2) of the surface of the sensor116(2). In some instances, the control surface142(2) includes a filter that is placed onto the surface of the sensor116(2). In other instances, the control surface142(2) includes the surface of the sensor116(2) with the obstructions152(1)-(2) attached to the surface of the sensor116(2). With the control surfaces142attached to the sensors116of the vehicle104, the vehicle104may perform the driving test140. For example, the vehicle104may navigate around an environment and, while navigating, generate sensor data using the sensors116of the vehicle104. The vehicle104may further analyze the sensor data using one or more components (e.g., localization component, perception component, planning component, progress component, etc.) of the vehicle104. Based on the analysis, the vehicle104may determine how to navigate. Additionally, the vehicle104may send, over the network(s)128, log data156to the system108. The system108may use the log data156to determine a drivability of the vehicle104when the sensors116are degraded. In some instances, similar processes may be performed in order to create various control surfaces that are associated with various weather and/or driving conditions. For a first example, control surfaces may be created that are associated with light rain conditions and a vehicle speed of 25 miles per hour. For a second example, control surfaces may be created that are associated with heavy rain conditions and a vehicle speed of 60 miles per hour. 
As such, the system108is able to test the vehicle104using different weather and/or driving conditions in order to determine the drivability of the vehicle104for the weather and/or driving conditions. While the examples ofFIGS.1A-1Billustrate generating control surfaces142for cameras114of the vehicle104, in other examples, similar processes may be used to generate control surfaces for other types of sensors on the vehicle104. For example, similar tests may be performed in order to determine how substances accumulate on the surfaces of other sensors of the vehicle104. Based on the tests, control surfaces may be created and placed on the other surfaces. The vehicle104may then perform the driving test140using those control surfaces in order to determine the degree to which the substances degrade the sensors. FIG.2illustrates a flow diagram of an example process for determining a drivability of a vehicle that includes at least one control surface on at least one sensor, in accordance with embodiments of the disclosure. At operation202, the process200may include generating first sensor data using a non-degraded sensor. For instance, the vehicle104may generate the first sensor data204, such as first image data representing first image(s), using the sensor116(2) and/or a second sensor. When generating the first sensor data204, the sensor116(2) and/or the second sensor may not be degraded. For example, the sensor116(2) may not include the control surface142(2). In some instances, when using another sensor, the other sensor may be similar to the sensor116(2) and located at a location on the vehicle104that is close to the sensor116(2) (e.g., on the same side, etc.). This way, the sensor116(2) and the other sensor are generating similar sensor data. At operation206, the process200may include analyzing the first sensor data to identify one or more first objects. For instance, the vehicle104may analyze the first sensor data204using one or more components (e.g., the perception component) of the vehicle104. Based on the analysis, the vehicle104may identify at least a first object208(e.g., a pedestrian) and a second object210(e.g., a street sign). Additionally, in some instances, based on the analysis, the vehicle104may identify a first location of the first object208and a second location of the second object210. At operation212, the process200may include generating second sensor data using a degraded sensor. For instance, the vehicle104may generate the second sensor data214, such as second image data representing second image(s), using the sensor116(2). When generating the second sensor data214, the sensor116(2) may include the control surface142(2). In other words, the sensor116(2) may be degraded when generating the second sensor data214. In some instances, the same sensor generates both the first sensor data204and the second sensor data214. For example, the sensor116(2) may generate the first sensor data204without the control surface142(2) and then generate the second sensor data214with the control surface142(2). In such an example, the vehicle104may navigate the same environment when generating the first sensor data204and the second sensor data214such that the sensor data204,214generated by the sensor116(2) represents the same objects. For instance, the environment may be a controlled environment where the objects are stationary for the testing. 
Additionally, or alternatively, in some instances, the sensor116(2) may generate the second sensor data214using the control surface142(2) while another sensor (e.g., the sensor116(1)) generates the first sensor data204without a control surface. The other sensor may be placed proximate to the sensor116(2) such that the field of view of the other sensor at least partially overlaps the field of view of the sensor116(2). This way, the first sensor data204generated by the other sensor should represent at least some of the same objects as the second sensor data214generated by the sensor116(2). When performing such a test, the vehicle104(and/or the system108) may determine which portions of the fields of view overlap so that the vehicle104can determine which objects should be detected by both of the sensors. Additionally, or alternatively, in some instances, the same sensor116(2) may be used to generate the first sensor data204and the second sensor data214. However, the vehicle104may move at a slow enough pace and/or a shutter associated with the sensor116(2) may be fast enough such that successive images generated by the sensor116(2) are nearly identical. Additionally, the sensor116(2) may take the images such that a first image is not distorted (e.g., the first sensor data204) and a second, successive image is distorted (e.g., the second sensor data214). At operation216, the process200may include analyzing the second sensor data to identify one or more second objects. For instance, the vehicle104may analyze the second sensor data214using the one or more components of the vehicle104. Based on the analysis, the vehicle104may identify the first object208. Additionally, in some instances, based on the analysis, the vehicle104may identify a third location of the first object208. However, since the control surface142(2) includes the obstructions152that degrade the sensor116(2), the vehicle104may not identify the second object210when analyzing the second sensor data214. At operation218, the process200may include determining a sensor accuracy based at least in part on the one or more first objects and the one or more second objects. For instance, the system108(and/or the vehicle104) may determine the sensor accuracy of the sensor116(2) based at least in part on the one or more first objects identified using the first sensor data204and the one or more second objects identified using the second sensor data214. For a first example, if the vehicle104is able to detect the same objects with a non-degraded sensor and a degraded sensor, then the system108may determine that the sensor accuracy is good. For a second example, and as illustrated in the example ofFIG.2, if the vehicle104is not able to detect all of the objects with the degraded sensor, then the system108may determine that the sensor accuracy is not good. In some instances, the sensor accuracy may correspond to the drivability of the vehicle104when at least the sensor116(2) is degraded. In some instances, the system108may use one or more additional and/or alternative processes, which are described above, to determine the sensor accuracy. For example, the system108may use the identified locations of the objects to determine the sensor accuracy. In some instances, the vehicle104may perform the process200ofFIG.2more than once when determining the accuracy of the sensor. This way, the vehicle104(and/or the system108) can use statistical sampling in order to determine the accuracy of the sensor. 
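A rough sketch of the accuracy comparison in the process200described above, repeated over several runs so that statistical sampling can be used, is given below. Representing detections as hashable identifiers and the particular scoring formula are assumptions for illustration only, not the disclosed method.

```python
# Compare detections from a non-degraded sensor against a degraded one, over runs.
from statistics import mean


def run_accuracy(clear_detections: set, degraded_detections: set) -> float:
    """Score one run by how much of the non-degraded detections survive degradation."""
    if not clear_detections:
        return 1.0
    matched = len(clear_detections & degraded_detections)
    false_positives = len(degraded_detections - clear_detections)
    # Penalize both missed objects and objects "seen" only by the degraded sensor.
    return matched / (len(clear_detections) + false_positives)


def sensor_accuracy(runs: list[tuple[set, set]]) -> float:
    """runs: (detections without control surface, detections with control surface)."""
    return mean(run_accuracy(clear, degraded) for clear, degraded in runs)
```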
FIG.3is an example environment300that includes performing a simulated test302on a simulated vehicle304in order to determine where a substance, such as water, accumulates on sensors306(1)-(2) (also referred to as "sensors306") of the vehicle304, in accordance with embodiments of the disclosure. In some instances, the system108uses a simulator component308that generates a particle-based simulator in order to analyze how a substance310, such as water droplets, interacts with the surfaces of the vehicle304and, more specifically, the surfaces of the sensors306. In other instances, the simulator component308may generate other types of simulators in order to analyze how the substance310interacts with the surfaces of the vehicle304. As shown, the vehicle304may be similar to the vehicle104tested using the physical test102. When performing the simulated test302, the system108may set parameters, which may also be represented by parameter data110. As discussed above, the parameters may include, but are not limited to, a wind speed312(e.g., between 15 miles per hour and 60 miles per hour, etc.) at which the substance310is sprayed towards the vehicle304, a size of the substance310, and/or an angle at which the substance310is sprayed with respect to the vehicle304(e.g., a yaw angle between 0 and 15 degrees). In some instances, the system108uses the same parameters for the simulated test302as used during the physical test102. In other instances, the system108uses different parameters for the simulated test302from those used during the physical test102. The system108may receive simulation data314representing the results of the simulated test302. In some instances, the simulation data314represents images depicting how the substance310accumulated on the surfaces of the sensors306, similar to the results from the physical test102. The images may represent meshes of the surfaces of the sensors306, where the meshes indicate the outer surfaces of the substance310located on the surfaces of the sensors306. For a first example, the simulation data314may represent images316(1)-(M) depicting locations318(1)-(2) where the substance310accumulated on the surface of the sensor306(1). In some instances, the images316(1)-(M) may represent a mesh of the surface of the sensor306(1), where the mesh indicates the locations318(1)-(2). For a second example, the simulation data314may represent images320(1)-(N) depicting a location322where the substance310accumulated on the surface of the sensor306(2). In some instances, the images320(1)-(N) may represent a mesh of the surface of the sensor306(2), where the mesh indicates the location322. FIG.4illustrates a flow diagram of an example process400for determining a quantitative connection between a wind tunnel water test and a simulated water test, in accordance with embodiments of the disclosure. At operation402, the process400may include performing a test on a vehicle. For instance, the physical test102may be performed on the vehicle104in order to determine how a substance accumulates on sensors of the vehicle104. During the physical test102, cameras located on the vehicle104and/or external to the vehicle104may generate sensor data representing images depicting the surfaces of the sensors. At operation404, the process400may include receiving sensor data representing an actual accumulation of a substance on a sensor. For instance, the system108may receive the sensor data from the vehicle104and/or the cameras. In some instances, the sensor data represents first images406. 
However, in other instances, the sensor data may include other types of data (e.g., statistical histograms, averages, etc.). The sensor data may be associated with one or more parameters for testing the sensor of the vehicle104. For example, the first sensor data may be associated with a specified wind speed, a specified size of the substance (e.g., a size of water droplets), and/or a specified angle at which the substance is sprayed with respect to the vehicle104(e.g., a yaw angle). As shown, the first images406depict locations408(1)-(2) of the substance on the surface of the sensor. At operation410, the process400may include performing a simulation associated with the vehicle. For instance, the system108may perform the simulated test302on the vehicle304that includes the simulation of the sensor. The simulated test302may include a particle-based simulation to determine how the substance accumulates on at least the sensor of the vehicle104. To perform the simulated test302, the system108may use the same testing parameters as those that are associated with the first sensor data. In other words, the system108may cause the simulated test302performed on the vehicle304to be as close as possible to the physical test102performed on the vehicle104. At operation412, the process400may include generating simulation data representing a simulated accumulation of the substance on the sensor. For instance, based on the simulated test302, the system108may generate the simulation data. In some instances, the simulation data represents second images414. However, in other instances, the simulation data may include other types of data (e.g., statistical histograms, averages, etc.). As shown, the second images414may correspond to a mesh of the surface of the sensor. The second images414also depict two locations416(1)-(2) of the substance on the surface of the sensor. At operation418, the process400may include analyzing the sensor data with respect to the simulation data to determine a connection. For instance, the system108may compare the first images406to the second images414. In some instances, the comparison may include determining an amount of overlap between the locations408(1)-(2) represented by the first images406and the locations416(1)-(2) represented by the second images414. In some instances, the system108performs the comparison using the coordinate systems represented by the first images406and the second images414. For example, using the same coordinate system, the system108may determine how many coordinate points representing the substance from the first images406match coordinate points representing the substance from the second images414. In some instances, based on determining that the amount of overlap is equal to or greater than a first threshold, the system108may determine that there is a high quantitative connection between the physical test102and the simulated test302. Additionally, based on determining that the amount of overlap is between the first threshold and a second threshold, the system108may determine that there is a medium quantitative connection between the physical test102and the simulated test302. Finally, based on determining that the amount of overlap is below the second threshold, the system108may determine that there is a low quantitative connection between the physical test102and the simulated test302. 
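The threshold-based classification just described could be sketched as follows, with the substance represented as sets of covered coordinate points in the shared coordinate system; the point granularity and the example thresholds are placeholders, not values taken from the tests.

```python
# Classify the quantitative connection between the physical test and the simulation.
def quantitative_connection(test_points: set[tuple[int, int]],
                            sim_points: set[tuple[int, int]],
                            high_threshold: float = 0.75,
                            low_threshold: float = 0.25) -> str:
    """Points are coordinates (in a shared coordinate system) covered by the substance."""
    if not sim_points:
        return "low"
    overlap = len(test_points & sim_points) / len(sim_points)
    if overlap >= high_threshold:
        return "high"
    if overlap >= low_threshold:
        return "medium"
    return "low"
```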
In some instances, such as when the system108determines that there is a high quantitative connection between the physical test102and the simulated test302, the system108may use the results from the physical test102to improve further simulations performed on the vehicle104. For example, the system108may use the results from physical tests102in order to determine how to degrade the sensors when performing simulations. In other words, the system108is able to use the results from the physical tests102in order to perform simulations that better represent how substances accumulate on the sensors of the vehicle104. Such processes are described inFIG.5. FIG.5illustrates a flow diagram of an example process500for determining an accuracy of a sensor of a vehicle by simulating an accumulation of a substance, such as water, on the sensor of the vehicle, in accordance with embodiments of the disclosure. At operation502, the process500may include receiving sensor data generated by a sensor of a vehicle. For instance, the vehicle104may generate the first sensor data504, such as image data representing image(s), using a sensor of the vehicle104. The system108may then receive the first sensor data504from the vehicle104. In some instances, the system108then stores the first sensor data504in a library of sensor data. At operation506, the process500may include analyzing the first sensor data to identify one or more first objects. For instance, the system108may perform a simulation using one or more components of the vehicle104in order to analyze the first sensor data504. In some instances, the simulation may include analyzing the first sensor data504using a perception component of the vehicle104. Based on the simulation, the system108may determine that the one or more components identified at least a first object508and a second object510. In some instances, based on the simulation, the system108may further determine that the one or more components identified a first location of the first object508and a second location of the second object510. At operation512, the process500may include generating second sensor data by modifying the first sensor data with one or more synthetically generated obstructions. For instance, the system108may generate the second sensor data514, such as image data representing image(s) with the synthetically generated obstructions516(1)-(2). In some instances, the first sensor data504is modified using a transformation or filter to overlay at least one separate image layer that contains the synthetically generated obstructions516(1)-(2) onto the image(s) represented by the first sensor data504. Thus, a real image generated by the sensor of the vehicle104may be overlaid with one or more layers of synthetically generated obstructions516(1)-(2) representing the accumulation of the substance on the sensor surface. At operation518, the process500may include analyzing the second sensor data to identify one or more second objects. For instance, the system108may again perform a simulation using the one or more components of the vehicle104in order to analyze the second sensor data514. In some instances, the simulation may include analyzing the second sensor data514using a perception component of the vehicle104. Based on the simulation, the system108may determine that the one or more components identified at least the first object508. In some instances, based on the simulation, the system108may further determine that the one or more components identified a third location of the first object508. 
At operation520, the process500may include determining a sensor accuracy based at least in part on the one or more first objects and the one or more second objects. For instance, the system108may determine the sensor accuracy of the sensor based at least in part on the one or more first objects identified using the first sensor data504and the one or more second objects identified using the second sensor data514. For a first example, if the one or more components are able to detect the same objects based on analyzing the first sensor data504and the second sensor data514, then the system108may determine that the sensor accuracy is good. For a second example, and as illustrated in the example ofFIG.5, if the one or more components are not able to identify the same objects based on analyzing the first sensor data504and the second sensor data514, then the system108may determine that the sensor accuracy is not good. In some instances, the sensor accuracy may correspond to the drivability of the vehicle104when at least the sensor is degraded. In some instances, the system108may use one or more additional and/or alternative processes, which are described above, to determine the sensor accuracy. For example, the system108may use the identified locations of the objects to determine the sensor accuracy. FIG.6depicts a block diagram of an example system600for implementing the techniques described herein, in accordance with embodiments of the disclosure. In at least one example, the system600can include the vehicle104. The vehicle104can include a vehicle computing device602, one or more sensor systems604, one or more emitters606, one or more communication connections608, at least one direct connection610, and one or more drive modules612. The vehicle computing device602can include one or more processors614and a memory616communicatively coupled with the one or more processors614. In the illustrated example, the vehicle104is an autonomous vehicle. However, the vehicle104may be any other type of vehicle (e.g., a manually driven vehicle, a semi-autonomous vehicle, etc.), or any other system having at least an image capture device. In the illustrated example, the memory616of the vehicle computing device602stores a localization component618, a perception component620, a planning component622, one or more system controllers624, and one or more maps626. Though depicted inFIG.6as residing in the memory616for illustrative purposes, it is contemplated that the localization component618, the perception component620, the planning component622, the system controller(s)624, and/or the map(s)626can additionally, or alternatively, be accessible to the vehicle104(e.g., stored on, or otherwise accessible by, memory remote from the vehicle104). In at least one example, the localization component618can include functionality to receive sensor data628from the sensor system(s)604and to determine a position and/or orientation of the vehicle104(e.g., one or more of an x-, y-, z-position, roll, pitch, or yaw). For example, the localization component618can include and/or request/receive a map of an environment and can continuously determine a location and/or orientation of the vehicle104within the map. 
In some instances, the localization component618can utilize SLAM (simultaneous localization and mapping), CLAMS (calibration, localization and mapping, simultaneously), relative SLAM, bundle adjustment, non-linear least squares optimization, or the like to receive image data, lidar data, radar data, IMU data, GPS data, wheel encoder data, and the like to accurately determine a location of the vehicle104. In some instances, the localization component618can provide data to various components of the vehicle104to determine an initial position of the vehicle104for generating a candidate trajectory, as discussed herein. In some instances, the perception component620can include functionality to perform object detection, segmentation, and/or classification. In some instances, the perception component620can provide processed sensor data628that indicates a presence of an object that is proximate to the vehicle104and/or a classification of the object as an object type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, unknown, etc.). In additional and/or alternative examples, the perception component620can provide processed sensor data628that indicates one or more characteristics associated with a detected object and/or the environment in which the object is positioned. In some instances, characteristics associated with an object can include, but are not limited to, an x-position (global position), a y-position (global position), a z-position (global position), an orientation (e.g., a roll, pitch, yaw), an object type (e.g., a classification), a velocity of the object, an acceleration of the object, an extent of the object (size), etc. Characteristics associated with the environment can include, but are not limited to, a presence of another object in the environment, a state of another object in the environment, a time of day, a day of a week, a season, a weather condition, an indication of darkness/light, etc. In general, the planning component622can determine a path for the vehicle104to follow to traverse through an environment. For example, the planning component622can determine various routes and trajectories and various levels of detail. For example, the planning component622can determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location). For the purpose of this discussion, a route can be a sequence of waypoints for travelling between two locations. As non-limiting examples, waypoints include streets, intersections, global positioning system (GPS) coordinates, etc. Further, the planning component622can generate an instruction for guiding the vehicle104along at least a portion of the route from the first location to the second location. In at least one example, the planning component622can determine how to guide the vehicle104from a first waypoint in the sequence of waypoints to a second waypoint in the sequence of waypoints. In some instances, the instruction can be a trajectory, or a portion of a trajectory. In some instances, multiple trajectories can be substantially simultaneously generated (e.g., within technical tolerances) in accordance with a receding horizon technique, wherein one of the multiple trajectories is selected for the vehicle104to navigate. In at least one example, the planning component622can determine a pickup location associated with a location. 
As used herein, a pickup location can be a specific location (e.g., a parking space, a loading zone, a portion of a ground surface, etc.) within a threshold distance of a location (e.g., an address or location associated with a dispatch request) where the vehicle104can stop to pick up a passenger. In at least one example, the planning component622can determine a pickup location based at least in part on determining a user identity (e.g., determined via image recognition or received as an indication from a user device, as discussed herein). Arrival at a pickup location, arrival at a destination location, entry of the vehicle by a passenger, and receipt of a “start ride” command are additional examples of events that may be used for event-based data logging. In at least one example, the vehicle computing device602can include the system controller(s)624, which can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle104. These system controller(s)624can communicate with and/or control corresponding systems of the drive module(s)612and/or other components of the vehicle104. The memory616can further include the map(s)626that can be used by the vehicle104to navigate within the environment. For the purpose of this discussion, a map can be any number of data structures modeled in two dimensions, three dimensions, or N-dimensions that are capable of providing information about an environment, such as, but not limited to, topologies (such as intersections), streets, mountain ranges, roads, terrain, and the environment in general. In some instances, a map can include, but is not limited to: texture information (e.g., color information (e.g., RGB color information, Lab color information, HSV/HSL color information), and the like), intensity information (e.g., lidar information, radar information, and the like); spatial information (e.g., image data projected onto a mesh, individual “surfels” (e.g., polygons associated with individual color and/or intensity)), reflectivity information (e.g., specularity information, retroreflectivity information, BRDF information, BSSRDF information, and the like). In one example, a map can include a three-dimensional mesh of the environment. In some instances, the map can be stored in a tiled format, such that individual tiles of the map represent a discrete portion of an environment and can be loaded into working memory as needed. In at least one example, the map(s)626can include at least one map (e.g., images and/or a mesh). In some example, the vehicle104can be controlled based at least in part on the map(s)626. That is, the map(s)626can be used in connection with the localization component618, the perception component620, and/or the planning component622to determine a location of the vehicle104, identify entities in an environment, and/or generate routes and/or trajectories to navigate within an environment. In some instances, aspects of some or all of the components discussed herein can include any models, algorithms, and/or machine learning algorithms. For example, in some instances, the components in the memory616can be implemented as a neural network. As described herein, an exemplary neural network is a biologically inspired algorithm which passes input data through a series of connected layers to produce an output. Each layer in a neural network can also comprise another neural network, or can comprise any number of layers (whether convolutional or not). 
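By way of example and not limitation, the following minimal sketch illustrates the idea of passing input data through a series of connected layers; the layer sizes, random weights, and ReLU non-linearity are illustrative assumptions and are not intended to represent any particular network described herein.

```python
import random

def dense_layer(inputs, weights, biases):
    """One fully connected layer followed by a ReLU non-linearity."""
    outputs = []
    for w_row, b in zip(weights, biases):
        pre = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(max(0.0, pre))
    return outputs

def forward(inputs, layers):
    """Pass the input through a series of connected layers and return the
    activations of the final layer."""
    activations = inputs
    for weights, biases in layers:
        activations = dense_layer(activations, weights, biases)
    return activations

random.seed(0)
layers = []
sizes = [4, 8, 3]                      # feature vector -> hidden -> 3 classes
for n_in, n_out in zip(sizes, sizes[1:]):
    weights = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    layers.append((weights, biases))

scores = forward([0.2, 1.3, -0.7, 0.05], layers)
print(len(scores))  # 3 untrained class scores
```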
As can be understood in the context of this disclosure, a neural network can utilize machine learning, which can refer to a broad class of such algorithms in which an output is generated based at least in part on learned parameters. Although discussed in the context of neural networks, any type of machine learning can be used consistent with this disclosure. For example, machine learning algorithms can include, but are not limited to, regression algorithms (e.g., ordinary least squares regression (OLSR), linear regression, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS)), regularization algorithms (e.g., ridge regression, least absolute shrinkage and selection operator (LASSO), elastic net, least-angle regression (LARS)), decision tree algorithms (e.g., classification and regression tree (CART), iterative dichotomiser 3 (ID3), Chi-squared automatic interaction detection (CHAID), decision stump, conditional decision trees), Bayesian algorithms (e.g., naïve Bayes, Gaussian naïve Bayes, multinomial naïve Bayes, average one-dependence estimators (AODE), Bayesian belief network (BBN), Bayesian networks), clustering algorithms (e.g., k-means, k-medians, expectation maximization (EM), hierarchical clustering), artificial neural network algorithms (e.g., perceptron, back-propagation, Hopfield network, Radial Basis Function Network (RBFN)), deep learning algorithms (e.g., Deep Boltzmann Machine (DBM), Deep Belief Networks (DBN), Convolutional Neural Network (CNN), Stacked Auto-Encoders), Dimensionality Reduction Algorithms (e.g., Principal Component Analysis (PCA), Principal Component Regression (PCR), Partial Least Squares Regression (PLSR), Sammon Mapping, Multidimensional Scaling (MDS), Projection Pursuit, Linear Discriminant Analysis (LDA), Mixture Discriminant Analysis (MDA), Quadratic Discriminant Analysis (QDA), Flexible Discriminant Analysis (FDA)), Ensemble Algorithms (e.g., Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (blending), Gradient Boosting Machines (GBM), Gradient Boosted Regression Trees (GBRT), Random Forest), SVM (support vector machine), supervised learning, unsupervised learning, semi-supervised learning, etc. Additional examples of architectures include neural networks such as ResNet50, ResNet101, VGG, DenseNet, PointNet, and the like. As discussed above, in at least one example, the sensor system(s)604can include lidar sensors, radar sensors, ultrasonic transducers, sonar sensors, location sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., inertial measurement units (IMUs), accelerometers, magnetometers, gyroscopes, etc.), cameras (e.g., RGB, IR, intensity, depth, time of flight, etc.), microphones, wheel encoders, environment sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), etc. The sensor system(s)604can include multiple instances of each of these or other types of sensors. For instance, the lidar sensors can include individual lidar sensors located at the corners, front, back, sides, and/or top of the vehicle104. As another example, the camera sensors can include multiple cameras disposed at various locations about the exterior and/or interior of the vehicle104. The sensor system(s)604can provide input to the vehicle computing device602.
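By way of example and not limitation, the following sketch shows one simple way sensor input could be routed to on-board consumers such as the localization and perception components; the publish/subscribe structure and message fields are illustrative assumptions rather than the disclosed architecture.

```python
from collections import defaultdict

class SensorBus:
    """Routes incoming sensor messages to on-board consumers such as the
    localization, perception, and planning components."""

    def __init__(self):
        self._subscribers = defaultdict(list)   # sensor kind -> callbacks

    def subscribe(self, kind, callback):
        self._subscribers[kind].append(callback)

    def publish(self, kind, message):
        for callback in self._subscribers[kind]:
            callback(message)

bus = SensorBus()
received = []
bus.subscribe("lidar", received.append)           # e.g., the perception component
bus.subscribe("wheel_encoder", received.append)   # e.g., the localization component
bus.publish("lidar", {"points": 115200, "frame": "front_left"})
print(received[0]["points"])   # 115200
```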
Additionally or alternatively, the sensor system(s)604can send the sensor data628, via the one or more network(s)128, to a control system630at a particular frequency, after a lapse of a predetermined period of time, upon occurrence of one or more conditions, in near real-time, etc. The vehicle104can also include the emitter(s)606for emitting light and/or sound, as described above. The emitter(s)606in this example include interior audio and visual emitters to communicate with passengers of the vehicle104. By way of example and not limitation, interior emitters can include speakers, lights, signs, display screens, touch screens, haptic emitters (e.g., vibration and/or force feedback), mechanical actuators (e.g., seatbelt tensioners, seat positioners, headrest positioners, etc.), and the like. The emitter(s)606in this example also include exterior emitters. By way of example and not limitation, the exterior emitters in this example include lights to signal a direction of travel or other indicator of vehicle action (e.g., indicator lights, signs, light arrays, etc.), and one or more audio emitters (e.g., speakers, speaker arrays, horns, etc.) to audibly communicate with pedestrians or other nearby vehicles, one or more of which comprise acoustic beam steering technology. The vehicle104can also include the communication connection(s)608that enable communication between the vehicle104and one or more other local or remote computing device(s). For instance, the communication connection(s)608can facilitate communication with other local computing device(s) on the vehicle104and/or the drive module(s)612. Also, the communication connection(s)608can allow the vehicle104to communicate with other nearby computing device(s) (e.g., other nearby vehicles, traffic signals, etc.). The communications connection(s)608also enable the vehicle104to communicate with the remote teleoperations computing devices or other remote services. The communications connection(s)608can include physical and/or logical interfaces for connecting the vehicle computing device602to another computing device or a network, such as network(s)128. For example, the communications connection(s)608can enable Wi-Fi-based communication such as via frequencies defined by the IEEE 802.11 standards, short range wireless frequencies such as Bluetooth®, cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G, etc.) or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s). In at least one example, the vehicle104can include one or more drive modules612. In some instances, the vehicle104can have a single drive module612. In at least one example, if the vehicle104has multiple drive modules612, individual drive modules612can be positioned on opposite ends of the vehicle104(e.g., the front and the rear, etc.). In at least one example, the drive module(s)612can include one or more sensor systems to detect conditions of the drive module(s)612and/or the surroundings of the vehicle104. By way of example and not limitation, the sensor system(s)604can include one or more wheel encoders (e.g., rotary encoders) to sense rotation of the wheels of the drive modules, inertial sensors (e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers, etc.)
to measure orientation and acceleration of the drive module, cameras or other image sensors, ultrasonic sensors to acoustically detect entities in the surroundings of the drive module, lidar sensors, radar sensors, etc. Some sensors, such as the wheel encoders can be unique to the drive module(s)612. In some cases, the sensor system(s)604on the drive module(s)612can overlap or supplement corresponding systems of the vehicle104(e.g., sensor system(s)604). The drive module(s)612can include many of the vehicle systems, including a high voltage battery, a motor to propel the vehicle104, an inverter to convert direct current from the battery into alternating current for use by other vehicle systems, a steering system including a steering motor and steering rack (which can be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components, a stability control system for distributing brake forces to mitigate loss of traction and maintain control, an HVAC system, lighting (e.g., lighting such as head/tail lights to illuminate an exterior surrounding of the vehicle), and one or more other systems (e.g., cooling system, safety systems, onboard charging system, other electrical components such as a DC/DC converter, a high voltage junction, a high voltage cable, charging system, charge port, etc.). Additionally, the drive module(s)612can include a drive module controller which can receive and preprocess the sensor data628from the sensor system(s)604and to control operation of the various vehicle systems. In some instances, the drive module controller can include one or more processors and memory communicatively coupled with the one or more processors. The memory can store one or more modules to perform various functionalities of the drive module(s)612. Furthermore, the drive module(s)612also include one or more communication connection(s) that enable communication by the respective drive module with one or more other local or remote computing device(s). In at least one example, the direct connection610can provide a physical interface to couple the one or more drive module(s)612with the body of the vehicle104. For example, the direct connection610can allow the transfer of energy, fluids, air, data, etc. between the drive module(s)612and the vehicle104. In some instances, the direct connection610can further releasably secure the drive module(s)612to the body of the vehicle104. As further illustrated inFIG.6, the control system630can include processor(s)632, communication connection(s)634, and memory636. Additionally, the system108can include processor(s)638, communication connection(s)640, and memory642. The processor(s)614of the vehicle104, the processor(s)632of the control system630, and/or the processor(s)638of the system108(and/or other processor(s) described herein) can be any suitable processor capable of executing instructions to process data and perform operations as described herein. By way of example and not limitation, the processor(s)614, the processor(s)632, and the processor(s)638can comprise one or more Central Processing Units (CPUs), Graphics Processing Units (GPUs), or any other device or portion of a device that processes electronic data to transform that electronic data into other electronic data that can be stored in registers and/or memory. 
In some instances, integrated circuits (e.g., ASICs, etc.), gate arrays (e.g., FPGAs, etc.), and other hardware devices can also be considered processors in so far as they are configured to implement encoded instructions. The memory616, the memory636, and the memory642(and/or other memory described herein) are examples of non-transitory computer-readable media. The memory616, the memory636, and the memory642can store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems. In various implementations, the memory can be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory capable of storing information. The architectures, systems, and individual elements described herein can include many other logical, programmatic, and physical components, of which those shown in the accompanying figures are merely examples that are related to the discussion herein. It should be noted that whileFIG.6is illustrated as a distributed system, in alternative examples, components of the system108can be associated with the vehicle104and/or the control system630and/or components of the vehicle104can be associated with the system108and/or the control system630. That is, the vehicle104can perform one or more of the functions associated with the system108and/or the control system630, and the system108can perform one or more of the functions associated with the vehicle104and/or the control system630. FIG.7illustrates a flow diagram of an example process700for analyzing sensor data generated during a wind tunnel test in order to generate data that may be used to create control surfaces for sensors of a vehicle, in accordance with embodiments of the disclosure. At operation702, the process700may include receiving image data generated by a camera during a vehicle test. For instance, the system108may receive the image data from the vehicle104and/or the camera. The vehicle test may include a wind tunnel test to determine how a substance, such as water, accumulates on one or more surfaces of one or more sensors of the vehicle104. In some instances, the camera is located within a device that replaces the sensor on the vehicle104. In other instances, the camera is external to the vehicle104and configured to monitor the surface of the sensor. At operation704, the process700may include selecting image(s) represented by the image data. For instance, the system108may use a frame grabber to grab images (e.g., frames) represented by the image data. The system108may then select the image(s) from the images. In some instances, the image(s) may be associated with at least one parameter used during the vehicle test. For instance, the image(s) may be associated with a wind speed of the vehicle test, a size of the substance (e.g., rain droplet size), and/or a yaw angle of the vehicle104during the vehicle test. In some instances, the system108selects the image(s) when there is a steady state. For example, the system108may select image(s) when the image(s) continue to depict accumulation(s) that have been located on the surface of the sensor for a given period of time (e.g., ten seconds, thirty seconds, one minute, etc.). This may indicate a steady state of the surface of the sensor. 
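By way of example and not limitation, the steady-state selection described above can be approximated as follows, where a frame is selected once the set of accumulation locations has remained unchanged for a given hold time; the frame rate, hold time, and mask representation are illustrative assumptions.

```python
def steady_state_frames(frames, hold_s=10.0, fps=30.0):
    """Pick frames in which the set of accumulation locations has not changed
    for at least hold_s seconds, approximating a steady state of the surface.

    frames: list of frozensets of (row, col) pixels classified as accumulation.
    """
    hold_frames = int(hold_s * fps)
    selected = []
    run = 0
    for i in range(1, len(frames)):
        run = run + 1 if frames[i] == frames[i - 1] else 0
        if run >= hold_frames:
            selected.append(i)
    return selected

drops = frozenset({(10, 12), (40, 7)})
frames = [frozenset()] * 5 + [drops] * 400        # droplets settle and stay put
print(steady_state_frames(frames, hold_s=10.0, fps=30.0)[:1])  # first steady index
```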
At operation706, the process700may include determining if the image(s) represent a substance located on a test surface associated with the sensor. For instance, the system108may determine whether the image(s) depict the substance located on the test surface associated with the sensor. In some instances, the substance may include water droplets that have accumulated on the test surface associated with the sensor. However, in other instances, the substance may include snow, dirt, and/or any other substance that may accumulate on surfaces of sensors. In some instances, such as when the camera is included in the device that replaces the sensor, the test surface may include the outer surface of the device that replicates the actual surface of the sensor. In other instances, such as when the camera is external to the vehicle104, the test surface may include the actual surface of the sensor. If, at operation706, it is determined that the image(s) do not depict the substance located on the test surface associated with the sensor, then the process700may repeat back at operation704to select new image(s). However, if, at operation706, it is determined that the image(s) depict the substance located on the test surface associated with the sensor, then at operation710, the process700may include determining portion(s) of a control surface that correspond to location(s) of the substance on the test surface. For instance, the system108may use the location(s) of the substance on the test surface to determine the portion(s) of the control surface. In some instances, the control surface includes a similar shape as the test surface and as such, the portion(s) of the control surface may correspond to the location(s) of the substance on the test surface. At operation712, the process700may include generating control data used to create the control surface, the control data indicating at least the portion(s) of the control surface for placing obstruction(s). For instance, the system108may generate the control data, where the control data indicates the portion(s) on the control surface for placing obstruction(s) that replicate the location(s) of the substance. In some instances, the control surface is the actual surface of the sensor with the obstruction(s) attached to the surface. In other instances, the control surface is a filter that includes the obstruction(s), where the filter is placed on the surface of the sensor. In either instance, the obstruction(s) include a material that has a refractive index that is approximately equal (e.g., within 5%, within 10%, etc.) to a refractive index of the substance. At operation714, the process700may include determining if additional image(s) should be analyzed. For instance, the system108may determine whether to analyze other image(s) that are associated with at least one different parameter of the vehicle test. If, at operation714, it is determined that the additional image(s) should be analyzed, then the process700may repeat back at operation704to select the additional image(s). However, if, at operation714, it is determined that the additional image(s) should not be analyzed, then at operation716, the process700may include finishing the vehicle test. For instance, the system108may determine to finish the vehicle test. FIG.8illustrates a flow diagram of an example process800for quantitatively determining an accuracy of a simulation performed to determine how a substance accumulates on sensors of a vehicle, in accordance with embodiments of the disclosure.
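Returning to operations 710 and 712 above, and by way of example and not limitation, the following sketch shows one way droplet detections on the test surface could be converted into control data for placing obstructions on the control surface; the normalized coordinates, the assumed water refractive index of roughly 1.33, and the tolerance value are illustrative assumptions only.

```python
from dataclasses import dataclass, asdict

@dataclass
class Obstruction:
    u: float            # normalized position on the control surface (0..1)
    v: float
    diameter_mm: float
    refractive_index: float

def control_data_from_detections(droplets, test_w_px, test_h_px,
                                 substance_index=1.33, tolerance=0.05):
    """Turn droplet detections (pixel x, pixel y, diameter in mm) into control
    data: normalized positions plus a target refractive index within a small
    tolerance of the real substance (water is roughly 1.33)."""
    obstructions = []
    for px, py, diameter_mm in droplets:
        obstructions.append(Obstruction(
            u=px / test_w_px,
            v=py / test_h_px,
            diameter_mm=diameter_mm,
            refractive_index=substance_index,   # material chosen within +/- tolerance
        ))
    return {"tolerance": tolerance, "obstructions": [asdict(o) for o in obstructions]}

data = control_data_from_detections([(320, 120, 2.5), (48, 400, 1.0)], 640, 480)
print(data["obstructions"][0]["u"], data["obstructions"][0]["v"])  # 0.5 0.25
```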
At operation802, the process800may include analyzing a vehicle using a simulator. For instance, the system108may analyze the vehicle304, which represents a simulation of the vehicle104, using the simulator. The simulator may be used to determine how a substance (e.g., water, snow, mud, etc.) accumulates on the surfaces of the sensors of the vehicle104. For example, the simulator may include a particle-based simulator. At operation804, the process800may include generating simulated image(s) depicting a simulated surface of a sensor of the vehicle. For instance, based on analyzing the vehicle104using the simulator, the system108may generate the simulated image(s) of the simulated surface. In some instances, the simulated image(s) may be associated with one or more parameters of the simulator. For instance, the simulated image may be associated with a wind speed of the simulation, a substance size (e.g., rain droplet size of the simulation), and/or a yaw angle of the vehicle104during the simulation. At operation806, the process800may include determining if the simulated image(s) depict a substance located on the simulated surface. For instance, the system108may analyze the simulated image(s) to determine if the simulated image(s) depict the substance located on the simulated surface. In some instances, the substance may include water droplets that have accumulated on the simulated surface associated with the sensor. However, in other instances, the substance may include snow, dirt, and/or any other substance that may accumulate on surfaces of sensors. If, at operation806, it is determined that the simulated image(s) do not depict the substance located on the simulated surface, then the process800may repeat back at804to generate new simulated image(s). For instance, if the system108determines that the simulated image(s) do not depict any substance located on the simulated surface, then the system108may determine that the substance did not accumulate on the simulated surface. However, if, at operation806, it is determined that the simulated image(s) depict the substance located on the simulated surface, then at operation808, the process800may include analyzing the simulated image(s) with respect to actual image(s) depicting the substance located on an actual surface of the sensor of the vehicle. For instance, the system108may compare the simulated image(s) to the actual image(s) depicting the substance, where the actual image(s) were generated during a physical test of the vehicle104. In some instances, based on the comparison, the system108may determine the amount of overlap between the location(s) of the substance depicted by the simulated image(s) and the location(s) of the substance depicted by the actual image(s). Additionally, or alternatively, in some instances, based on the comparison, the system108may determine the difference between the average spot size, min/max spot size, average spot location, average spot duration, spot duration vs spot size, and/or most common location between the substance depicted by the simulated image(s) and the substance depicted by the actual image(s). At operation810, the process800may include determining a connection between the simulated image(s) and the actual image(s). For instance, based on the analysis, the system108may determine a quantitative connection between the simulated image(s) and the actual image(s). 
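By way of example and not limitation, one simple quantitative comparison of the simulated image(s) and the actual image(s) is sketched below using an intersection-over-union of the substance masks together with basic spot statistics; the mask representation and the particular statistics chosen are illustrative assumptions.

```python
def overlap_ratio(simulated_mask, actual_mask):
    """Intersection-over-union of the pixel sets classified as substance in the
    simulated and actual images; 1.0 means perfect agreement."""
    union = simulated_mask | actual_mask
    if not union:
        return 1.0
    return len(simulated_mask & actual_mask) / len(union)

def spot_statistics(spot_sizes_px):
    """Simple per-image spot statistics used for the comparison."""
    return {
        "average": sum(spot_sizes_px) / len(spot_sizes_px),
        "minimum": min(spot_sizes_px),
        "maximum": max(spot_sizes_px),
    }

sim = {(10, 10), (10, 11), (20, 30)}
act = {(10, 10), (10, 11), (25, 31)}
print(round(overlap_ratio(sim, act), 2))     # 0.5
print(spot_statistics([14, 22, 9]))          # {'average': 15.0, 'minimum': 9, 'maximum': 22}
```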
In some instances, the quantitative connection may be based on the amount of overlap between the location(s) of the substance depicted by the simulated image and the location(s) of the substance depicted by the actual image. For example, the greater the amount of overlap, the greater the quantitative connection between the simulated image and the actual image. Additionally, or alternatively, in some instances, the quantitative connection may be based on the amount of overlap between the average spot size, min/max spot size, average spot location, average spot duration, spot duration vs spot size, and/or most common location. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claims. The components described herein represent instructions that may be stored in any type of computer-readable medium and may be implemented in software and/or hardware. All of the methods and processes described above may be embodied in, and fully automated via, software code components and/or computer-executable instructions executed by one or more computers or processors, hardware, or some combination thereof. Some or all of the methods may alternatively be embodied in specialized computer hardware. Conditional language such as, among others, “may,” “could,” “may” or “might,” unless specifically stated otherwise, are understood within the context to present that certain examples include, while other examples do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that certain features, elements and/or steps are in any way required for one or more examples or that one or more examples necessarily include logic for deciding, with or without user input or prompting, whether certain features, elements and/or steps are included or are to be performed in any particular example. Conjunctive language such as the phrase “at least one of X, Y or Z,” unless specifically stated otherwise, is to be understood to present that an item, term, etc. may be either X, Y, or Z, or any combination thereof, including multiples of each element. Unless explicitly described as singular, “a” means singular and plural. Any routine descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code that include one or more computer-executable instructions for implementing specific logical functions or elements in the routine. Alternate implementations are included within the scope of the examples described herein in which elements or functions may be deleted, or executed out of order from that shown or discussed, including substantially synchronously, in reverse order, with additional operations, or omitting operations, depending on the functionality involved as would be understood by those skilled in the art. Many variations and modifications may be made to the above-described examples, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims. 
Example Clauses A: One or more computing devices comprising: one or more processors; and one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more computing devices to perform operations comprising: receiving image data associated with a vehicle during a wind tunnel test, the image data comprising an image depicting a test surface associated with a sensor of the vehicle; determining a characteristic associated with water accumulated on the test surface during the wind tunnel test; determining, based at least in part on the characteristic, a control surface; receiving, from the sensor, sensor data, the sensor data being distorted, at least in part, by the control surface; determining a level of degradation of the sensor data; and controlling the vehicle based at least in part on the level of degradation. B: The one or more computing devices as recited in paragraph A, the operations further comprising determining a parameter under which the wind tunnel test was performed, the parameter comprising at least one of: a vehicle speed; a droplet size; or a yaw angle associated with the vehicle, wherein the control surface is determined further based at least in part on the parameter. C: The one or more computing devices as recited in paragraph A or paragraph B, wherein: the control surface comprises an artificial raindrop adhered to an external surface of the control surface, the artificial raindrop having an index of refraction between 1.2 and 1.4, having a size determined based at least in part on the characteristic, and a location on the control surface based at least in part on the characteristic, and the control surface is placed in a path of the sensor to cause the sensor data to be distorted. D: The one or more computing devices as recited in any of paragraphs A-C, wherein: the image data comprises a plurality of first images captured by the sensor, the sensor data comprises a plurality of second images, and the operations further comprise: determining a first statistic associated with the image data; determining a second statistic associated with the sensor data; and determining a difference between the first statistic and the second statistic, wherein the first statistic and the second statistic comprise one or more of: an average location of an accumulation on the test surface, an average size of the accumulation, or an average duration the accumulation presents on the test surface. E: The one or more computing devices as recited in any of paragraphs A-D, wherein: the control surface comprises a simulated raindrop, the simulated raindrop having a size determined based at least in part on the characteristic, and a location on the control surface based at least in part on the surface, the sensor data comprises additional image data, and the additional image data is distorted based at least in part on the simulated raindrop. F: A method comprising: receiving image data generated by a camera associated with a vehicle during a test, the image data representing at least an image depicting a test surface associated with a sensor of the vehicle; determining an accumulation of a substance on a portion of the test surface; determining a portion of a control surface associated with the sensor that corresponds to the accumulation on the portion of the test surface; and generating control data for creating the control surface, the control data indicating at least the portion of the control surface. 
G: The method as recited in paragraph F, wherein the control surface comprises an artificial material adhered to an external surface of the control surface, the artificial material including at least one of: an index of refraction that is approximately equal to an index of refraction of the substance; a size that is based at least in part on the accumulation; or a location on the control surface that is based at least in part on the accumulation. H: The method as recited in paragraph F or paragraph G, further comprising determining a parameter associated with the test, the parameter comprising at least one of: a vehicle speed; a droplet size; or a yaw angle associated with the vehicle, wherein generating the control data is based at least in part on the parameter. I: The method as recited in any of paragraphs F-H, further comprising: receiving sensor data from the sensor, the sensor data being distorted, at least in part, by the control surface; and determining a level of degradation of the sensor data. J: The method as recited in any of paragraphs F-I, wherein determining the level of degradation of the sensor data comprises at least: determining a first statistic associated with the sensor data, the sensor data representing one or more objects; determining a second statistic associated with additional sensor data, the additional sensor data representing the one or more objects; and determining a difference between the first statistic and the second statistic. K: The method as recited in any of paragraphs F-J, further comprising: receiving first sensor data generated by the sensor; generating second sensor data by distorting the first sensor data, the distorting of the first sensor data being based at least in part on the control surface; and determining a level of degradation of the second sensor data. L: The method as recited in any of paragraphs F-K, wherein determining the level of degradation of the second sensor data comprises at least: determining a first statistic associated with the first sensor data, the first sensor data representing one or more objects; determining a second statistic associated with second sensor data, the second sensor data representing the one or more objects; and determining a difference between the first statistic and the second statistic M: The method as recited in any of paragraphs F-L, further comprising determining at least one characteristic associated with the accumulation of the substance on the portion of the test surface, the at least one characteristic including at least one of: a contact angle associated with the accumulation; a size of the accumulation; a location of the accumulation; or a distribution associated with the accumulation, wherein generating the control data is based at least in part on the at least one characteristic. N: The method as recited in any of paragraphs F-M, wherein the camera includes at least one of: a first camera located within a device that includes the test surface, the device being positioned proximate a location of the sensor during the test; or a second camera that is external to the vehicle, the second camera being oriented towards the test surface of the sensor. 
O: The method as recited in any of paragraphs F-N, wherein the test includes a wind tunnel test and the substance includes water, and wherein determining the accumulation on the portion of the test surface comprises determining at least one of: a location of the water on the test surface; a size of the water on the test surface; a distribution of the water on the test surface; or an average duration that the water presents on the test surface P: One or more non-transitory computer-readable media storing instructions that, when executed by the one or more processors, cause one or more computing devices to perform operations comprising: receiving image data associated with a vehicle during a test, the image data comprising one or more images depicting a test surface associated with the vehicle; determining, based at least in part on the image data, an accumulation of a substance on a portion of the test surface; determining a portion of a control surface based at least in part on the accumulation of the substance on the portion of the test surface; and generating control data indicating at least the portion of the control surface, the control data to create the test surface for the sensor of the vehicle. Q: The one or more non-transitory computer-readable media as recited in paragraph P, wherein the control surface comprises an artificial material adhered to an external surface of the control surface, the artificial material including at least one of: an index of refraction that is approximately equal to an index of refraction of the substance; a size that is based at least in part on the accumulation; or a location on the control surface that is based at least in part on the accumulation. R: The one or more non-transitory computer-readable media as recited in paragraph P or paragraph Q, the operations further comprising determining that the control data is associated with at least one of: a vehicle speed; a droplet size; or a yaw angle associated with the vehicle. S: The one or more non-transitory computer-readable media as recited in any of paragraphs P-R, the operations further comprising: receiving sensor data from the sensor, the sensor data being distorted, at least in part, by the control surface; and determining a level of degradation of the sensor data. T: The one or more non-transitory computer-readable media as recited in any of paragraphs P-S, the operations further comprising determining at least one characteristic associated with the accumulation of the substance on the portion of the test surface, the at least one characteristic including at least one of: a contact angle associated with the accumulation; a size of the accumulation; a location of the accumulation; or a distribution associated with the accumulation, wherein generating the control data is based at least in part on the characteristic. | 92,424 |
11858525 | DETAILED DESCRIPTION OF THE INVENTION In order to facilitate the understanding of those skilled in the art, the present disclosure will be further illustrated below with reference to the embodiments and the accompanying drawings, and the contents mentioned in the implementations are not intended to limit the present disclosure. Referring toFIG.1, a drive-by-wire chassis cyber-physical system under an intelligent traffic environment according to the present disclosure includes: an SoS-level CPS, a system-level CPS, and a unit-level CPS, through Internet, data transmission is realized between a plurality of unit-level CPSs and one system-level CPS, and data transmission is realized between a plurality of system-level CPSs and one SoS-level CPS. The unit-level CPS is a drive-by-wire chassis, as shown inFIG.2, including: a driver input module, a basic control module, an execution module, and an environment perception module. The driver input module includes: an accelerator pedal and a stroke and force sensor thereof, a brake pedal and a stroke and force sensor thereof, a steering wheel and a steering angle and torque sensor thereof, and a wheel steering angle sensor, for perceiving driving, braking and steering information input by a driver to a vehicle, so as to realize extraction of a driver operation intention. The basic control module processes data collected by each sensor, formulates an optimal traveling strategy according to a current working condition, and transmits the optimal traveling strategy to the execution module. The execution module is configured to receive the optimal traveling strategy of the basic control module, and manipulate the vehicle. The execution module includes: a wheel, a hub motor, a steering execution motor, a steering controller, a steering shaft, a transmission shaft, a rack and pinion steering gear, a steering pull rod, a brake controller, a braking execution mechanism, a brake motor, a driving controller, and a driving execution mechanism. The optimal traveling strategy is an execution state of the execution module conforming to a current working condition, and the optimal traveling strategy includes: an optimal steering strategy, an optimal braking strategy, an optimal driving strategy, and an optimal composite traveling strategy; the optimal steering strategy, the optimal braking strategy and the optimal driving strategy are formulated under a single working condition of steering, braking and driving respectively; the optimal composite traveling strategy is a combination of the optimal steering strategy and the optimal braking strategy or the optimal driving strategy; the optimal steering strategy includes that actual energy consumption of the steering execution motor is minimum, and a wheel steering angle does not need to be corrected by the driver, the optimal braking strategy includes that energy consumption of the brake motor is minimum, an execution time of the braking execution mechanism is shortest, and correction by the driver is not needed in an execution process of the braking execution mechanism; and the optimal driving strategy includes that energy consumption of the hub motor is minimum, an execution time of the driving execution mechanism is shortest, and correction by the driver is not needed in an execution process of the driving execution mechanism. 
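By way of example and not limitation, the mapping from a driver's extracted operation intention to a single or composite optimal traveling strategy can be sketched as follows; the data structure and labels are illustrative assumptions and do not limit the strategies described above.

```python
from dataclasses import dataclass

@dataclass
class DriverIntent:
    steering: bool
    braking: bool
    driving: bool

def traveling_strategy(intent):
    """Map the driver intent extracted from the pedal and steering sensors onto
    a single optimal strategy or a composite of them."""
    parts = []
    if intent.steering:
        parts.append("optimal steering strategy")
    if intent.braking:
        parts.append("optimal braking strategy")
    if intent.driving:
        parts.append("optimal driving strategy")
    if not parts:
        return "hold current state"
    return "composite: " + " + ".join(parts) if len(parts) > 1 else parts[0]

print(traveling_strategy(DriverIntent(steering=True, braking=True, driving=False)))
# composite: optimal steering strategy + optimal braking strategy
```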
The environment perception module includes: a detection device, a positioning device, and a communication device; the detection device is configured to perceive information outside the vehicle and information about a road condition ahead; the positioning device is configured to position the vehicle; and the communication device is configured for vehicle-to-vehicle communication and vehicle-to-base station communication so as to obtain real-time working condition information in a vehicle traveling process. The basic control module includes: a central control unit, a steering control unit, a braking control unit and a driving control unit; and the central control unit is configured to monitor and control the steering control unit, the braking control unit and the driving control unit, and receive each sensor signal to calculate a vehicle speed and distribute steering force, braking force and driving force. The system-level CPS is a supervision platform and includes: a collaborative control module and a real-time monitoring and diagnosis module, for supervising a driving behavior of vehicles loaded with drive-by-wire chassis on the same road. The collaborative control module is configured to obtain sensor data of the supervised drive-by-wire chassis and execution information issued by the execution module, obtain a local optimal solution under a current working condition through information interaction and real-time analysis, and issue a control signal to the basic control module; and the real-time monitoring and diagnosis module is configured to monitor and diagnose a driving situation of the vehicles loaded with the drive-by-wire chassis. The SoS-level CPS is a big data platform and includes: a data storage unit, a data interaction module, and a data analysis module, and performs data transmission with each supervision platform through Internet. The data storage unit is configured to store data transmitted to the big data platform; the data interaction module is configured for transmission of the sensor data and the execution information between the drive-by-wire chassis and the supervision platform; and the data analysis module is configured to analyze the data transmitted to the big data platform, so as to obtain an ideal operation of the drive-by-wire chassis, and judge whether a driving operation of the drive-by-wire chassis is the ideal operation. In addition, the information outside the vehicle includes: information of a road lane line, a road surface arrow sign, a roadside traffic sign, and a traffic light. The information about the road condition ahead includes a bumpy obstacle, a vehicle, and a pedestrian ahead. The local optimal solution is a traveling behavior of all the drive-by-wire chassis of the same supervision platform, including steering, braking, driving, steering and braking, and steering and driving. The data transmitted to the big data platform includes: sensor data, execution information, the information outside the vehicle, the information about the road condition ahead, position information, vehicle-to-vehicle communication information, vehicle-to-base station communication information, and the local optimal solution generated by the supervision platform. 
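By way of example and not limitation, the judgment of whether a driving operation of the drive-by-wire chassis matches the ideal operation (detailed below) can be sketched as a per-channel comparison against the ideal operation database entry; the channel names and tolerance values are illustrative assumptions.

```python
def operation_deviation(actual, ideal, tolerances):
    """Compare a driver operation against the ideal-operation database entry
    and report which channels fall outside tolerance.

    actual / ideal: dicts with steering_wheel_angle_deg, brake_pedal_opening,
    accelerator_pedal_opening. tolerances: allowed absolute deviation per key.
    """
    report = {}
    for key, tol in tolerances.items():
        error = actual[key] - ideal[key]
        report[key] = {"error": error, "within_tolerance": abs(error) <= tol}
    return report

actual = {"steering_wheel_angle_deg": 32.0, "brake_pedal_opening": 0.10,
          "accelerator_pedal_opening": 0.00}
ideal = {"steering_wheel_angle_deg": 28.0, "brake_pedal_opening": 0.10,
         "accelerator_pedal_opening": 0.00}
tol = {"steering_wheel_angle_deg": 2.0, "brake_pedal_opening": 0.05,
       "accelerator_pedal_opening": 0.05}
print(operation_deviation(actual, ideal, tol)["steering_wheel_angle_deg"])
# {'error': 4.0, 'within_tolerance': False}
```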
The ideal operation of the drive-by-wire chassis is data in an ideal operation database, including an ideal steering wheel angle, an ideal brake pedal opening degree, and an ideal accelerator pedal opening degree; the ideal steering wheel angle is a magnitude of a steering wheel angle required by a desired path planned by the data analysis module; the ideal brake pedal opening degree is a brake pedal opening degree planned by the data analysis module to maintain a traffic safe distance from ahead and surrounding obstacles and ensure the driving comfort of the driver; and the ideal accelerator pedal opening degree is an accelerator pedal opening degree planned by the data analysis module and meeting a speed requirement of a traffic environment to ensure the driving comfort of the driver and maintain the traffic safety distance from the surrounding obstacles. The ideal operation database is an offline synchronization database, which consists of vehicle engineer experience data, automobile dynamic and kinematic model data, and automobile traveling data in the traffic environment by offline synchronization; the data in the ideal operation database are all within a range of safe driving and ensuring the comfort of the driver; the vehicle engineer experience data includes driver comfort data under driver steering, braking, driving, steering and braking or driving conditions, and nonlinear mathematical model data of driver steering, braking and driving operating force as well as the vehicle speed and acceleration; the automobile dynamic and kinematic model includes a dynamic and kinematic equation during steering, braking and driving execution calculated by Newton's laws of motion, and current equations of the steering execution motor, the brake motor, the driving motor and the hub motor during steering, braking, and driving execution calculated by a Kirchhoff's law; and the automobile traveling data in the traffic environment is driving information data stored by a networked drive-by-wire chassis automobile in a networked condition. Further, a steering connection relationship between the driver input module, the basic control module and the execution module is as follows: the steering wheel angle and torque sensor is integrated on a steering wheel, the steering wheel is connected to the transmission shaft through the steering shaft, the transmission shaft is connected to the rack and pinion steering gear, and the rack and pinion steering gear is connected to the steering pull rod; the steering execution motor is fixed to the transmission shaft, when the steering wheel is turned, the steering angle and torque sensor works, the steering controller will collect and transmit steering wheel angle and torque and wheel steering angle information to the steering control unit, and the steering control unit controls current output of the steering execution motor according to the sensor information so as to control steering of the transmission shaft; and the steering controller is connected to the hub motor to control rotation of the four wheels. 
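By way of example and not limitation, the steering chain described above can be sketched as a simple proportional loop in which the steering control unit commands a current to the steering execution motor based on the wheel steering angle error; the gain and current limit are illustrative assumptions and not part of the disclosure.

```python
def steering_motor_current(wheel_angle_target_deg, wheel_angle_measured_deg,
                           gain_a_per_deg=1.5, max_current_a=60.0):
    """Illustrative steer-by-wire loop: compare the wheel angle implied by the
    steering wheel sensor with the measured wheel angle and command a
    proportional, saturated current to the steering execution motor."""
    error_deg = wheel_angle_target_deg - wheel_angle_measured_deg
    current = gain_a_per_deg * error_deg
    return max(-max_current_a, min(max_current_a, current))

# Driver requests 8 degrees at the road wheels; the wheels currently sit at 5.
print(steering_motor_current(8.0, 5.0))   # 4.5 A toward the target angle
```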
A braking connection relationship between the driver input module, the basic control module and the execution module is as follows: when the brake pedal is stepped on, the stroke and force sensor of the brake pedal works, the brake controller will collect and transmit stroke and force sensor information of the brake pedal to the braking control unit, and the braking control unit controls current output of the brake motor according to the sensor information, and then controls the execution state of the braking execution mechanism to realize braking of the vehicle; and the brake controller is connected to the hub motor to control a rotation state of the wheels during braking. A driving connection relationship between the driver input module, the basic control module and the execution module is as follows: when the accelerator pedal is stepped on, the stroke and force sensor of the accelerator pedal works, the driving controller will collect and transmit stroke and force sensor information of the accelerator pedal to the driving control unit, and the driving control unit controls the execution state of the driving execution mechanism according to the sensor information to realize driving of the vehicle; and the driving controller is connected to the hub motor of the wheels to control a rotation state of the wheels during accelerating. The hub motor includes: a left front wheel hub motor, a right front wheel hub motor, a left rear wheel hub motor and a right rear wheel hub motor; and the four wheel hub motors are respectively integrated in corresponding four wheel hubs for driving the wheels. The drive-by-wire chassis, the supervision platform, and the big data platform complete data transmission through the Internet. The data transmission process is as follows: the drive-by-wire chassis obtains driver operation information and environment information after the driver completes the driving operation, and transmits the operation information and environment information to the supervision platform; the real-time monitoring and diagnosis module of the supervision platform performs real-time monitoring and diagnosis on the driver operation information and environment information transmitted by the drive-by-wire chassis, and transmits a diagnosis result to the big data platform; the big data platform completes information storage and interaction, obtains operation behavior information of the drive-by-wire chassis through the data analysis module, and transmits the operation behavior information to the supervision platform; the collaborative control module of the supervision platform generates the local optimal solution according to the information transmitted by the big data platform, and transmits the local optimal solution to the drive-by-wire chassis; and the basic control module of the drive-by-wire chassis forms the optimal traveling strategy according to the local optimal solution, and the execution module controls the vehicle according to the optimal traveling strategy. As shown inFIG.3, the present embodiment further provides a control method of a drive-by-wire chassis cyber-physical system under an intelligent traffic environment based on the above system. 
Specific steps are as follows:1) An operation signal is issued by the driver, and the operation signal sent by the driver includes: steering, braking, driving and composite operation signals, wherein the composite operation signal is a combination of steering and braking or driving.2) Information of the environment perception module of the drive-by-wire chassis and sensor information of the driver input module are obtained. In step 2), a current steering wheel angle and torque, a wheel steering angle, a brake pedal stroke, and an accelerator pedal stroke of a vehicle are obtained through a sensor, and information outside the vehicle, information about a road condition ahead, position information, vehicle-to-vehicle communication information, and vehicle-to-base station communication information under the current working condition are obtained through a detection device, a positioning device and a communication device in the environment perception module.3) A driver operation is judged by the basic control module according to the sensor information of the driver input module, and driver operation information and the information of the environment perception module are transmitted to the supervision platform. The driver operation includes: steering, braking, driving and composite operations, wherein the composite operation includes a combination of steering and braking or driving.4) Real-time monitoring and diagnosis on information of the drive-by-wire chassis is performed by the supervision platform, and whether a current driver operation conforms to a current working condition is judged; if yes, the information of the driver operation and the environment perception module obtained by the supervision platform are transmitted to the big data platform; and if not, the driver operation is adjusted by the supervision platform according to the information of the environment perception module to conform to the current working condition, and the information of the environment perception module and the adjusted driver operation information are transmitted to the big data platform. In step 4), if a supervision platform of any road fails, a supervision platform of any other road takes over data information of the failed supervision platform to perform real-time monitoring and diagnosis on drive-by-wire chassis in a current road and a road corresponding to the failed supervision platform to ensure stability of the traffic environment information. 
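By way of example and not limitation, the supervision platform fail-over of step 4) can be sketched as follows, with the roads of a failed platform reassigned to a healthy one; the assignment structure and the way the backup platform is chosen are illustrative assumptions.

```python
class SupervisionNetwork:
    """Tracks which supervision platform monitors which road; when a platform
    fails, another platform takes over its road, as in step 4) above."""

    def __init__(self, assignments):
        # assignments: road name -> platform name
        self.assignments = dict(assignments)

    def fail_over(self, failed_platform):
        healthy = sorted({p for p in self.assignments.values() if p != failed_platform})
        if not healthy:
            raise RuntimeError("no healthy supervision platform available")
        backup = healthy[0]
        for road, platform in self.assignments.items():
            if platform == failed_platform:
                self.assignments[road] = backup
        return backup

net = SupervisionNetwork({"road A": "platform 1", "road B": "platform 2"})
net.fail_over("platform 1")
print(net.assignments)   # {'road A': 'platform 2', 'road B': 'platform 2'}
```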
The current working condition of step 4) includes a steering working condition, a braking working condition, an acceleration working condition, and a combined working condition of the steering working condition and the braking working condition or the acceleration working condition, wherein the steering working condition includes passing through a curve, overtaking and lane changing; the braking working condition includes deceleration of a vehicle ahead and a distance from the vehicle ahead being less than a traffic safety distance, emergency obstacle avoidance parking, and passing through a speed limit road section when the vehicle speed is higher than the speed limit; and the acceleration working condition includes vehicle starting, passing through the speed limit section when the vehicle speed is lower than the speed limit, and overtaking.5) The operation information transmitted by the supervision platform is stored by the big data platform.6) The driver operation information transmitted by the supervision platform is analyzed by the big data platform; if an analysis result of the driver operation information in the supervision platform is an ideal driving operation, the driver operation information is fed back to the supervision platform; and if there is an error between the analysis result of the driver operation information and the ideal driving operation information, the ideal driving operation information obtained by data analysis is fed back to the supervision platform. In step 6), if there is an error between the current driver operation and the ideal driving operation, the steering, braking, and driving control units are controlled by the central control unit to drive the hub motor, the steering execution motor and the brake motor to output additional control quantities to minimize the error between the driver operation and the ideal driving operation, wherein a control algorithm used is an H∞ feedback control algorithm, referring toFIG.4, which specifically includes the following contents:61) expressing a deviation between a steering wheel angle θswoutput by the driver and an ideal steering wheel angle θsw* as e1; expressing a deviation between a brake pedal opening degree p output by the driver and an ideal brake pedal opening degree p* as e2; and expressing a deviation between an accelerator pedal opening degree q output by the driver and an ideal accelerator pedal opening degree q* as e3;62) the deviations e1, e2and e3being input of an H∞ feedback controller K(s), calculating, by the feedback controller K(s), additional steering angles θ1, θ2and θ3needing to be output by the steering execution motor, the brake motor and the hub motor according to the input deviation e1, e2and e3, and then controlling, by the central control unit, the steering control unit, the braking control unit and the driving control unit respectively, the steering execution motor, the brake motor and the hub motor to output the corresponding additional steering angles θ1, θ2and θ3;63) enabling the additional steering angles θ1, θ2and θ3to act on a drive-by-wire chassis system, and then affect a traveling state of the vehicle, and meanwhile, the performing, by the driver, a corresponding driving operation according to a current vehicle state so as to obtain a new set of deviations e4, e5and e6; and64) repeating steps 61)-63) until the deviations ei(i=1, 2, 3, . . . 
) are eliminated.7) A real-time local optimal solution for the vehicle is formed by the supervision platform according to the feedback information, and the local optimal solution is fed back to a drive-by-wire chassis supervised by the current supervision platform.8) An optimal traveling strategy corresponding to the local optimal solution is generated by the central control unit, and transmitted to the steering control unit, the braking control unit and the driving control unit to control an output current of the motor in an execution module, so that a controller controls other execution mechanisms in the execution module to complete output to the vehicle. The motor in step 8) includes the steering execution motor, the brake motor and the hub motor; the controller includes the steering controller, the brake controller, and the driving controller, and the other execution mechanisms are execution mechanisms in the execution module except for the steering execution motor, the brake motor, the hub motor, the steering controller, the brake controller, and the driving controller. In accordance with the aspects of the present invention, the basic control module, the execution module, the environment perception module, the collaborative control module, the real-time monitoring and diagnosis module, and the data interaction module, together with controllers and units, are considered as one or more computer processors, capable of executing a program, strategy, or algorithm thereon. The present disclosure has many specific application ways. The above are only preferred implementations of the present disclosure. It should be noted that those skilled in the art can further make various improvements without departing from the principle of the present disclosure, and these improvements should also be regarded as the protection scope of the present disclosure. | 20,699 |
11858526 | DETAILED DESCRIPTION OF THE DRAWINGS In the following detailed description, numerous specific details are set forth to provide a thorough understanding of the described exemplary embodiments. It will be apparent, however, to one skilled in the art that the described embodiments can be practiced without some or all of these specific details. In other exemplary embodiments, well known structures or process steps have not been described in detail in order to avoid unnecessarily obscuring the concept of the present disclosure. The term “vehicle” used throughout the specification refers to a motor vehicle which comprises but is not limited to a car, a truck, a bus, or the like. The term “A and/or B” used throughout the specification refers to “A”, “B”, or “A and B”, and the term “A or B” used through the specification refers to “A and B” and “A or B” rather than meaning that A and B are exclusive, unless otherwise specified. Referring first toFIG.1, there is shown a block diagram of an apparatus100for use with a vehicle110in accordance with one or more exemplary embodiments of the present disclosure. The apparatus100comprises one or more cameras101located in the vehicle110, and a controller102communicated with the cameras101. One or more of the cameras101may be used to detect a figure and/or a behavior of a user sat in the vehicle110, including the driver and the passengers. In some embodiments, the camera(s)101can capture images and/or videos for any user, so that a figure and/or a behavior of the user can be obtained from the captured images and/or videos. In some cases, additionally the camera(s)101may have an infrared imaging/sensing function. In some embodiments, the figure of the user may show the body shape of the user, including the eye position, which can be used to estimate a position of a component of the vehicle110suitable for the user. In some embodiments, the detected behavior may comprise an eye movement, a gaze direction, a facial expression, a body motion or the like. The figure and/or behavior of the user may be obtained at the camera(s)101side or the controller102side. In other words, the figure and/or behavior of the user may be extracted from the captured data of the camera(s)101by the camera(s)101or by the controller102. In some embodiments, the camera(s)101may comprise at least one of: a driver surveillance camera, a camera contained in a driver assistance system, a camera contained in a mobile phone, a camera contained in a laptop or a camera mounted in a rear-seat display. For example, the cameras already mounted for driver surveillance/assistance in the vehicle can also serve as the camera(s)101of the present disclosure. In this case, the cameras can detect at least the figure and/or behavior of the driver. In some embodiments, the camera contained in a mobile phone, a laptop or a rear-seat display may be used to detect the figure and/or behavior of the passenger or the driver when he/she is not driving. In other embodiments, the camera(s)101may be camera(s) dedicated for implementing the present invention, which may be extra mounted within the vehicle110. The detected figure and/or behavior of the user is obtained by the controller102. Then, the controller102optimizes one or more components of the vehicle110for the user, based on the detected figure and/or behavior of the user. The specific operations performed by the controller102will be described in details later. The controller102may be a processor, a microprocessor or the like. 
The controller102may be provided on the vehicle110, for example, at the central console of the vehicle110, or integrated into the central console. Alternatively, the controller102may be provided remotely and may be accessed via various networks or the like. As shown inFIG.1, in some embodiments, the vehicle110may comprise various controllable components, including display(s)111(such as a dashboard, a central information display, a Head-Up Display (HUD) or a rear-seat display), mirrors112(e.g., a rear-view mirror, or side-view mirrors), seats113, a steering wheel114, or the like, which can be optimized by the controller102for the user. It will be apparent to those skilled in the art that, these components are listed here only for illustrative purpose rather than limiting the present disclosure. In some embodiments, as shown inFIG.1, the camera(s)101and the controller102both may be communicated with various components of the vehicle110, although the communication between the camera(s)101and the vehicle110is not necessary for the present invention. InFIG.1, a bi-directional arrow between the components represents a communication path therebetween, which may be a direct connection via tangible wire(s) or in a wireless way (such as via radio, RF, or the like). In other embodiments, the communication between the components may be established indirectly, for example, via a network (not shown) or other intermediate component (such as a relay component). In some cases that the communications are established via a network, the network may include a local area network (LAN), a wide area network (WAN) (e.g., the Internet), a virtual network, a telecommunications network, and/or other interconnected paths across which multiple entities may communicate. In some embodiments, the network includes Bluetooth® communication networks or a cellular communications network for sending and receiving data via e.g. short messaging service (SMS), multimedia messaging service (MMS), hypertext transfer protocol (HTTP), direct data connection, WAP, e-mail, etc. In other embodiments, the network may be a mobile data network such as CDMA, GPRS, TDMA, GSM, WIMAX, 3G, 4G, LTE, VoLTE, or any other mobile data network or combination of mobile data networks. The features, types, numbers, and locations of the camera(s)101, the controller102and the components of the vehicle110as well as the communications therebetween have been described in detail. But as can be easily understood by those skilled in the art, the features, types, numbers, and locations of the above components are not limited to the illustrated embodiments, but can be adapted/altered according to the actual requirements. As described above, one or more components of the vehicle can be optimized/customized/personalized for the user automatically by utilizing the cameras in the vehicle, that is to say, the vehicle can behave more intelligently to the user, and thus the user experience can be improved. Additionally, in the case that the cameras already existing in the vehicle serve as the camera(s)101of the present disclosure, the cameras can be utilized more efficiently. Next, the operations of the controller102will be described in detail. Referring toFIG.2, it illustrates a flow chart showing a method200for use with a vehicle in accordance with one or more exemplary embodiments of the present disclosure. 
It should be understood by those skilled in the art that, the method200, as well as methods300,500,600and700as will be described below with reference toFIGS.3and5-7, may be performed by e.g. the above-described controller102ofFIG.1, or another apparatus. The steps of the methods200,300,500,600and700presented below are intended to be illustrative. In some embodiments, these methods may be accomplished with one or more additional steps not described, and/or without one or more of the steps discussed. Additionally, in some embodiments, the methods may be implemented in one or more processing devices. The one or more processing devices may include one or more modules executing some or all of the steps of the methods in response to instructions stored electronically on an electronic storage medium. The one or more processing modules may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the steps of the methods. As shown inFIG.2, at step210, a figure and/or a behavior of a user sat in the vehicle is obtained. The figure and/or behavior is detected by using one or more cameras located in the vehicle. At step220, one or more components of the vehicle are optimized for the user, based on the detected figure and/or behavior of the user. In some implementations of the above method200, the step210may comprise: obtaining a figure of a driver of the vehicle, which is detected by using the one or more cameras, and the step220may comprise: controlling to adjust a position of at least one of the components so as to facilitate the driver to operate the vehicle, based on the detected figure of the driver. For example, the method200may be specifically implemented as method300shown inFIG.3. FIG.3illustrates a flow chart of a specific example of the method200. In this example, the user to be detected is a driver of the vehicle. As shown inFIG.3, at step310, a figure of the driver is obtained. The figure is detected by using one or more cameras located in the vehicle. At step320, at least one of the components of the vehicle is controlled to adjust its position so as to facilitate the driver to operate the vehicle, based on the detected figure of the driver. In some cases, the figure of the driver may comprise an eye position of the driver, and the position may be adjusted based on the detected eye position of the driver. An example implementation of the method300is shown inFIG.4. InFIG.4, when the driver sits in the vehicle without any setting and before he drives the vehicle, some components of the vehicle can be adjusted automatically so as to facilitate the driver to operate the vehicle. Before adjusting the components, the vehicle may ask for the driver's permission via a visual component or an audio component, as shown inFIG.4. For example, the vehicle may display or voice “Hello! Do you want to automatically configure your cockpit?”, and the driver may choose or answer “Yes, gladly.” Then the method300starts. The camera(s)101detects the figure of the driver including the eye position, and the positions of the HUD111, the side-view mirror112, the driver's seat113and the steering wheel114can be automatically adjusted to the best place in directions as indicated by the arrows inFIG.4, based on the detected figure of the driver. For example, the positions of the HUD111and the side-view mirror112can be adjusted based on the detected eye position.
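As a rough illustration of step320, the mapping from a detected eye position to component set-points might be sketched as below. The linear relations, reference values and limits are hypothetical and are not taken from the disclosure; they only show how the detected figure could drive the adjustment of the HUD, side-view mirror, seat and steering wheel.

```python
# Hypothetical sketch of step 320: adjusting component positions from the
# detected eye position. All constants are invented for illustration only.

from dataclasses import dataclass

@dataclass
class EyePosition:
    height_mm: float   # eye height above the seat rail
    depth_mm: float    # horizontal distance from the steering wheel

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def target_positions(eye: EyePosition) -> dict:
    """Map the detected eye position to component set-points."""
    return {
        # raise the HUD image roughly with eye height
        "hud_tilt_deg": clamp(5.0 + 0.01 * (eye.height_mm - 620.0), 0.0, 10.0),
        # angle the side-view mirror so the sight line clears the rear fender
        "mirror_yaw_deg": clamp(12.0 + 0.02 * (eye.depth_mm - 700.0), 5.0, 20.0),
        # slide the seat so the pedals stay within comfortable reach
        "seat_slide_mm": clamp(eye.depth_mm - 650.0, -60.0, 120.0),
        "wheel_height_mm": clamp(0.3 * (eye.height_mm - 620.0), -25.0, 25.0),
    }

if __name__ == "__main__":
    print(target_positions(EyePosition(height_mm=650.0, depth_mm=720.0)))
```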
In some implementations, the step210may comprise: obtaining a behavior of a user of the vehicle, which is detected by using the one or more cameras, and the step220may comprise: controlling to present a content to the user via the one or more components, based on the detected behavior of the user. For example, the method200may be specifically implemented as method500shown inFIG.5. FIG.5illustrates a flow chart of a specific example of the method200. In this example, the user to be detected can be a driver or a passenger. As shown inFIG.5, at step510, a behavior of the user is obtained. The behavior is detected by using one or more cameras located in the vehicle. At step520, the one or more components are controlled to present a content to the user via the one or more components, based on the detected behavior of the user. The one or more components may be controlled by the controller102ofFIG.1directly or indirectly (e.g., via an intermediate component like an actuator). In some cases where the content to be presented is a driving safety message and the user is a driver of the vehicle, the detected behavior may indicate whether the driver is watching a display of the vehicle or not. Accordingly, in the step520, in response to the detected behavior indicating the driver is watching a display of the vehicle, the watched display is controlled to display the driving safety message, and in response to the detected behavior indicating the driver is not watching any display of the vehicle, other component(s) than the display(s)111are controlled to present the driving safety message to the user via an audio output, a haptic output, and/or an odor output. For example, when a camera disposed outside the vehicle detects a speed limit from e.g. road traffic signs, which is lower than the real-time speed of the vehicle, there is a need to warn the driver by presenting a driving safety message to him/her, such as displaying/voicing “Please slow down!” or “Overspeed!”, issuing a particular sound, vibrating the steering wheel or the seat, releasing a particular odor and/or the like. The channel for presenting the driving safety message can be determined based on the detected behavior of the driver. For example, in the case of detecting the driver is watching a display, e.g. a dashboard typically behind the steering wheel, the driving safety message may be displayed on the dashboard; otherwise, the driving safety message may be presented to the user via another channel. Thus, it is ensured that the driver can instantly follow the driving safety message, and the safety during the driving can be improved.
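A compact sketch of the channel selection described for step520 is given below. The function signature, return structure and the 'dashboard' identifier are illustrative assumptions; the point is only that the watched display, if any, receives the driving safety message, and non-visual channels are used otherwise.

```python
# Sketch of step 520: route the driving safety message to the watched
# display if the cabin camera reports one, otherwise to non-visual channels.

from typing import Optional

def present_safety_message(message: str, watched_display: Optional[str]) -> dict:
    """watched_display is e.g. 'dashboard' or 'hud', or None when the
    detected behavior indicates the driver is not watching any display."""
    if watched_display is not None:
        return {"channel": "display", "target": watched_display, "text": message}
    # driver is not watching a display: fall back to audio/haptic output
    return {"channels": ["audio", "haptic"], "text": message}

if __name__ == "__main__":
    print(present_safety_message("Please slow down!", "dashboard"))
    print(present_safety_message("Please slow down!", None))
```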
As shown inFIG.6, at step610, behaviors of the user at the time when watching contents displayed on one or more displays of the vehicle are obtained over a time period. The behaviors are detected by using the camera(s)101. In some cases, the time period may be a plurality of minutes, hours, days, months or years. In other cases, the time period may comprise all the past. In some embodiments, the watched contents may comprise at least one of driving information, a text, a logo, an icon, a graphic, a movie, a list, a news, a navigation address, a message, or a phone number. In some embodiments, the detected behaviors may comprise an eye movement, a gaze direction, a facial expression, a body motion or the like. For example, the camera(s)101can detect where the user looks and how, e.g., the time spent on the content by the user, the facial expression or body motion of the user when watching the content. In addition, the camera(s)101can track/identify eye movements or gaze directions typical for recognizing a logo or reading text. The sentiment of the user when watching the content, e.g., boredom, interest, excitement, laughter, anger, surprise, can be obtained by analyzing the facial expression and/or body motion (such as the motion of head, shoulder, arms or the like). At step621, preference of the user is obtained based on the detected behaviors of the user associated with their corresponding watched contents. In some embodiments, the preference can be reflected by a user preference model established by artificial intelligence, such as a machine learning approach, based on the detected behaviors associated with the watched contents. In some embodiments, the preference of the user may comprise: preferred content, preferred type of content, content of interest, type of content of interest, preferred presentation format, preferred presentation channel or the like. In some embodiments, the preference of the user may be obtained by: extracting information regarding the respective watched contents from the behaviors of the user associated with their corresponding contents, the information including at least one of: time spent on the content by the user, sentiment of the user when watching the content, classifying information of the content, a transitionary effect for displaying the content (e.g. content sliding graphically), an interesting portion in the content, metadata of the content, or a key word of the content; and obtaining the preference of the user based on the extracted information. At step622, the components of the vehicle are optimized to present a content for the user, based on the preference of the user. 
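One possible, simplified reading of step621 is sketched below: dwell time and a sentiment score per watched content are accumulated into per-category interest scores. The scoring rule and data layout are assumptions made for illustration; the embodiment only requires that some preference model (for example, one learned by a machine learning approach) be derived from the detected behaviors.

```python
# Simplified sketch of step 621: deriving per-category interest scores from
# observed behaviors. The weighting rule is an assumption, not the disclosure.

from collections import defaultdict

def learn_preferences(observations):
    """observations: iterable of (category, dwell_seconds, sentiment),
    where sentiment ranges from -1.0 (bored) to +1.0 (excited)."""
    scores = defaultdict(float)
    for category, dwell, sentiment in observations:
        scores[category] += dwell * (1.0 + sentiment)
    total = sum(scores.values()) or 1.0
    return {cat: round(s / total, 3) for cat, s in scores.items()}

if __name__ == "__main__":
    history = [
        ("sports", 40.0, 0.6),
        ("politics", 10.0, -0.2),
        ("sports", 25.0, 0.4),
        ("weather", 5.0, 0.0),
    ]
    print(learn_preferences(history))   # sports should rank highest
```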
In some embodiments, the optimizing step622may comprise at least one of the following steps622-1,622-2,622-3and622-4:
Step622-1: selecting one or more preferred contents to be displayed on one of the displays from candidate contents, based on the preference of the user;
Step622-2: recommending one or more preferred contents to be displayed on one of the displays, based on the preference of the user;
Step622-3: determining a displaying format (e.g., how to display, summarized text or full text, more/less graphics, or the like) for a content to be displayed on one of the displays, based on the preference of the user;
Step622-4: selecting at least one presentation channel for presenting a content to the user, based on the preference of the user, wherein the presentation channels are configured to provide at least one of a visual output, an audio output, a haptic output, or an odor output for the content.
For example, from the past detected behaviors of a driver, it can be learned that, when watching the dashboard behind the steering wheel, he/she is most likely to watch the speed. Thus, once it is detected that the driver is watching the dashboard, the speed may be highlighted, enlarged or displayed in another eye-catching way on the dashboard. In other embodiments, from the past detected behaviors of the driver, it can be learned which of the presenting manners/channels is the most effective for the driver, for example, the presenting manner/channel in which the time spent by the driver on the content is shortest or the response of the driver to the content is fastest. Then, the driving information, the driving safety message and other kinds of safety-related information can be presented in the most effective way to the driver, which may improve the safety. For another example, when the user is browsing through several pieces of news shown on a display, the camera can detect the user's gaze directions, eye movements, facial expressions, body motions, and/or the like. From these behaviors detected over a time period, some reactions of the user to the news can be extracted, e.g., the time spent on each piece of the news by the user, the sentiment of the user when watching each piece of the news, and so on. Combining the extracted reactions with the information (e.g., classifying information, metadata, key word) of the corresponding news, the preference (e.g., the interest) of the user can be obtained. Then, news to be displayed for the user will be optimized so as to match the user's interest. For example, from the behaviors detected by the camera over several days/months or a specified time period, it can be learned that the user spent the most time on sports news, i.e., the user is most interested in sports news. Then, in the future, sports news will be displayed at the top of the display, will be recommended to the user, will be displayed to the user in the most eye-catching way, or the like. In view of the above, the attractiveness/importance of new potential content to the user can be predicted, more relevant content can be displayed on the display, and/or new content can be presented to the user in his/her preferred or effective way, given the past detected behaviors associated with the contents. Thus, the user experience can be improved, and in some applications, the safety can also be improved. In some implementations of the above method600, the user is a driver of the vehicle, and the content to be presented is the driving safety message.
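Continuing the sports-news example, steps622-1/622-2 could be approximated by ordering candidate contents by the learned category scores, as in the sketch below. The candidate structure and tie-breaking behavior are assumptions made for the example.

```python
# Sketch of steps 622-1/622-2: ordering candidate contents so that the
# categories the user is most interested in appear first.

def rank_contents(candidates, preferences):
    """candidates: list of (title, category); preferences: category -> score."""
    return sorted(candidates,
                  key=lambda item: preferences.get(item[1], 0.0),
                  reverse=True)

if __name__ == "__main__":
    prefs = {"sports": 0.71, "weather": 0.05, "politics": 0.09}
    news = [("Election update", "politics"),
            ("Storm warning tonight", "weather"),
            ("Cup final highlights", "sports")]
    for title, cat in rank_contents(news, prefs):
        print(cat, "-", title)   # sports news is listed on top
```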
The method600may further comprise: obtaining a behavior indicating whether the driver is watching one of the displays of the vehicle or not at the time when a driving safety message is obtained, which is detected by using the one or more cameras. The step622may be implemented as: in response to the behavior indicating the driver is watching one of the displays of the vehicle, controlling to display the driving safety message on the watched display in a manner optimized for the driver based on the preference of the driver; and in response to the behavior indicating the driver is not watching any display of the vehicle, controlling to present the driving safety message to the driver via an audio output, a haptic output, and/or an odor output from one or more of the components, based on the preference of the driver. For example, the method600may be specifically implemented as method700shown inFIG.7. FIG.7illustrates a flow chart of a specific example of the method600. In this example, the user to be detected is a driver of the vehicle, and the content to be presented is a driving safety message. As shown inFIG.7, at step710, like the above-mentioned step610, behaviors of the driver at the time when watching contents displayed on one or more displays of the vehicle are obtained over a time period. At step721, like the above-mentioned step621, preference of the driver is obtained based on the detected behaviors associated with their corresponding watched contents. At step730, a driving safety message is obtained and is intended to be presented to the driver. At step740, a behavior of the driver is detected by using the one or more cameras at the time when the driving safety message is obtained, the behavior indicating whether the driver is watching one of the displays of the vehicle or not. If “yes” at step740, proceed to step722, which controls to display the driving safety message on the watched display in a manner optimized for the driver based on the preference of the driver. If “no” at step740, proceed to step723, which controls to present the driving safety message to the driver via an audio output, a haptic output, and/or an odor output from one or more of the components in an optimized manner, based on the preference of the driver. In some embodiments, the driving safety message to be presented may be the one discussed in the example in relation to the method500ofFIG.5, i.e., the warning message regarding overspeed. The channel and/or manner for presenting the driving safety message can be determined based on the behavior of the driver detected when obtaining the message and the preference of the user. For example, in the case of detecting the driver is watching a display, the driving safety message may be displayed on the watched display in a highlighted way or other preferred/effective way; otherwise, the driving safety message may be presented in other particularly preferred/effective manner for the driver. Thus, the safety during the driving can be improved while the user experience can be improved. In some embodiments, before the optimizing operation, the user's identity (e.g. who is in the driver seat) is detected to ensure personalization. In some cases, the above methods200-300and500-700may further comprise, before the step210and alternative steps310and510-710respectively: determining the identity of the user by using the one or more cameras. 
Then, the above-discussed optimizing step may comprise: optimizing the one or more components of the vehicle for the user, based on the detected figure and/or behavior of the user associated with the identity of the user. For example, the camera(s)101can be used to automatically detect the facial feature, the iris feature or the like of the user, and then the controller102can determine the identity of the user based on these features. Combining with the identity of the user, the personalization of the optimizing operation can be ensured, and it will be especially useful for the applications in which there are more than one driver alternatively operating one vehicle. In some embodiments, there are many kinds of displays disposed in a vehicle, for example, a dashboard typically behind the steering wheel, a central information display typically in the center console, a HUD and/or a rear-seat display, one or more of which can serve as the displays as discussed above. In some embodiments, the displays may comprise one or more of a flat display, a curved display, a flexible display, projection display or the like. It will be apparent to those skilled in the art that the present disclosure is not limited to the above-listed displays, but can be any type of displays. Please note that, the orders in which the steps of methods200,300,500,600and700are illustrated inFIGS.2-3and5-7respectively and described as above are intended to be illustrative only and non-limiting, unless specifically stated otherwise. Please also note that, the details discussed in one of the above embodiments can also be applied to other embodiments, and the above embodiments can be combined arbitrarily, unless specifically stated otherwise. FIG.8illustrates a block diagram of an apparatus800for use with a vehicle (e.g., the controller102as shown inFIG.1) in accordance with an exemplary embodiment of the present disclosure. The blocks of the apparatus800may be implemented by hardware, software, firmware, or any combination thereof to carry out the principles of the present disclosure. It is understood by those skilled in the art that the blocks described inFIG.8may be combined or separated into sub-blocks to implement the principles of the present disclosure as described above. Therefore, the description herein may support any possible combination or separation or further definition of the blocks described herein. Referring toFIG.8, the apparatus800may comprise: an obtaining unit801for obtaining a figure and/or a behavior of a user sat in the vehicle, which is detected by using one or more cameras located in the vehicle; and an optimizing unit802for optimizing one or more components of the vehicle for the user, based on the detected figure and/or behavior of the user. Please note that, the respective units in the apparatus800can be configured to perform the respective operations as discussed above in the methods200,300,500,600and700respectively shown inFIGS.2-3and5-7, and thus their details are omitted here. Furthermore, the apparatus800may comprise additional units (not shown) for performing the steps as discussed above in the methods200,300,500,600and700, if needed. FIG.9illustrates a general computing device2000wherein the present disclosure is applicable in accordance with one or more exemplary embodiments of the present disclosure. With reference toFIG.9, a computing device2000, which is an example of the hardware device that may be applied to the aspects of the present disclosure, will now be described. 
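Purely as a structural sketch, the apparatus800 can be pictured as a thin object holding an obtaining unit and an optimizing unit; the camera and vehicle interfaces used here are placeholders invented for the example and are not part of the disclosure.

```python
# Minimal structural sketch of the apparatus 800 (obtaining unit + optimizing
# unit). The camera/vehicle objects are assumed placeholders.

class Apparatus800:
    def __init__(self, camera, vehicle):
        self.camera = camera      # assumed to expose detect_user()
        self.vehicle = vehicle    # assumed to expose adjust(component, value)

    def obtaining_unit(self):
        """Step 210: obtain the detected figure and/or behavior of the user."""
        return self.camera.detect_user()

    def optimizing_unit(self, detection):
        """Step 220: optimize one or more components based on the detection."""
        for component, value in detection.get("suggested_settings", {}).items():
            self.vehicle.adjust(component, value)
```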
The computing device2000may be any machine configured to perform processing and/or calculations, may be but is not limited to a work station, a server, a desktop computer, a laptop computer, a tablet computer, a personal data assistant, a smart phone, an on-vehicle computer or any combination thereof. The aforementioned controller102, or the apparatus800for use with the vehicle may be wholly or at least partially implemented by the computing device2000or a similar device or system. The computing device2000may comprise elements that are connected with or in communication with a bus2002, possibly via one or more interfaces. For example, the computing device2000may comprise the bus2002, one or more processors2004, one or more input devices2006and one or more output devices2008. The one or more processors2004may be any kinds of processors, and may comprise but are not limited to one or more general-purpose processors and/or one or more special-purpose processors (such as special processing chips). The input devices2006may be any kinds of devices that can input information to the computing device, and may comprise but are not limited to a mouse, a keyboard, a touch screen, a microphone and/or a remote control. The output devices2008may be any kinds of devices that can present information, and may comprise but are not limited to display, a speaker, a video/audio output terminal, a vibrator and/or a printer. The computing device2000may also comprise or be connected with non-transitory storage devices2010which may be any storage devices that are non-transitory and can implement data stores, and may comprise but are not limited to a disk drive, an optical storage device, a solid-state storage, a floppy disk, a flexible disk, hard disk, a magnetic tape or any other magnetic medium, a compact disc or any other optical medium, a ROM (Read Only Memory), a RAM (Random Access Memory), a cache memory and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions and/or code. The non-transitory storage devices2010may be detachable from an interface. The non-transitory storage devices2010may have data/instructions/code for implementing the methods and steps which are described above. The computing device2000may also comprise a communication device2012. The communication device2012may be any kinds of device or system that can enable communication with external apparatuses and/or with a network, and may comprise but are not limited to a modem, a network card, an infrared communication device, a wireless communication device and/or a chipset such as a Bluetooth™ device, 1302.11 device, WiFi device, WiMax device, cellular communication facilities and/or the like. The transmitter/receiver/communication device as aforementioned may, for example, be implemented by the communication device2012. When the computing device2000is used as an on-vehicle device, it may also be connected to external device, for example, a GPS receiver, sensors for sensing different environmental data such as an acceleration sensor, a wheel speed sensor, a gyroscope and so on. In this way, the computing device2000may, for example, receive location data and sensor data indicating the travelling situation of the vehicle. When the computing device2000is used as an on-vehicle device, it may also be connected to other facilities (such as an engine system, a wiper, an anti-lock Braking System or the like) for controlling the traveling and operation of the vehicle. 
In addition, the non-transitory storage device2010may have map information and software elements so that the processor2004may perform route guidance processing. In addition, the output device2006may comprise a display for displaying the map, the location mark of the vehicle, images indicating the travelling situation of the vehicle and also the visual signals. The output device2006may also comprise a speaker for audio output. The bus2002may include but is not limited to Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Particularly, for an on-vehicle device, the bus2002may also include a Controller Area Network (CAN) bus or other architectures designed for application on an automobile. The computing device2000may also comprise a working memory2014, which may be any kind of working memory that may store instructions and/or data useful for the working of the processor2004, and may comprise but is not limited to a random access memory and/or a read-only memory device. Software elements may be located in the working memory2014, including but are not limited to an operating system2016, one or more application programs2018, drivers and/or other data and codes. Instructions for performing the methods and steps described in the above may be comprised in the one or more application programs2018, and the units of the aforementioned controller102, or the apparatus800for use with the vehicle may be implemented by the processor2004reading and executing the instructions of the one or more application programs2018. More specifically, the aforementioned controller102, or the apparatus800for use with the vehicle may, for example, be implemented by the processor2004when executing an application2018having instructions to perform the steps of the above-mentioned methods. In addition, the obtaining unit801of the aforementioned apparatus800may, for example, be implemented by the processor2004when executing an application2018having instructions to perform the step210of the method ofFIG.2. Other units of the aforementioned apparatus800may also, for example, be implemented by the processor2004when executing an application2018having instructions to perform one or more of the aforementioned respective steps. The executable codes or source codes of the instructions of the software elements may be stored in a non-transitory computer-readable storage medium, such as the storage device(s)2010described above, and may be read into the working memory2014possibly with compilation and/or installation. The executable codes or source codes of the instructions of the software elements may also be downloaded from a remote location. It should further be understood that the components of computing device2000can be distributed across a network. For example, some processing may be performed using one processor while other processing may be performed by another processor remote from the one processor. Other components of computing system2000may also be similarly distributed. As such, computing device2000may be interpreted as a distributed computing system that performs processing in multiple locations. Although some specific embodiments of the present invention have been demonstrated in detail with examples, it should be understood by a person skilled in the art that the above examples are only intended to be illustrative but not to limit the scope of the present invention. 
Various combinations of the aspects/embodiments in the specification shall be contained in the protection scope of the present invention. It should be understood by a person skilled in the art that the above embodiments can be modified without departing from the scope and spirit of the present invention. The scope of the present invention is defined by the attached claims. | 34,379 |
11858527 | DESCRIPTION OF THE EMBODIMENTS Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. Note that the following embodiments do not limit the invention according to the claims, and not all combinations of features described in the embodiments are essential to the invention. Two or more of a plurality of the features described in the embodiments may be optionally combined together. In addition, the same or similar constituent elements are denoted by the same reference numerals, and overlapping descriptions will be omitted. First Embodiment [Vehicle Configuration] FIG.1is a block diagram of a vehicle control device according to an embodiment of the present disclosure, which controls a vehicle1. InFIG.1, the vehicle1is schematically illustrated in a plan view and a side view. The vehicle1is, for example, a sedan-type four-wheeled passenger vehicle. The control device ofFIG.1includes a control system2. The control system2includes a plurality of electronic control units (ECUs)20to29communicably connected by an in-vehicle network. Each ECU functions as a computer that includes a processor represented by a central processing unit (CPU), a storage device such as a semiconductor memory or the like, an interface with an external device, and the like. The storage device stores programs executed by the processor, data used for processing by the processor, and the like. Each ECU may include a plurality of processors, storage devices, interfaces, and the like. Hereinafter, functions and the like assigned to each of the ECUs20to29will be described. Note that the number of ECUs and the functions assigned to the ECUs can be designed as appropriate and can be subdivided or integrated compared with the present embodiment. The ECU20executes control related to automated driving of the vehicle1. In automated driving, at least one of the steering and acceleration/deceleration of the vehicle1is automatically controlled. In an example of control described later, both steering and acceleration/deceleration are controlled automatically. The ECU21controls an electric power steering device3. The electric power steering device3includes a mechanism that steers a front wheel in accordance with a driver's driving operation (steering operation) on a steering wheel31. In addition, the electric power steering device3includes a motor that exerts a driving force for assisting the steering operation and automatically steering the front wheel, a sensor that detects a steering angle, and the like. When the driving state of the vehicle1is automated driving, the ECU21automatically controls the electric power steering device3in response to an instruction from the ECU20and controls a direction of advance of the vehicle1. The ECUs22and23perform control of detection units41to43that detect the surrounding situation of the vehicle and information processing of the detection result. The detection unit41is a camera that captures an image of the front of the vehicle1(hereinafter, it may be referred to as a camera41) and is attached to the vehicle interior side of the windshield at the front of the roof of the vehicle1in the present embodiment. By analyzing the image captured by the camera41, it is possible to extract a contour of an object or extract a division line (white line or the like) of a lane on a road. 
The detection unit42is a light detection and ranging (lidar) (hereinafter, it may be referred to as a lidar42), detects an object around the vehicle1, measures a distance to the object, and the like. In the present embodiment, five lidars42are provided, one at each corner portion of a front portion of the vehicle1, one at the center of a rear portion of the vehicle1, and one at each side of the rear portion of the vehicle1. The detection unit43is a millimeter-wave radar (hereinafter, it may be referred to as a radar43), detects an object around the vehicle1, and measures a distance to the object. In the present embodiment, five radars43are provided, one at the center of the front portion of the vehicle1, one at each corner portion of the front portion of the vehicle1, and one at each corner portion of the rear portion of the vehicle1. The ECU22controls one camera41and each lidar42and executes information processing on the detection result. The ECU23controls the other camera41and each of the radars43and executes information processing on the detection result. Since two sets of devices for detecting the surrounding situation of the vehicle are provided, the reliability of the detection result can be improved, and since different types of detection units such as a camera, lidar, and radar are provided, the surrounding environment of the vehicle can be analyzed in multiple ways. The ECU24controls a gyro sensor5, a global positioning system (GPS) sensor24b, and a communication device24cand executes information processing on a detection result or a communication result. The gyro sensor5detects a rotational motion of the vehicle1. The course of the vehicle1can be determined based on the detection result of the gyro sensor5, the wheel speed, and the like. The GPS sensor24bdetects the current position of the vehicle1. The communication device24cperforms wireless communication with a server that provides map information and traffic information and acquires these pieces of information. The ECU24can access a map information database24aconstructed in the storage device, and the ECU24performs a search for a route from the current position to a destination and the like. The ECU25includes a communication device25afor vehicle-to-vehicle communication. The communication device25aperforms wireless communication with other surrounding vehicles to exchange information between the vehicles. The ECU26controls a power plant6. The power plant6is a mechanism that outputs a driving force for rotating driving wheels of the vehicle1and includes, for example, an engine and a transmission. For example, the ECU26controls the output of the engine according to the driving operation (accelerator operation or acceleration operation) of the driver detected by an operation detection sensor7aprovided on an accelerator pedal7A and switches the gear ratio of the transmission based on information such as the vehicle speed detected by a vehicle speed sensor7cand the like. When the driving state of the vehicle1is automated driving, the ECU26automatically controls the power plant6in response to an instruction from the ECU20and controls the acceleration/deceleration of the vehicle1. The ECU27controls a light device (headlight, tail light, and the like) including a direction indicator8(blinker). In the example ofFIG.1, the direction indicators8are provided at the front portion, the door mirror, and the rear portion of the vehicle1. The ECU28controls an input/output device9. 
The input/output device9outputs information to the driver and receives an input of information from the driver. A sound output device91notifies the driver of information by sound. A display device92notifies the driver of information by displaying an image. The display device92is arranged, for example, in front of a driver's seat and constitutes an instrument panel or the like. Note that, although the sound and the image display have been exemplified here, it is also possible to report information by using vibration or light. In addition, it is also possible to report information by using a combination of some of the sound, image display, vibration, and light. Furthermore, it is also possible to change the combination or the notification mode in accordance with the level (for example, the degree of urgency) of information to be reported. An input device93is a switch group that is arranged at a position where the driver can operate it and is used to input an instruction to the vehicle1. The input device93may also include a voice input device. The ECU29controls a brake device10and a parking brake (not illustrated in the drawings). The brake device10is, for example, a disc brake device, and is provided on each wheel of the vehicle1to decelerate or stop the vehicle1by applying resistance to the rotation of the wheel. The ECU29controls the operation of the brake device10in response to the driver's driving operation (brake operation) detected by an operation detection sensor7bprovided on a brake pedal7B, for example. When the driving state of the vehicle1is automated driving, the ECU29automatically controls the brake device10in response to an instruction from the ECU20and controls the deceleration and stop of the vehicle1. The brake device10and the parking brake can also operate to maintain a stopped state of the vehicle1. In addition, in a case where the transmission of the power plant6includes a parking lock mechanism, the parking lock mechanism can also be operated to maintain the stopped state of the vehicle1. Example of Control Function The control function of the vehicle1according to the present embodiment includes a travel-related function related to control of driving, braking, and steering of the vehicle1, and a notification function related to notifying the driver of information. Note that each control function may be provided with a plurality of control levels in accordance with the performance or the like of the vehicle1. Examples of the travel-related functions include vehicle speed keeping control, acceleration/deceleration timing control, lane-keeping control, lane deviation suppression control (out-of-road deviation suppression control), lane change control, preceding vehicle following control, collision reduction brake control, and erroneous start suppression control. Examples of the notification function include adjacent vehicle notification control, preceding vehicle start notification control, and taking over driving request notification control. The vehicle speed keeping control is a control of keeping traveling at a predetermined vehicle speed. For example, an accelerator or a brake is controlled in order to keep a vehicle speed in accordance with a shape of a road on which the vehicle is traveling or a change in an external environment. The acceleration/deceleration timing control is a control of determining the timing of acceleration/deceleration of the vehicle based on a traveling state of the vehicle, transition to another operation, and the like. 
For example, even in a case where the same operation is performed in accordance with a curvature of a curve, a road shape, the travel position, and the like, the timing of acceleration/deceleration is different, and thus, these timings are controlled. In addition, vehicle speed control is performed such that the vehicle speed approaches a target traveling speed by combining vehicle speed keeping control and acceleration/deceleration timing control. The lane-keeping control is one of the controls of the position of the vehicle with respect to a lane and is the control of causing the vehicle to automatically travel on the travel trajectory set in the lane (without depending on the driving operation of the driver). The lane deviation suppression control is one of the controls of the position of the vehicle with respect to the lane, and detects a white line or a traveling road boundary (median strip, planting (lawn), curbs, and the like) and automatically performs steering control so that the vehicle does not exceed the line. The lane deviation suppression control and the lane-keeping control have different functions as described above. The lane change control is a control of automatically moving a vehicle from a lane in which the vehicle is traveling to an adjacent lane. By repeating the lane change control, it is also possible to move across a plurality of lanes or to return to the original lane after temporarily changing the lane to the adjacent lane. The preceding vehicle following control is a control of automatically following another vehicle traveling in front of the self-vehicle. The collision reduction brake control is a control of assisting in avoiding a collision by automatically braking in a case where the possibility of collision with an obstacle in front of the vehicle increases. The erroneous start suppression control is a control that limits the acceleration of the vehicle in a case where the acceleration operation by the driver is equal to or more than a predetermined amount in a stopped state of the vehicle and suppresses sudden start. The adjacent vehicle notification control is a control of notifying the driver of the presence of another vehicle traveling in the lane adjacent to the traveling lane of the self-vehicle, and for example, notifies the driver of the presence of another vehicle traveling on the side or the rear of the self-vehicle. The preceding vehicle start notification control is a control of reporting that the self-vehicle and another vehicle ahead of the self-vehicle are in a stopped state and the other vehicle ahead of the self-vehicle has started. The taking over driving request notification control is, for example, a control of requesting an operation from the driver (passenger) before and after a change when the traveling mode of the vehicle1changes. Since the content of the operation requested of the driver varies in accordance with the traveling mode, the notification content and the timing of notification may change in accordance with the content of the operation requested before and after the change. These notifications can be performed by an in-vehicle notification device. [Outline of Operation] An outline of control of the vehicle according to the present embodiment will be described with reference toFIG.2.
As examples of the traveling mode, a first traveling mode in which an operation (gripping steering, monitoring surroundings, and the like) by the driver is not requested while the vehicle1is traveling and a second traveling mode in which an operation by the driver is requested will be described as examples. Note that the operation requested in the second traveling mode is not particularly limited. In addition, in the first traveling mode, it is not limited to a state in which the driver's operation is not accepted, and the driver's operation may be accepted as required. It is assumed that the vehicle1according to the present embodiment can transition from the first traveling mode to the second traveling mode and transition from the second traveling mode to the first traveling mode. A condition of the mode transition is defined in advance. When the mode transition is performed, the driver is notified of the mode transition. For example, at the time of transition from the first traveling mode to the second traveling mode, a notification is given to the driver, and in a case where the driver performs a predetermined action or operation in response to the notification, the traveling mode transitions to the second traveling mode. As described above, the appropriate timing of notification of the mode transition (taking over driving) to the driver varies in accordance with the travel environment, the state of the driver, and the like. That is, in a case where the timing of notification at the time of the transition of the traveling mode is fixed, the timing of taking over driving is limited depending on the situation at that time, and the convenience of the operation by the user deteriorates. Therefore, in the present embodiment, the timing of notification at the time of mode transition is controlled in consideration of the traveling situation. FIG.2is a diagram for explaining a correspondence relationship between an upper limit speed and a timing of notification according to the present embodiment. The upper limit speed according to the present embodiment indicates the upper limit value of the traveling speed set in the state of traveling in the first traveling mode. The upper limit speed is switched in accordance with the surrounding environment during traveling. Examples of an element for switching the upper limit speed include a traveling speed set for the road on which the vehicle is traveling, a road shape, detection accuracy of a surrounding environment, a duration of the first traveling mode (continuous stable traveling), and the like, but not particularly limited thereto. In the present embodiment, a case where the upper limit speed is in three stages (80 kph, 60 kph, and 50 kph) will be described as an example. In addition, the notification intensity illustrated inFIG.2indicates the intensity at the time of performing notification, and is configured in three stages of “strong”, “medium”, and “weak”. The notification content is not particularly limited but is configured such that, for example, the combination and the like of the volume and the notification method is switched, and the driver can easily recognize the request for taking over as the notification intensity increases. 
In a case where the set upper limit speed is 80 kph, when the mode transitions from the first traveling mode to the second traveling mode, the mode transition is reported to the driver at the timing when the traveling speed of the self-vehicle reaches 65 kph, and a predetermined operation by the driver is requested. That is, in this case, the difference between the upper limit speed and the traveling speed of the self-vehicle is 15 kph. In addition, the intensity related to the notification at this time is set to “strong”. In addition, in a case where the set upper limit speed is 60 kph, when the mode transitions from the first traveling mode to the second traveling mode, the mode transition is reported to the driver at the timing when the traveling speed of the self-vehicle reaches 50 kph, and a predetermined operation by the driver is requested. That is, in this case, the difference between the upper limit speed and the traveling speed of the self-vehicle is 10 kph. In addition, the intensity related to the notification at this time is set to “medium”. In addition, in a case where the set upper limit speed is 50 kph, when the mode transitions from the first traveling mode to the second traveling mode, the mode transition is reported to the driver at the timing when the traveling speed of the self-vehicle reaches 45 kph, and a predetermined operation by the driver is requested. That is, in this case, the difference between the upper limit speed and the traveling speed of the self-vehicle is 5 kph. In addition, the intensity related to the notification at this time is set to “weak”. That is, the timing (vehicle speed of starting to take over driving) at which the notification of the request for taking over at the time of the mode transition is made and the notification intensity thereof are changed in accordance with the set upper limit speed. As the upper limit speed is higher, it tends to take more time for the driver to recognize the surrounding environment and situation at the time of taking over driving. Therefore, in a case where the upper limit speed is high, it is necessary to report to the driver earlier compared to a case where the upper limit speed is low, provide a longer time to the driver before taking over driving, and get the driver prepared for the driving. Therefore, it is effective to increase the difference from the vehicle speed at which starting to take over driving in accordance with the height of the upper limit speed. Note that the traveling speed of the self-vehicle to be the timing of notification of taking over driving illustrated inFIG.2may be the case of having become the traveling speed illustrated inFIG.2by either deceleration or acceleration. Alternatively, either acceleration or deceleration may be performed in accordance with the mode before and after the transition. In addition, inFIG.2, the vehicle speed at which starting to take over driving is set as a threshold (upper limit and lower limit), but the present invention is not limited thereto. For example, the vehicle speed at which starting to take over driving may be set in a range. In the above description, the traveling speed set for the road is described as one of the elements for switching the upper limit speed. The traveling speed set for the road may change in accordance with the situation in addition to the predefined legal speed. For example, there is a case where a speed limit lower than a legal speed is temporarily set in accordance with the occurrence of an accident or bad weather. 
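The correspondence ofFIG.2 described above amounts to a small lookup table; a sketch is given below using the values stated in the text (80/60/50 kph upper limits mapped to 65/50/45 kph notification speeds and strong/medium/weak intensities). The function name and error handling are illustrative assumptions.

```python
# Sketch of the FIG. 2 correspondence: upper limit speed -> vehicle speed at
# which the taking-over request is issued, plus the notification intensity.

TAKEOVER_TABLE = {
    80: {"notify_at_kph": 65, "intensity": "strong"},   # difference 15 kph
    60: {"notify_at_kph": 50, "intensity": "medium"},   # difference 10 kph
    50: {"notify_at_kph": 45, "intensity": "weak"},     # difference  5 kph
}

def takeover_parameters(upper_limit_kph: int) -> dict:
    try:
        return TAKEOVER_TABLE[upper_limit_kph]
    except KeyError:
        raise ValueError(f"no takeover entry for upper limit {upper_limit_kph} kph")

if __name__ == "__main__":
    print(takeover_parameters(60))   # {'notify_at_kph': 50, 'intensity': 'medium'}
```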
It is assumed that the temporarily set speed limit can be grasped by, for example, a speed limit sign or the like arranged on an expressway. In addition, the road shape is described as one of the elements for switching the upper limit speed. For example, the upper limit speed may be switched in accordance with the turning curvature in a curve (R shape) or the like of the road. In addition, the detection accuracy of the surrounding environment is described as one of the elements for switching the upper limit speed. For example, the detection accuracy may be based on a state of a road surface, weather, or the like, or may be based on a deterioration state of a detection unit (a sensor or the like) included in the self-vehicle, a change (limitation) in a detection range, or the like. In the present embodiment, a higher upper limit speed is to be set in a case where it is possible to travel stably, such as a case where surrounding information can be appropriately acquired by a detection unit or the like, a case where a shape or undulation of a road on which the vehicle is traveling does not exceed a certain change, and the like. [Processing Flow] A processing flow of control processing according to the present embodiment will be described with reference toFIG.3. In each control of this processing flow, various ECUs and the like included in the vehicle as described above perform processing in cooperation with each other, but here, a processing entity is illustrated as the control system2of the vehicle1in order to simplify the description. It is assumed that this processing flow is started when traveling in the first traveling mode described above. In S301, the control system2acquires information on the surrounding environment. Here, the information on the surrounding environment may be acquired based on, for example, map information. Alternatively, an image of a sign in the surrounding environment may be acquired by a camera provided in the vehicle1, and the information on the surrounding environment may be acquired by analyzing the image. Alternatively, data regarding the upper limit speed may be acquired from an external device via the communication device24c. Note that a method of acquiring information on the surrounding environment is not particularly limited, and a plurality of methods may be combined, or switching may be performed in accordance with the travel environment (weather, travel place, or the like). In addition, information (vehicle speed or the like) set by the driver may be referred to and used. For example, it is possible to combine a vehicle speed (set vehicle speed) set by the driver with a vehicle speed acquired from a sign of the surrounding environment to be treated as the surrounding information regarding the travel environment. The value of the set vehicle speed that can be set by the driver may vary in accordance with the traveling mode or the like. In S302, the control system2sets the upper limit speed based on the surrounding environment during traveling. Specifically, the upper limit speed as illustrated inFIG.2is determined and set based on the information on the surrounding environment acquired in S301. Here, it is assumed that one of the upper limit speeds in three stages illustrated inFIG.2is set. For example, in a case where the upper limit of the traveling speed defined for the road on which the vehicle is traveling is 80 kph, the highest speed (80 kph in the example ofFIG.2) among the upper limit speeds equal to or lower than the upper limit speed is set. 
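A possible sketch of the upper limit selection in S302 is shown below: the highest of the three selectable tiers that does not exceed the speed applicable to the road is chosen. Treating a driver-set speed as taking precedence over the sign value is one reading of the text above and is marked as an assumption in the code.

```python
# Sketch of S302: pick the highest selectable upper limit speed that does not
# exceed the applicable road speed. Values follow FIG. 2.

UPPER_LIMIT_TIERS = (80, 60, 50)   # kph, in descending order

def select_upper_limit(sign_speed_kph, set_speed_kph=None):
    # Assumption: a driver-set speed, when present, overrides the sign value.
    applicable = set_speed_kph if set_speed_kph is not None else sign_speed_kph
    for tier in UPPER_LIMIT_TIERS:
        if tier <= applicable:
            return tier
    return min(UPPER_LIMIT_TIERS)   # fall back to the lowest tier

if __name__ == "__main__":
    print(select_upper_limit(80))                      # -> 80
    print(select_upper_limit(80, set_speed_kph=55))    # -> 50
```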
Alternatively, in a case where the lower limit of the traveling speed is defined for the road, the upper limit speed may be set so as not to fall below the lower limit. In addition, as described above, in a case where the set vehicle speed by the driver is used, the value of the set vehicle speed may be used in preference to the value of the vehicle speed based on the sign. For example, in a case where the set vehicle speed is higher than the vehicle speed indicated by the sign, a high upper limit speed may be set in accordance with the set vehicle speed. In addition, in a case where the set vehicle speed is smaller than the vehicle speed indicated by the sign, a low upper limit speed may be set in accordance with the set vehicle speed. In S303, the control system2executes the traveling assistance control in the first traveling mode based on the upper limit speed set in S302. The content of the traveling assistance control here is not particularly limited, and examples thereof include the vehicle speed keeping control, the lane-keeping control, and the preceding vehicle following control as described above. In S304, the control system2determines whether or not an event requiring mode transition has occurred. In this example, it is determined whether or not an event requiring transition from the first traveling mode to the second traveling mode has occurred. The event here is not particularly limited, and examples thereof include a case where the end of the area where the vehicle can travel in the first traveling mode is approaching, a case where it is getting difficult to continue the first traveling mode due to a change in the surrounding environment. In a case where it is determined that the event requiring the mode transition has occurred (YES in S304), the process proceeds to S305. In a case where it is determined that the event does not occur (NO in S304), the process returns to S301, and the processing is repeated. In S305, the control system2acquires a predetermined speed (vehicle speed at which starting to take over driving) corresponding to the upper limit speed. In this example, it is assumed that the information illustrated inFIG.2is held in the storage unit and information on the vehicle speed at which starting to take over driving corresponding to the upper limit speed can be acquired by referring to this information. In S306, the control system2executes the traveling speed control of the self-vehicle with the predetermined speed acquired in S305as the upper limit. As the traveling assistance control here, in addition to acceleration/deceleration of the traveling speed, control of the travel position in the lane and control of adjusting the inter-vehicle distance with the surrounding vehicle may be performed. In addition, the degree of acceleration/deceleration of the traveling speed may change in accordance with the surrounding environment or the urgency of the event requiring the mode transition detected in S304. In S307, the control system2determines whether or not the traveling speed of the self-vehicle has reached the predetermined speed acquired in S305. In a case where it is determined that the traveling speed of the self-vehicle has reached the predetermined speed (YES in S307), the process proceeds to S308. In a case where it is determined that the traveling speed has not reached the predetermined speed (NO in S307), the process returns to S306, and the processing is repeated. 
In S308, the control system 2 requests the driver to take over driving in accordance with the transition of the traveling mode. The notification related to this request is performed with the intensity illustrated in FIG. 2. Regarding the request method, the request may be shown on a display or may be reported by sound, for example. In addition, the content of the requested operation may be reported. Here, examples of the notification contents include contents prompting the driver to monitor the surroundings, to grip the steering, and the like. In S309, the control system 2 determines whether or not the driver has performed a predetermined operation in response to the request for taking over driving in S308. For example, in a case where gripping the steering is required due to the mode transition, the determination may be made based on detection results of various sensors provided in the steering. In a case where it is necessary to monitor the surroundings, the direction of the face, the direction of the line of sight, or the like of the driver may be determined based on results of detection by various sensors provided in the vehicle. In a case where the operation of taking over driving has been detected (YES in S309), the process proceeds to S310. In a case where the operation of taking over driving has not been detected (NO in S309), the process returns to S308 and the processing is repeated. In a case where the process in S308 is performed again, the manner of the request (notification intensity) may be changed from the previous notification method. For example, in a case where the notification intensity is "medium", the notification intensity may be changed to "strong", and the notification may be intensified by increasing the volume of the sound or by additionally performing a predetermined display. In S310, the control system 2 causes the traveling mode to transition. In this example, a transition is made from the first traveling mode to the second traveling mode. In this case, the driver may be notified of the mode transition both before and after the transition. Then, the processing flow ends. As described above, in the present embodiment, by controlling the timing of notification related to taking over driving in accordance with the travel environment, it is possible to perform appropriate notification at the time of transition of the traveling state and to provide vehicle control with high user convenience.
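By way of a non-limiting illustration, the correspondence of FIG. 2 and the takeover portion of the flow of FIG. 3 described above may be sketched in Python as follows. The sketch is not part of the disclosed implementation: the control object and its methods (current_speed_kph, adjust_speed_toward, notify_takeover_request, takeover_operation_detected, switch_to_second_mode) are hypothetical stand-ins for the control system 2 and the cooperating ECUs, and the numeric values are simply the example values of FIG. 2.

```python
# Hypothetical lookup reflecting the FIG. 2 correspondence described above:
# set upper limit speed -> (vehicle speed at which taking over driving starts, notification intensity).
TAKEOVER_TABLE = {
    80: (65, "strong"),
    60: (50, "medium"),
    50: (45, "weak"),
}

def select_upper_limit(road_limit_kph: int) -> int:
    """S302 (simplified): choose the highest candidate upper limit speed not exceeding the road's limit."""
    candidates = [v for v in TAKEOVER_TABLE if v <= road_limit_kph]
    return max(candidates) if candidates else min(TAKEOVER_TABLE)

def takeover_notification_point(upper_limit_kph: int) -> tuple[int, str]:
    """S305: look up the takeover start speed and notification intensity for the set upper limit."""
    return TAKEOVER_TABLE[upper_limit_kph]

def run_mode_transition(control, upper_limit_kph: int) -> None:
    """S305 to S310 (simplified): bring the speed toward the takeover start speed,
    notify the driver, wait for the takeover operation, then switch traveling modes."""
    target_speed, intensity = takeover_notification_point(upper_limit_kph)
    while abs(control.current_speed_kph() - target_speed) > 1.0:   # S306 / S307
        control.adjust_speed_toward(target_speed)
    control.notify_takeover_request(intensity)                     # S308
    while not control.takeover_operation_detected():               # S309
        control.notify_takeover_request("strong")                  # repeat with increased intensity
    control.switch_to_second_mode()                                # S310
```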
Second Embodiment
In the first embodiment, the control of the timing of notification at the time of transition from the first traveling mode to the second traveling mode has been described. In the present embodiment, an embodiment in which an elapsed time from the transition to the first traveling mode is taken into consideration will be further described. Note that the description of configurations overlapping with the first embodiment will be omitted, and only differences will be described. FIG. 4 is a diagram for explaining a correspondence relationship between an upper limit speed and a timing of notification according to the present embodiment. FIG. 4 includes information on elapsed time in addition to the configuration of FIG. 2 described in the first embodiment. The elapsed time according to the present embodiment indicates the elapsed time from the transition to the first traveling mode. In the case of the example of FIG. 4, in a case where the upper limit speed is set to 80 kph, the timing of notification related to the request for taking over driving is switched in accordance with whether or not a time equal to or longer than a predetermined threshold (here, 30 minutes) has elapsed since the transition to the first traveling mode. In a case where 30 minutes or more have elapsed since the transition to the first traveling mode, the timing of notification related to the request for taking over driving is the timing when the traveling speed of the self-vehicle reaches 65 kph, and the difference between the upper limit speed and the traveling speed of the self-vehicle in this case is 15 kph. The intensity of the notification at this time is set to "strong". In a case where less than 30 minutes have elapsed since the transition to the first traveling mode, the timing of notification related to the request for taking over driving is the timing when the traveling speed of the self-vehicle reaches 70 kph, and the difference between the upper limit speed and the traveling speed of the self-vehicle in this case is 10 kph. The intensity of the notification at this time is set to "medium". That is, in a case where a longer time has elapsed since the transition to the first traveling mode, the timing of notification related to the request for taking over driving is controlled such that the difference between the upper limit speed and the traveling speed of the self-vehicle increases. Then, based on this, the processing flow of FIG. 3 described in the first embodiment is performed. Note that, in FIG. 4, a speed of 80 kph has been described as an example of the upper limit speed, but a similar threshold may be provided for another upper limit speed. In addition, the threshold for the elapsed time is not limited to a single value, and a plurality of thresholds may be used. In addition, different thresholds may be used in accordance with the upper limit speed. As described above, according to the present embodiment, in addition to the effects of the first embodiment, it is possible to control the timing of notification in accordance with the traveling state and the duration.
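As a non-limiting sketch, the elapsed-time variant of FIG. 4 could be expressed as the following Python helper. The threshold and speed values are only the example values given above for an 80 kph upper limit; the function name and its handling of other upper limits are assumptions made purely for illustration.

```python
# Illustrative only: FIG. 4 style adjustment of the takeover point by elapsed time.
ELAPSED_THRESHOLD_MIN = 30  # example threshold from the description above

def takeover_point_with_elapsed_time(upper_limit_kph: int,
                                     minutes_in_first_mode: float) -> tuple[int, str]:
    """Return (takeover start speed in kph, notification intensity), widening the speed
    margin when the first traveling mode has continued for a long time."""
    if upper_limit_kph == 80:
        if minutes_in_first_mode >= ELAPSED_THRESHOLD_MIN:
            return 65, "strong"   # long duration: larger 15 kph margin, stronger notification
        return 70, "medium"       # shorter duration: smaller 10 kph margin
    # Other upper limit speeds could carry similar (possibly different) thresholds.
    raise NotImplementedError
```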
Third Embodiment
In the present embodiment, a description will be given assuming that the first traveling mode is a traveling mode that can be continued while following a preceding vehicle in the lane in which the self-vehicle is traveling. Note that the description of configurations overlapping with the first embodiment will be omitted, and only differences will be described. FIG. 5 is a diagram for explaining a correspondence relationship between an upper limit speed and a timing of notification according to the present embodiment. FIG. 5 includes information on acceleration in addition to the configuration of FIG. 2 described in the first embodiment. The acceleration according to the present embodiment indicates the acceleration of the preceding vehicle being followed in the first traveling mode. In the case of the example of FIG. 5, in a case where the upper limit speed is set to 80 kph, the timing of notification related to the request for taking over driving is switched in accordance with whether or not the acceleration of the preceding vehicle is equal to or greater than a predetermined threshold (in this example, 2 m/s²). The acceleration of the preceding vehicle may be calculated from the traveling speed of the self-vehicle, the inter-vehicle distance, and the like, and the calculation method thereof is not particularly limited. Here, in a case where the acceleration of the preceding vehicle being followed is 2 m/s² or more, the timing of notification related to the request for taking over driving is the timing when the traveling speed of the self-vehicle reaches 65 kph, and the difference between the upper limit speed and the traveling speed of the self-vehicle in this case is 15 kph. The intensity of the notification at this time is set to "strong". In a case where the acceleration of the preceding vehicle being followed is less than 2 m/s², the timing of notification related to the request for taking over driving is the timing when the traveling speed of the self-vehicle reaches 70 kph, and the difference between the upper limit speed and the traveling speed of the self-vehicle in this case is 10 kph. The intensity of the notification at this time is set to "medium". That is, in a case where the acceleration of the preceding vehicle being followed in the first traveling mode is higher, the timing of notification related to the request for taking over driving is controlled such that the difference between the upper limit speed and the traveling speed of the self-vehicle increases. Then, based on this, the processing flow of FIG. 3 described in the first embodiment is performed. With such control, for example, even in a case where the preceding vehicle suddenly accelerates, the timing of notification can be switched in accordance with the traveling state. Note that, in FIG. 5, a speed of 80 kph has been described as an example of the upper limit speed, but a similar threshold may be provided for another upper limit speed. In addition, the threshold for the acceleration is not limited to a single value, and a plurality of thresholds may be used. In addition, different thresholds may be used in accordance with the upper limit speed. In addition, in the above example, the acceleration of the preceding vehicle has been described as an example, but the acceleration may be based on the acceleration of the self-vehicle that accompanies the following. As described above, according to the present embodiment, in addition to the effects of the first embodiment, it is possible to control the timing of notification in accordance with the acceleration during follow-up traveling.
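As one possible, non-limiting sketch of the FIG. 5 behavior, the preceding vehicle's acceleration could be estimated from quantities the self-vehicle already measures (its own speed and the inter-vehicle distance), since the description above notes the calculation method is not particularly limited. The estimation below (preceding speed taken as ego speed plus the rate of change of the gap) and the function names are assumptions for illustration only.

```python
# Illustrative only: estimate the preceding vehicle's acceleration and pick the takeover
# point per the FIG. 5 example for an 80 kph upper limit.
ACCEL_THRESHOLD = 2.0  # m/s^2, example value from the description above

def preceding_vehicle_accel(ego_speed_mps: list[float],
                            gap_m: list[float],
                            dt_s: float) -> float:
    """Preceding speed = ego speed + d(gap)/dt; its acceleration is the change of that speed.
    Expects at least three equally spaced samples."""
    v_prec = [v + (g2 - g1) / dt_s
              for v, g1, g2 in zip(ego_speed_mps[1:], gap_m[:-1], gap_m[1:])]
    return (v_prec[-1] - v_prec[-2]) / dt_s

def takeover_point_with_accel(upper_limit_kph: int, accel_mps2: float) -> tuple[int, str]:
    if upper_limit_kph == 80:
        return (65, "strong") if accel_mps2 >= ACCEL_THRESHOLD else (70, "medium")
    raise NotImplementedError  # other upper limits could define their own thresholds
```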
Fourth Embodiment
In the first embodiment, the control at the time of transition from the first traveling mode to the second traveling mode has been described. In the present embodiment, an embodiment that takes into consideration an action executed by the driver during the first traveling mode will be further described. Note that the description of configurations overlapping with the first embodiment will be omitted, and only differences will be described. As described above, in the first traveling mode, an operation (gripping the steering, monitoring surroundings, or the like) by the driver related to the travel control is not required. Therefore, in the first traveling mode, the driver can execute an action other than the operation related to traveling. Examples of such actions include having a conversation while facing a passenger, operating various devices such as a smartphone, taking a nap, leaving the seat (including a state in which the seat belt is removed), having a meal, and the like. Here, such actions are collectively referred to as "external tasks". Note that, in the present embodiment, the content of the external task performed by the driver can be determined based on a detection result of a detection unit (not illustrated in the drawings) such as an in-vehicle camera provided in the vehicle 1. FIG. 6 is a diagram for explaining a correspondence relationship between an upper limit speed and a timing of notification according to the present embodiment. FIG. 6 includes information on an executed external task in addition to the configuration of FIG. 2 described in the first embodiment. The executed external task according to the present embodiment indicates the content of the external task being executed by the driver in the first traveling mode. In the case of the example of FIG. 6, in a case where the upper limit speed is set to 80 kph, the timing of notification of taking over driving is switched in accordance with the action performed by the driver. In a case where the driver is away from the seat in the middle of the first traveling mode, the timing of notification related to the request for taking over driving is the timing when the traveling speed of the self-vehicle reaches 65 kph, and the difference between the upper limit speed and the traveling speed of the self-vehicle in this case is 15 kph. The intensity of the notification at this time is set to "strong". In a case where the driver is taking a nap (in a state in which the eyes are closed) in the middle of the first traveling mode, the timing of notification related to the request for taking over driving is the timing when the traveling speed of the self-vehicle reaches 70 kph, and the difference between the upper limit speed and the traveling speed of the self-vehicle in this case is 10 kph. The intensity of the notification at this time is set to "strong". In a case where the driver is operating a device such as a smartphone in the middle of the first traveling mode, the timing of notification related to the request for taking over driving is the timing when the traveling speed of the self-vehicle reaches 75 kph, and the difference between the upper limit speed and the traveling speed of the self-vehicle in this case is 5 kph. The intensity of the notification at this time is set to "medium". That is, in the middle of the first traveling mode, the timing of notification related to the request for taking over driving is controlled based on the content of the external task being executed by the driver, such that the difference between the upper limit speed and the traveling speed of the self-vehicle increases when an external task that is assumed to require a longer time for taking over driving is being performed. Then, based on this, the processing flow of FIG. 3 described in the first embodiment is performed. Note that, in the example of FIG. 6, the timing of notification of taking over driving is associated with the external task on a one-to-one basis, but the present invention is not limited thereto.
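As a non-limiting sketch, the one-to-one mapping of FIG. 6 for an 80 kph upper limit could be expressed as a simple table in Python. The task labels and function name are hypothetical; the speeds and intensities are the example values given above.

```python
# Illustrative only: FIG. 6 style mapping from the executed external task to the
# takeover start speed and notification intensity (80 kph upper limit example).
EXTERNAL_TASK_TABLE = {
    "away_from_seat":   (65, "strong"),   # 15 kph margin
    "napping":          (70, "strong"),   # 10 kph margin
    "operating_device": (75, "medium"),   #  5 kph margin
}

def takeover_point_for_task(upper_limit_kph: int, task: str) -> tuple[int, str]:
    if upper_limit_kph == 80 and task in EXTERNAL_TASK_TABLE:
        return EXTERNAL_TASK_TABLE[task]
    raise NotImplementedError  # other upper limits/tasks could be added analogously
```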
For example, a finer state of the driver may be recognized, and the timing of notification of taking over driving may be controlled in accordance with that state (its degree). In addition, in FIG. 6, a speed of 80 kph has been described as an example of the upper limit speed, but similar external tasks may be associated with another upper limit speed. In addition, the contents to be reported may be made different in accordance with the type and contents of the external task being executed by the driver. As described above, according to the present embodiment, in addition to the effects of the first embodiment, the timing of notification can be controlled in accordance with the external task executed by the driver in the first traveling mode.
Other Embodiments
In the embodiments described above, an example in which the upper limit speed can be set in three stages has been described. However, the present invention is not limited thereto, and the number of stages may be set to a larger number. Alternatively, the upper limit speed may be set not as a discrete value but as a continuous value. In this case, for example, a graph indicating a correspondence relationship between the upper limit speed and the timing of notification may be defined, and control may be performed based on this information. In addition, in the third embodiment described above, the timing of notification is controlled based on the acceleration during the following control. As another mode, in a case where the vehicle is traveling alone, the timing of notification may be controlled based on a change in acceleration in accordance with the surrounding situation. In addition, the control of the timing of notification in each of the embodiments described above is not mutually exclusive, and the control of the embodiments may be performed in combination. In addition, even in one traveling mode, different upper limit speeds may be settable in accordance with the traveling situation.
Summary of Embodiments
Embodiment 1
A control system (2, for example) for a vehicle (1, for example) capable of traveling in a first traveling state and a second traveling state, the second traveling state requiring an operation related to traveling by a passenger more than the first traveling state, the control system comprising:
a change unit (2, for example) configured to change, based on a traveling situation of the vehicle, an upper limit speed at which the vehicle can travel in the first traveling state;
a notification unit (2, for example) configured to perform notification to the passenger of a request for a predetermined operation at a timing when a traveling speed of the vehicle reaches a predetermined speed when switching from the first traveling state to the second traveling state; and
a control unit (2, for example) configured to switch to the second traveling state when detecting the predetermined operation by the passenger,
wherein the notification unit performs control such that a difference between the upper limit speed and the predetermined speed increases in a case where the upper limit speed is high compared to a case where the upper limit speed is low.
According to this embodiment, it is possible to provide vehicle control with high passenger convenience by performing appropriate notification at the time of transition of the traveling state.
Embodiment 2
The control system according to Embodiment 1, further comprising:
a determination unit (2, for example) configured to determine whether or not to perform switching from the first traveling state to the second traveling state based on a traveling situation of the vehicle; and
a vehicle speed control unit (2, for example) configured to control a traveling speed of the vehicle to be brought close to the predetermined speed in a case where the determination unit determines to perform switching to the second traveling state.
According to this embodiment, it is possible to shift to an appropriate traveling speed at the time of transition of the traveling state.
Embodiment 3
The control system according to Embodiment 1, wherein the change unit increases the upper limit speed in at least one of a case where a curvature of a road on which the vehicle travels is smaller than a predetermined turning curvature, a case where accuracy of a detection result of a surrounding environment detected by a detection unit included in the vehicle is higher than a threshold, and a case where traveling of the vehicle is stably continued, as the traveling situation of the vehicle.
According to this embodiment, it is possible to travel at a high vehicle speed in a traveling state with a high automation rate in accordance with the traveling situation of the vehicle.
Embodiment 4
The control system according to Embodiment 1, wherein the notification unit increases the intensity of the notification in a case where the upper limit speed is high compared to a case where the upper limit speed is low.
According to this embodiment, when the vehicle is traveling at a high vehicle speed, it is possible to prompt the passenger to quickly take over driving at the time of transition of the traveling state.
Embodiment 5
The control system according to Embodiment 1, wherein the notification unit performs control to increase a difference between the upper limit speed and the predetermined speed in a case where a duration of the first traveling state is long compared to a case where the duration is short.
According to this embodiment, even in a case where the passenger has not performed the operation related to the travel control for a long time, it is possible to provide sufficient time for taking over driving, and it is possible to smoothly hand over the operation.
Embodiment 6
The control system according to Embodiment 1, wherein, in the first traveling state, traveling following a preceding vehicle is performed, and the notification unit performs control to increase a difference between the upper limit speed and the predetermined speed in a case where an acceleration of the preceding vehicle or the vehicle is high compared to a case where the acceleration is low.
According to this embodiment, even in a case where the travel environment changes due to sudden acceleration of the preceding vehicle that the self-vehicle is following, it is possible to appropriately report the need to take over driving and to smoothly hand over the operation.
Embodiment 7
The control system according to Embodiment 1, further comprising a detection unit configured to detect an action performed by the passenger of the vehicle, wherein a difference between the upper limit speed and the predetermined speed is determined in accordance with contents of the action detected by the detection unit.
According to this embodiment, it is possible to smoothly hand over an operation by appropriately taking over driving in accordance with the action being performed by the passenger.
Embodiment 8
The control system according to Embodiment 7, wherein the notification unit performs different notifications in accordance with the contents of the action detected by the detection unit.
According to this embodiment, it is possible to smoothly hand over an operation by making a notification related to taking over driving with appropriate content in accordance with the action being performed by the passenger.
Embodiment 9
A method for controlling a vehicle (1, for example) capable of traveling in a first traveling state and a second traveling state, the second traveling state requiring an operation related to traveling by a passenger more than the first traveling state, the method comprising:
changing, based on a traveling situation of the vehicle, an upper limit speed at which the vehicle can travel in the first traveling state;
performing notification to the passenger of a request for a predetermined operation at a timing when a traveling speed of the vehicle reaches a predetermined speed when switching from the first traveling state to the second traveling state; and
switching to the second traveling state when detecting the predetermined operation by the passenger,
wherein a difference between the upper limit speed and the predetermined speed is increased in a case where the upper limit speed is high compared to a case where the upper limit speed is low.
According to this embodiment, it is possible to provide vehicle control with high passenger convenience by performing appropriate notification at the time of transition of the traveling state.
Embodiment 10
A non-transitory storage medium comprising instructions that, when executed by one or more processors of a vehicle (1, for example) capable of traveling in a first traveling state and a second traveling state, the second traveling state requiring an operation related to traveling by a passenger more than the first traveling state, cause the one or more processors to:
change, based on a traveling situation of the vehicle, an upper limit speed at which the vehicle can travel in the first traveling state;
perform notification to the passenger of a request for a predetermined operation at a timing when a traveling speed of the vehicle reaches a predetermined speed when switching from the first traveling state to the second traveling state; and
switch to the second traveling state when detecting the predetermined operation by the passenger,
wherein a difference between the upper limit speed and the predetermined speed is increased in a case where the upper limit speed is high compared to a case where the upper limit speed is low.
According to this embodiment, it is possible to provide vehicle control with high passenger convenience by performing appropriate notification at the time of transition of the traveling state.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. | 51,261
11858528 | Throughout the drawings and the detailed description, the same reference numerals refer to the same elements. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience. MODE FOR INVENTION The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known in the art may be omitted for increased clarity and conciseness. The features described herein may be embodied in different forms, and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application. Also, it is to be understood that the various examples herein, although described through different illustrations, are not mutually exclusive. For example, structures, shapes, and sizes described with respect to such examples may be implemented in any of the other examples without departing from the spirit and scope of the present disclosure. Further, examples include those with various modifications of positions or arrangements of elements without departing from the spirit and scope of the present disclosure. Additionally, although terms of “first” or “second” may be used to explain various components, the components are not limited to the terms. These terms should be used only to distinguish one component from another component. For example, a “first” component may be referred to as a “second” component, or similarly, and the “second” component may be referred to as the “first” component within the scope of the right according to the concept of the present disclosure. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. For example, as used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms “include”, “comprise”, and/or “have,” when used in this specification, specify the presence of stated features, integers, operations, elements, components or a combination/group thereof in an example embodiment, but do not preclude the presence or addition of one or more other features, integers, operations, elements, components, and/or combinations/groups thereof in alternative embodiments, nor the lack of such stated features, integers, operations, elements, components, and/or combinations/groups in further alternative embodiments unless the context and understanding of the present disclosure indicates otherwise. 
In addition, the use of the term ‘may’ herein with respect to an example or embodiment, e.g., as to what an example or embodiment may include or implement, means that at least one example or embodiment exists where such a feature is included or implemented while all examples and embodiments are not limited thereto. Unless otherwise defined herein, all terms used herein including technical or scientific terms have the same meanings as those generally understood by one of ordinary skill in the art consistent with and after an understanding of the present disclosure. Terms defined in dictionaries generally used should be construed to have meanings matching with contextual meanings in the related art and the present disclosure and are not to be construed as an ideal or excessively formal meaning unless otherwise defined herein. FIG.1is a block diagram illustrating a driving support system according to one or more embodiments. Referring toFIG.1, a driving support system may include a camera unit110, a vehicle control unit120, a telematics control unit130, a communications port140, and a driving support terminal200. The camera unit110, the vehicle control unit120, the telematics control unit130, and the communications port140may be components in a vehicle100, such as those installed during a manufacturing of the vehicle100. The driving support terminal200may be a hardware component selectively connected to the vehicle thereafter, for example, which may provide connectivity, or connectivity and physical support for additional devices, such as connectivity and/or physical support for the below discussed mobile device300example, which thereby may be in communication with the driving support terminal200and/or the vehicle control unit120, for example. Here, the vehicle control unit120, the telemetric control unit130, the communications port140, and driving support terminal200are each hardware devices or components, e.g., where each of the devices or components may be implemented as hardware only, hardware (e.g., one or more processors) that is configured through execution of computing-based instructions, or a combination of hardware and such hardware configured through execution of computing-based instructions, e.g., where, as explained further below, such instructions may be instructions which, when executed by such one or more processors, configure the one or more processors to implement any one, any combination, or all operations or methods of the respective devices or components described herein. In addition, though the devices or components are separately identified, examples exist where their respective functions are collectively implemented by a single device or component or where the respective functions are variously combined in two or more alternate devices or components in any combination. The camera unit110may be mounted on the vehicle and may image an external region surrounding the vehicle. For example, the camera unit110may include one or more cameras and may generate an image of the periphery of the vehicle. The camera unit110may include four cameras, for example, and may image a front region, side regions, and a rear region of the vehicle and may generate an image of the periphery of the vehicle. The camera unit110may provide the generated image to the communications port140. 
Additionally, the camera unit110may also be representative of one or more cameras and one or more processors that may be configured to perform pre-processing on captured images, such as to reduce noise, normalize the captured images, or otherwise perform image processing on the captured image or image information for availability from or transmission to the communications port140or the vehicle control unit120, for example, as well as in examples for image processing of captured image or image information into a form or format for input to one or more artificial intelligence (AI) components or processor implemented models of the driving support terminal200(and/or mobile device300ofFIGS.4-6), such as a neuromorphic processing unit/processor included in the driving support terminal200(and/or mobile device300), for example, or where the one or more processors, e.g., a CPU and/or GPU, represented by the driving support terminal200(or processor represented by the mobile device300) are configured to implement such artificial intelligence models, such as a trained neural network system that may be trained to extract features or recognize objects or targets from provided image information, such as through a series of convolutional layers followed by a series of feed forward layers, as non-limiting examples, and which may further be configured to generate control signal outputs based on such extracted features, e.g., through further trained layers of the neural network system. In other examples, such as discussed below with respect toFIGS.2-3one or more of such image pre-processing operations may also or alternatively be performed by a data processing module150. Thus, the vehicle control unit120may include the electronic control unit121and a body control module122, and may control overall operations of the vehicle. The electronic control unit121may generate driving information including travelling information and operating information of the vehicle, and may control an engine of the vehicle in accordance with the generated travelling information and operating information. The vehicle may include one or more sensors sensing a travelling state and an operating state of the vehicle, such as speed sensors, steering sensors, braking sensors, acceleration sensors, roll sensors, various driving component positional or value indicating sensors, as well as a temperature sensor example that may generate temperature information considered by the electronic control unit121to adjust acceleration and/or braking in near or below freezing conditions, or water or humidity sensor examples that generate corresponding information considered by the electronic control unit121to similarly adjust acceleration and/or braking in determined damp or wet road environments. The electronic control unit121may generate the travelling information and the operating information of the vehicle from such example sensed values outputs of the one or more sensors. The electronic control unit121may include a plurality of managing modules, i.e., processors or otherwise computing hardware, and may manage the sensed values output from the same types of sensors through an example single managing module. 
For example, the plurality of cameras of the camera unit110may be sensors imaging the periphery of the vehicle, and in this case, the electronic control unit121may manage images of the plurality of cameras of the camera unit110through a first managing module, while information or signals from environmental sensors such as the temperature or humidity sensors may be managed by a second managing module, where each managing module is configured to process the respectively received sensor information, and as discussed above perform various processing on the sensor data for provision to various other components of the vehicle, such as to an instrument cluster for mere informative notification to a driver and/or to the aforementioned artificial intelligence components and/or models implemented by the electronic control unit121, for example. The electronic control unit121may make available or provide the generated travelling information and the operating information to the communications port140, as well as other information regarding control signaling that the electronic control unit121may be performing or that the electronic control unit121is configured to perform for the control of the vehicle, including any corresponding control signaling of/for information, driving assisting information and control, and/or other autonomous driving control, as non-limiting examples. Thus, the electronic control unit121may make available or provide (and/or be requested by the driving support terminal200and/or the mobile device300) information of the current informative, assistive, and/or autonomous features or functionalities of the driving program included in the vehicle100, e.g., as originally embedded in the electronic control unit121at the time of manufacture, for comparison, such as by the driving support terminal200and/or the mobile device300, with those informative, assistive, and/or autonomous features or functionalities of the driving program included in the vehicle100, and either or both of the driving support terminal200and/or the mobile device300may determine whether the supplementation to, or superseding of, such existing driving programming of the electronic control unit121, e.g., with driving program(s) implemented by the driving support terminal200and/or the mobile device300, or select functionalities thereof. The program information or other information may be similarly made available or provided to the communications port140with positional or alignment registration information informing of the respective locations and/or configurations or properties of the cameras or camera systems of the camera unit110, and/or their respective fields of view (FOV), for consideration by a driving program of the driving support terminal200and/or the mobile device300for properly registering the received image information for both informative, assistive, and autonomous driving functionalities, e.g., as different vehicles have different relative positions, heights, FOVs of their cameras that are installed during vehicle manufacture, for example. The driving support terminal200and/or the mobile device300may also be configured to communicate with a remote server to obtain such various vehicle specific information for proper expectation and use of the information provided by the respective cameras of camera unit110of the vehicle100. 
The driving support terminal200and/or the mobile device300may also take into consideration additional variables, such as sensed tire pressure or user indicated non-OEM tire make and model and size, or other post-manufacture suspension modifications or other change information entered by the user or detectable by the driving support terminal200and/or the mobile device300using information from the camera unit110, for example. In an example that includes the mobile device300and the driving support terminal200, the driving support terminal200and/or the mobile device300may also have a predetermined relative positional relationship, e.g., based on the physical supporting configuration of the driving support terminal200and the positioning of the mobile device300in contact with the driving support terminal200. Here, the user may enter in a user interface of the driving support terminal200and/or the mobile device300the manufacturer and model of the mobile device300, or the driving support terminal200may merely request the same from the mobile device300or the mobile device300may similarly self-identify the same, for registering the location of the one or more cameras of the mobile device300as additional cameras for consideration by the driving program of the driving support terminal200and/or the mobile device300. Information of any case or skin of the mobile device300may also be determined or similarly entered by the user. In such an example, the driving support terminal200may determine its position relative to the vehicle100based on information received from the one or more cameras of the mobile device300and information from the camera unit110and/or the user may enter such positioning information to the user interface of the driving support terminal200and/or the mobile device300, for registration of the positions of the cameras of the mobile device300(or any alternative or additional supplemented cameras) relative to the vehicle100and to the positions of the other cameras of the camera unit110, as non-limiting examples. The communications port140may include an on-board diagnostics (OBD) port. The OBD port may be arranged in or near a consol of the vehicle and may be accessible by a passenger or driver for connection to external devices, such as typical code readers used by a mechanic to diagnose mechanical or electrical failures or problems in various components of the vehicle. The communications port140may communicate with such an external device connected to the communications port140and components of the vehicle through controller area network (CAN) communication, or may provide a network interface between such an external device connected to the communication port140and the corresponding CAN network connecting the components of the vehicle as illustrated, so the external device may receive and/or transmit information directly from/to such components using the CAN network. In an example, the CAN network may provide multiple serial communications using a same communication channel, such that multiple wirings or communication channels that were previously required between components in vehicles to provide or transmit different information may no longer be necessary as the different information can be provided or made available using the example same communication channel. As an example, the communications port140may connect with all of the camera unit110, the electronic control unit121, and the telematics control unit130through the CAN network. 
The body control module122may control elements included in the body of the vehicle. For example, the body control module122may control wipers, lights, power seats, a sunroof, an air conditioning system, and the like. A telematics control unit130may include a wireless communication module, and may perform wireless communication with one or more external devices present externally of the vehicle. For example, the telematics control unit130may be configured to perform at least one of cellular communications, Wi-Fi communications, Bluetooth communications, or other local, near, or far field communications, as non-limiting example. The telematics control unit130may receive external environment information of the vehicle through cellular communication, and may provide the received external environment information to the driving support terminal200through Wi-Fi communications, or the driving support terminal200(or mobile device300in examples) may request and receive such received external environment information. The telematics control unit130may also be configured to provide an alternate communication channel with the communications port140, e.g., alternate to the OBD port, so the driving support terminal200(and/or the mobile device300) can interact with the electronics control unit121or other components of the vehicle through the CAN network. Accordingly, the driving support terminal200may be connected to the communications port140and may obtain the travelling information and the operating information of the vehicle generated in the electronic control unit121, and images captured by the camera unit110or the aforementioned pre-processing image information results of such captured images. The driving support terminal200may also act as a communication intermediary or translation/conversion between the aforementioned mobile device300and the communications port140. The driving support terminal200may include a first connection terminal (Type_OBD) configured for mating with the communications port140. As an example, the first connection terminal (Type_OBD) may include an OBD-type connection terminal. The driving support terminal200may also, or alternatively, receive external environment information from the telematics control unit130, as well as access to the CAN network through the communication channel between the telematics control unit130and the communications port140. For example, the driving support terminal200and the telematics control unit130may receive data from and transmit data to each other through respective Wi-Fi communications modules. Additionally, the driving support terminal200may also include a communication module, e.g., providing cellular or other wide network radio communication, and request external environment information from a remote server, or request the same from the example mobile device300in the below examples, where the mobile device300includes such a communications module. Herein, the term module refers to a hardware component. As still another example, the functions of the telematics control unit130may be implemented by corresponding hardware of the communications port140, and driving support terminal200may directly connect with the communications port140through such WiFi communications. 
In examples herein, the driving support terminal200may be connected to the communications port140and may obtain the travelling information and the operating information of the vehicle, and/or may obtain the image information produced in the camera unit110or corresponding pre-processed image information, noting that examples are not limited thereto. In any of the various examples herein, a Wi-Fi communications module(s) may be provided in the camera unit110, e.g., in a corresponding managing module or within each camera, and the communications port140, and in this case, the driving support terminal200may perform wireless communication with the camera unit110and the communications port140and may obtain the travelling information and the operating information of the vehicle, and such various image information. As noted above, the camera unit110is representative of one or more cameras and/or camera systems, noting that the driving support system may further include in some examples one or more cameras of the aforementioned example mobile device300that may additionally be supported by, selectively mounted to, or otherwise mounted in or on the vehicle100and wiredly connected to the driving support terminal200or the mobile device300or wirelessly connected to the driving support terminal200and/or the mobile device300, as non-limiting examples. For example, such an additional camera(s) may be connected to the driving support terminal200or the mobile device300using a USB wire and connector of the driving support terminal200or the mobile device300, or through a proprietary connector of the mobile device300, as non-limiting examples. The driving support terminal200is configured to store at least one driving program, which in various examples includes driving programs respectively providing different levels of information, driving assistance and/or autonomy and/or programs providing different information, assistance, and/or autonomous functionalities within a same level of autonomy, and may variously consider the various available image information obtained and/or derived from the camera unit110, the corresponding managing module, the electronic control unit121, or otherwise available through the CAN network, e.g., accessed through the communications port140, as well as other aforementioned example sensor information, the travelling information and the operating information of the vehicle obtained from the CAN network, e.g., through the communications port140, and the aforementioned external environment information that may be obtained from the telematics control unit130, to generate various driving support information and/or driving control signaling for an example level of autonomous driving control. 
The driving support terminal200may thus provide/transmit such information and/or driving control signaling to the communications port140, which may interface with the CAN network to forward the respective different information to a center information display unit of the vehicle, the instrument cluster, and/or through sound generators for audible indication to provide the information to the driver, and may similarly interface with the CAN network to forward the respective driving control signaling to the electronic control unit121or the corresponding controlled components, such by sending braking control signaling to an electronic brake controller to activate (or reduce application of) the brakes with a controlled intensity, sending accelerator control signaling to an electronic accelerator controller to control the accelerator or otherwise control the throttle, and sending steering control signaling to an electronic steering controller or an electric power steering (EPS) to control extent and or speed of changes to the steering or the maintenance of a current steering control. In electric vehicle examples, the driving control signaling may be communicated to separate controllers of the different wheel motors, brakes, as well as steering systems. Also, in such electric vehicle examples, corresponding stored driving programs may perform such informative, assistive, and/or other autonomous control functionalities based on additional information from one or more battery modules, for example to maximize battery life or charge of the one or more battery modules. Alternatively, or additionally, the below discussed mobile device300may similarly store such a driving program and generate and provide/transmit such information and/or driving control signaling to the communication port140, or the telematics control unit130, directly or by using the driving support terminal200as an intermediary or translator with the vehicle, e.g., where the mobile device300either provides/transmits coding of instructions to the driving support terminal200to generate the corresponding information and/or control signaling or the mobile device300generates and provides/transmits such information and/or control signaling to the driving support terminal200that provides/transmits the same to the communication port140. In the example where the mobile device300provides such coding of the instructions for the driving support terminal200to generate the information and/or control signaling, the driving support terminal200receives such coding of the instructions, e.g., in the communication protocol between the driving support terminal200and the mobile device300, and generates the information and/or control signaling in the form at format compatible with the CAN communications protocol of the CAN network, and provides/transmits the same to the communication port140, for example. The mobile device300may alternatively forward generate and forward the information and/or control signaling in the communication protocol between the driving support terminal200and the mobile device, and the driving support terminal200merely converts the information and/or the control signaling into the form and format compatible with the CAN network, and provides/transmits the same to the communication port140, as non-limiting examples. 
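As one non-limiting illustration of how such control signaling might be placed on the vehicle's CAN network, the following Python sketch uses the python-can package. The channel name, arbitration ID, and single-byte payload layout are hypothetical and vehicle-specific; real frame encodings are defined by the particular ECUs and are not given in this description.

```python
# Illustrative only: sending a (hypothetical) braking control frame onto the CAN network.
import can

def send_brake_command(bus: can.BusABC, intensity_percent: int) -> None:
    """Pack a single-byte braking intensity into a hypothetical CAN frame and send it."""
    msg = can.Message(arbitration_id=0x120,                       # hypothetical brake-controller ID
                      data=[max(0, min(100, intensity_percent))],
                      is_extended_id=False)
    bus.send(msg)

# Example usage, assuming a SocketCAN interface exposed through the OBD/CAN gateway:
# bus = can.interface.Bus(channel="can0", bustype="socketcan")
# send_brake_command(bus, 30)
```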
Accordingly, the driving support terminal200may provide various advanced driving support functionalities to a vehicle, e.g., depending on the driving program stored in the driving support terminal200(and/or stored in the mobile device300), thereby providing an advanced driver assistance system (ADAS), for example. The driving support terminal200in any of the examples herein may provide such advanced driving support functionality of a predetermined level prescribed in an ADAS standard. For example, according to the defined levels 0 through 5 of SAE J3016 from SAE International, initially established as the Society of Automotive Engineers, during SAE levels 0 through 2 the driver is driving whenever a number of driver support features are implemented or engaged, even if the driver's feet are off the pedals and the driver is not steering, as the driver must constantly supervise the support features, and must steer, brake, or accelerate as needed to maintain safety. Rather, in SAE levels 3-5 the driver is not driving when the driver support features are implemented or engaged, even if the driver is seated in ‘the driver's seat’, where in SAE level 3 the driver must drive when the vehicle or driver support feature requests, while in SAE levels 4 and 5 the automated driving features will not require the driver to take over driving. Beginning with SAE level 0, example driver support features that may be engaged include automatic emergency braking, blind spot warning, and lane departure warning, as non-limiting examples, and these features are limited to providing warnings and momentary assistance. With SAE level 1, the driver support features that may be engaged may further include lane centering or adaptive cruise control, where these features provide steering or brake/acceleration support to the driver, compared to SAE level 2 where the automated driving features further include both lane centering and adaptive cruise control at the same time, and where these features provide steering and brake/acceleration support to the driver. Thus, with SAE levels 0-2, the vehicles provide driver support features, compared with SAE levels 3-5 where the vehicles are considered as providing automated driving features. For example, with SAE level 3, example automated driving features may include traffic jam chauffeur, as a non-limiting example, and automated driving features of SAE level 4 include local driverless taxi, again as a non-limiting example, and even example vehicles where pedals/steering wheel may or may not be installed. With SAE levels 3 and 4, the automated driving features can drive the vehicle under limited conditions and will not operate unless all required conditions are met. The automated driving features of SAE level 5 may include all the automated driving features of SAE level 4, but the automated driving features can drive everywhere in all conditions. Another way to delineate such SAE levels are with respective names of the levels, such as SAE level 0 being referred to as ‘no automation’, SAE level 1 being referred to as ‘driver assistance’, SAE level 2 being referred to as ‘partial automation’, SAE level 3 being referred to as ‘conditional automation’, SAE level 4 being referred to as ‘high automation’, and SAE level 5 being referred to as ‘full automation’. While the above example ADAS standard example was SAE J3016 from SAE, examples are not limited thereto. 
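For reference, the SAE J3016 levels described above can be summarized in a small table, as in the following sketch. It only restates the level names and the distinction that the driver is considered to be driving at levels 0 through 2 but not at levels 3 through 5; it is not part of the standard itself.

```python
# Illustrative summary of the SAE J3016 levels as described above:
# level -> (common name, whether the driver is driving while features are engaged).
SAE_LEVELS = {
    0: ("no automation",          True),
    1: ("driver assistance",      True),
    2: ("partial automation",     True),
    3: ("conditional automation", False),  # driver must take over when requested
    4: ("high automation",        False),
    5: ("full automation",        False),
}

def driver_is_driving(level: int) -> bool:
    return SAE_LEVELS[level][1]
```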
However, for simplicity of explanation below, such various available ADAS standards will collectively be referred to simply as the ADAS standard or an ADAS standard. Accordingly, the driving support terminal 200 may be configured to provide advanced driving support functionalities depending on the driving program stored in the driving support terminal 200 (or the mobile device 300 in the examples of FIGS. 4-6). Additionally, in different examples the driving support terminal 200 (and/or mobile device 300) may have different memory and processing capabilities, with one such example being where the driving support terminal 200 is pre-limited to implementing or executing a driving program that provides advanced driving support functionality corresponding to the second or lower ADAS levels, where a processing module of the driving support terminal 200 may include a high bandwidth memory (HBM) of 8 GB/512 bit, and may have a size of 65×65 mm, a processing speed of 10 tera floating-point operations per second (10 TFLOPS), and a power consumption level of 4 watts or less. As noted, this example driving support terminal 200 (and/or mobile device 300) may be pre-limited to driving programs that provide advanced driving support functionalities of level 2 and below, i.e., not greater than level 2, which may include a forward collision warning (FCW) function, a lane departure warning (LDW) function, a lane keeping assist (LKA) function, a blind spot warning (BSW) function, an adaptive cruise control (ACC) function, a travelling road recognizing function, a traffic signal recognizing function, and the like, as non-limiting examples. In another example, the driving support terminal 200 (and/or mobile device 300) may have memory and processing capabilities corresponding to pre-limitation of the driving support terminal 200 (and/or mobile device 300) to implementing or executing driving programs that provide advanced driving support functionality of the third level or higher, where a processing module of the driving support terminal 200 may include a high bandwidth memory (HBM) of 32 GB/1024 bit, and may have a size of 180×120 mm, a processing speed of 100 tera floating-point operations per second (100 TFLOPS), and a power consumption level of 6.6 watts or less. As noted, this example driving support terminal 200 (and/or mobile device 300) may be pre-limited to driving programs that provide advanced driving support functionalities of the third level or higher prescribed in the ADAS standard, which may include an occluded object prediction function, a lane changing function, a pedestrian recognizing function, a road change recognizing function, a road sign recognizing function, and the like. Alternatively, in an example, with this example memory and processing configuration, or with an example configuration with greater memory and processing capabilities, the driving support terminal 200 (and/or mobile device 300) may not be pre-limited as to which driving program can be executed or implemented, and thus may implement any driving program corresponding to any of the zeroth through fifth levels of the ADAS standard. Accordingly, the driving support terminal 200 may include a driving program, which may include a driving assistance program and/or an autonomous driving program.
As an example, in the case in which the driving support terminal200(or mobile device300) is pre-limited, e.g., due to the above example lower memory and processing capabilities, to providing advanced driving support functionalities of the second level or lower of the ADAS standard, the driving support terminal200(and/or mobile device300) may include only a driving assistance program. Alternatively, in the case in which the driving support terminal200has the greater memory and processing capabilities and is pre-limited to providing advanced driving support functionalities of the third level or higher of the ADAS standard, or the driving support terminal200(and/or mobile device300) is not pre-limited to any particular level or levels of the ADAS standard, the driving program the driving support terminal200may implement may include either one or both of a driving assistance program and an autonomous driving program. In the description below, an example in which the driving support terminal200has such greater memory and processing capabilities and is capable of implementing functionalities of both of a driving assistance program and an autonomous driving program will be described for ease of description. In an example, the electronic control unit121may include such a memory and processor configuration with the example lower memory and processing capabilities for providing the advanced driving support functionalities of the second level or lower of the ADAS standard, while the driving support terminal200(and/or the mobile device300) has the greater memory and processing functionalities of at least the third or higher levels of the ADAS standard, and may include either or both of a driving assistance program and an autonomous driving program. The driving support terminal200(and/or the mobile device300) may request or otherwise obtain information of the driving informative, assistive, and control capabilities of the electronic control unit121and may selectively, e.g., based on user control or automatically, supersede or supplement the informative, assistive, and/or control functionalities of the electronic control unit121based on comparison of the functionalities of the driving program of the driving support terminal200(or mobile device300) and the functionalities of the driving program of the electronic control unit121, and if any of the functionalities of the driving program of the driving support terminal200(or mobile device300) are of a higher ADAS level or provide additional or more functionality or features than the corresponding functionalities of the driving program of the electronic control unit121, then those functionalities of the electronic control unit121may be superseded by the information and/or control signaling of the driving support terminal200(or mobile device300). Alternatively, if the driving program of the electronic control unit121provides the same informative or assistive functionalities as the driving program of the driving support terminal200(and/or mobile device300), but does not provide an autonomous driving program, and the driving program of the driving support terminal200(and/or mobile device300) includes an autonomous driving program, then the driving program of the electronic control unit121may be supplemented by the autonomous driving program of the driving program of the driving support terminal200(or mobile device300).
Thus, the driving support terminal200(and/or mobile device300) may apply a driving program that includes a driving assistance program and an autonomous driving program that respectively consider the image information (or the aforementioned pre-processing image information, such as from a corresponding managing module) provided from the camera unit110, the travelling information and the operating information of the vehicle obtained from the communications port140, and the external environment information obtained from the telematics control unit130and may generate various driving support information and/or driving control signaling. As discussed further below, either or both of the driving assistance program and the autonomous driving program may include one or more AI models, such as through implementation of neural networks, or other machine learning implementations. In addition, in examples, the driving assistance program and the autonomous driving program may share artificial intelligence processes or models, such as where an object detection is performed with respect to considered image information using an example trained neural network object detection model that is trained to output information that is considered by both the driving assistance program and the autonomous driving program for different respective functionalities or trained to output respective different information that the two different programs respectively consider, such as where a functionality provided by the driving assistance program may be performed based on a first feature or corresponding resultant aspect, probability, classification, etc., of an object in an image and the autonomous driving program may be performed based on a second feature or corresponding resultant aspect, probability, classification, etc., of the object in the image. In such an example trained neural network object detection model, the image information may be input to a first layer of the neural network and then a series of convolutional neural network layers for feature extraction, which may be followed by a number of feed forward neural network layers or other recurrent, bi-directional recurrent, long-short term memory (LSTM) layers, etc., depending on the intended objective for the neural network model during the training of the model, and the functionality performed dependent on the results of such a neural network model. Such a neural network model may also be utilized for multiple functionalities, and multiple such neural network models collectively utilized for a single functionality. Though examples are discussed above with respect to the artificial intelligence model having various neural network configurations, other neural network configurations and other machine learning models are also available, noting that examples are not limited to the examples herein. For example, the driving support terminal200(or mobile device300) may sense peripheral obstacles by implementing a corresponding artificial intelligence model, as a non-limiting example, which may recognize a distance to the sensed peripheral obstacle.
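As a hedged illustration of the kind of shared detection model described above (convolutional feature extraction followed by a recurrent stage and two task-specific outputs), the following PyTorch sketch is provided; the layer sizes, head definitions, and class names are assumptions for illustration and do not represent the actual model of any example herein.

```python
# Minimal sketch of a shared detection network; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SharedDetectionModel(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional feature-extraction stage applied to each camera frame.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Recurrent stage so consecutive frames can be considered together.
        self.temporal = nn.LSTM(input_size=32 * 8 * 8, hidden_size=128, batch_first=True)
        # Two heads: one result consumed by a driving assistance functionality,
        # another consumed by an autonomous driving functionality.
        self.class_head = nn.Linear(128, num_classes)
        self.motion_head = nn.Linear(128, 4)  # e.g., a predicted object box/offset

    def forward(self, frames: torch.Tensor):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.features(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        hidden, _ = self.temporal(feats)
        last = hidden[:, -1]
        return self.class_head(last), self.motion_head(last)

model = SharedDetectionModel()
scores, motion = model(torch.randn(1, 4, 3, 64, 64))
```

Distance or obstacle indications produced by such a model are what feed the warning and braking decisions discussed next.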
When the recognized distance is less than a reference distance based on an indication by the artificial intelligence model, the driving support terminal200may provide a collision warning; in another example, the artificial intelligence model may also be trained to generate a control signal in such a situation to automatically implement emergency braking, or such signaling may be generated based on the determination to issue the collision warning. In another example, the artificial intelligence model may alternatively be trained to generate the control signal for such emergency braking when the recognized distance is less than a shorter reference distance and/or based on other considered factors. The driving support terminal200may sense whether the vehicle has departed from a lane, e.g., using the same or another artificial intelligence model, and when the vehicle has departed from a lane, the driving support terminal200may determine to provide a departure warning or the same may be automatically issued by the artificial intelligence model's determination, and similarly the artificial intelligence model may simultaneously also issue assistive driving control signaling or be trained to issue the assistive driving control signaling upon a determined greater lane departure. The driving support terminal200may recognize pedestrians around the vehicle through such an aforementioned object detection artificial intelligence model or another artificial intelligence model, which may be trained to predict a future moving route of detected pedestrians, such as through recurrently or bi-directionally connected layers or LSTM layers, or similar functionality components of the artificial intelligence model, to support the driving of a user, and similar to above the artificial intelligence model may be trained for obstacle avoidance, i.e., to avoid the predicted future routes of the pedestrians, and to accordingly issue control signaling to control steering, acceleration, and/or braking to avoid the predicted future routes of the pedestrians, or such avoidance may otherwise be controlled by the driving program of the driving support terminal200. The driving support terminal200may also similarly, through the same or respective artificial intelligence models, support a function of maintaining and changing a lane and may support the driving on a crossroad without a lane, a general road, an off-road, etc., by issuing drive control signaling to the electronic control unit121to control the electronic control unit121to accordingly control the appropriate driving components of the vehicle, or by sending such drive control signaling directly to the controlled driving components of the vehicle. The driving support terminal200(and/or mobile device300) may determine an indication of malfunction of the vehicle using the operating information of the vehicle, and may suggest a cause of the malfunction and a solution to the user. Further, the driving support terminal200(and/or mobile device300) may control the body control module122depending on external environment information, such as an example where the body control module122may control an operation of the air conditioning system of the vehicle100based on an internal temperature being greater than a first threshold while the exterior temperature is greater than a second threshold.
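The distance-based decision just described may be sketched as simple threshold logic; the threshold values and signal names below are assumptions for illustration only.

```python
# Hedged sketch of the distance-based warning/braking decision; values are illustrative.
WARNING_DISTANCE_M = 30.0   # issue a collision warning below this distance
BRAKING_DISTANCE_M = 10.0   # issue an emergency braking signal below this shorter distance

def collision_support(recognized_distance_m: float) -> dict:
    """Map a recognized obstacle distance to warning / control signaling flags."""
    return {
        "collision_warning": recognized_distance_m < WARNING_DISTANCE_M,
        "emergency_braking": recognized_distance_m < BRAKING_DISTANCE_M,
    }

print(collision_support(25.0))  # {'collision_warning': True, 'emergency_braking': False}
print(collision_support(8.0))   # {'collision_warning': True, 'emergency_braking': True}
```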
In addition to such included processors and memories, the driving support terminal200(and/or the mobile device300) is representative of a display and user interface that may be controlled to display a preference place on a current travelling route of the vehicle depending on predetermined user information, and may provide an analysis of a price of the preference place and a reservation service, and/or control display of the same to the information display system of the vehicle100. Above, various examples are discussed with respect to capabilities and operations of the driving support terminal200and the mobile device300with respect to driving programs that may be stored in the driving support terminal200and the mobile device300. Similar to the discussion of the supplementation and/or superseding of the driving program of the electronic control unit121, when the driving support terminal200is physically supporting the mobile device300and in communication therewith, such as through wireless charging communication, Bluetooth, Wi-Fi, or a USB standard over USB cabling or another proprietary connection therebetween, the highest ADAS level functionalities and/or additional or greater feature provision between the driving assistance programs and/or the autonomous driving programs among the electronic control unit121, the driving support terminal200, and the mobile device300may be selected, e.g., by a user or automatically, and any user-selected and/or otherwise automatically selected highest/greatest informative and assistive functionality and/or autonomous driving control functionality may be ultimately provided, and the non-selected informative and assistive functionalities and/or autonomous driving control functionalities not provided, for example. Respective scoring of the functionalities in driving programs may be predetermined, for example, such that a functionality in one driving program having a score higher than a corresponding functionality in another driving program may be considered to have the highest or greatest functionality. Additionally, as noted above, while the driving support terminal200may have the example lower memory and processing capable processor configuration pre-limited for a zeroth through second ADAS level, the mobile device300may have the higher memory and processing capable processor configuration and may be pre-limited to a third through fifth ADAS level, such that different memory/processing configurations may be performed in parallel or simultaneously with their respective driving programs so all levels of driving assistance and autonomous control may be provided, e.g., with the driving support terminal200implementing a driving assistance program and the mobile device300implementing an autonomous driving program, where the corresponding information and control signaling is provided from the driving support terminal200(or directly from the mobile device300or from the mobile device300through the driving support terminal200) to the communications port140, and either then, through the CAN network, directly provided to corresponding components or directly provided to the electronic control unit121, for the provision of the provided information and assistive and autonomous driving control by the driving support terminal200and the mobile device300.
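The score-based selection among the electronic control unit121, the driving support terminal200, and the mobile device300described above may be sketched as follows; the device keys and score values are hypothetical, and only the selection rule (highest predetermined score wins per functionality) is illustrated.

```python
# Sketch of the score-based selection; scores and device names are hypothetical.
def select_provider(functionality: str, providers: dict) -> str:
    """Pick, per functionality, the device whose driving program scores highest."""
    return max(providers, key=lambda device: providers[device].get(functionality, 0))

scores = {
    "electronic_control_unit": {"lane_keeping": 2, "autonomous_driving": 0},
    "driving_support_terminal": {"lane_keeping": 3, "autonomous_driving": 0},
    "mobile_device":            {"lane_keeping": 3, "autonomous_driving": 5},
}
for feature in ("lane_keeping", "autonomous_driving"):
    print(feature, "->", select_provider(feature, scores))
```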
As another example, when the driving support terminal200does not store a driving program or stores a driving program with only low informative or assistive functionalities, then only a higher or greater functionality driving assistance program and autonomous driving program of the driving program of the mobile device300may be implemented, with the driving support terminal200acting merely as the intermediary and translator, e.g., to receive the aforementioned coded control information and convert the same into a form and format compatible with the CAN network for communication to the electronic control unit121or for direct control of the components of the vehicle, or act as merely an intermediary to receive and pass through to the communications port140such control signaling from the mobile device300already in the form and format of the CAN network for the communication to the electronic control unit121or for the direct control of the components of the vehicle. Similarly, in an example, the driving support terminal200may not include processing capabilities for implementing any driving program, and may only perform the intermediary function, as well as a charging function with the mobile device300, e.g., through wireless charging. FIG.2is a block diagram illustrating a driving support system according to one or more embodiments. A driving support system inFIG.2may be similar to the driving support system illustrated inFIG.1, and thus, overlapping descriptions will not be repeated, and differences will be described. Referring toFIG.2, the driving support system may further include a data processing module150. InFIG.2, the data processing module150may be a hardware component included in a vehicle100. A camera unit110may provide an obtained image to the data processing module150. As an example, the camera unit110may provide a generated image signal to the data processing module150through a low voltage differential signaling (LVDS) interface. The data processing module150may be connected to the communications port140, and may obtain travelling information and operating information. As an example, the communications port140and the data processing module150may be connected to each other through the CAN network, noting that while examples herein discuss such connectedness between components in the vehicle being provided by such a CAN network with CAN communications protocols, examples are not limited thereto. The data processing module150may process an image provided from the camera unit110using the travelling information and the operating information, for example. For example, when the data processing module150is implemented as a navigation device used in a vehicle, the data processing module150may include a navigation program, and the data processing module150may apply the navigation program to the image provided from the camera unit110and the travelling information provided from the electronic control unit121to generate post-processing data. The data processing module150may output the post-processing data via a display and speakers of the vehicle. As another example, when the data processing module150includes a driving assistance program, the data processing module150may apply the driving assistance program to the image provided from the camera unit110and the travelling information provided from the electronic control unit121and may generate post-processing data.
The data processing module150may make available or provide the generated post-processing data to the communications port140, and may make available or provide the image provided from the camera unit110to the communications port140. The driving support terminal200may thus obtain or receive the image and/or the post-processing data via the communications port140. The driving support terminal200may determine whether to apply a driving program stored in the driving support terminal200, e.g., depending on the post-processing data generated in the data processing module150. For example, when only the navigation program is applied to the post-processing data generated in the data processing module150without a driving assistance program by the vehicle100, the driving support terminal200may apply a driving program of the driving support terminal200, e.g., the corresponding driving assistance program and autonomous driving program of the driving program, to the received data. However, when a driving assistance program is applied by the vehicle, e.g., by the electronic control unit121, to the post-processing data provided from the data processing module150, the driving support terminal200may only apply the autonomous driving program of the driving program of the driving support terminal200to the received data. In other words, when a driving assistance program is applied to the post-processing data by the vehicle, the driving support terminal200in an example may apply only an autonomous driving program to the received data without applying the driving assistance program such that system resources may be efficiently used. Even when the driving assistance program is applied to the post-processing data, the driving support terminal200may compare the driving assistance program of the data processing module150with a driving assistance program of the driving support terminal200, and when a function of the driving assistance program of the driving support terminal200is improved over, or provides additional functionalities beyond, the driving assistance program of the data processing module150, the driving support terminal200may apply the driving assistance program to the received data, e.g., the received image or processed image data independent of the image data to which the driving assistance program of the data processing module150was applied. In such an example, the driving support terminal200may also predetermine such differences in functionalities, and provide an instruction or control signaling to the data processing module150to not implement the driving assistance program of the data processing module150and merely forward the image and/or image data otherwise processed by the data processing module150. FIG.3is a block diagram illustrating a driving support system according to one or more embodiments. A driving support system inFIG.3may be similar to the driving support system illustrated inFIG.2, and thus, overlapping descriptions will not be repeated, and differences will be described. FIG.2illustrates an example in which a driving support terminal200is directly connected to a communications port140. Referring toFIG.3, however, the driving support terminal200may be directly connected to the data processing module150and may obtain an image obtained in a camera unit110, travelling information and operating information of a vehicle generated in an electronic control unit121, and post-processing data generated in a data processing module150.
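As a non-limiting sketch of the decision described above, i.e., whether the driving support terminal200applies its own driving assistance program to the received data depending on what was already applied to the post-processing data, the following Python fragment is provided; the metadata flag names and scores are assumptions.

```python
# Hypothetical metadata flags; only the decision rule is illustrated.
def programs_to_apply(post_processing_meta: dict,
                      terminal_assistance_score: int) -> list:
    """Choose which of the terminal's programs to run on the received data."""
    programs = []
    vehicle_applied = post_processing_meta.get("assistance_program_applied", False)
    vehicle_score = post_processing_meta.get("assistance_program_score", 0)
    # Apply the terminal's assistance program only when the vehicle did not apply
    # one, or when the terminal's version is scored higher than the vehicle's.
    if not vehicle_applied or terminal_assistance_score > vehicle_score:
        programs.append("driving_assistance_program")
    programs.append("autonomous_driving_program")
    return programs

print(programs_to_apply({"assistance_program_applied": True,
                         "assistance_program_score": 2},
                        terminal_assistance_score=3))
```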
The driving support terminal200and the data processing module150may be interconnected with each other via a second connection terminal (Type_C). The driving support terminal200and the data processing module150each may include a second connection terminal (Type_C). As an example, the driving support terminal200and the data processing module150may be interconnected with each other via the C-type USB connection terminal. Here, while the aforementioned mobile device300has not been discussed with respect toFIGS.2and3, this discussion similarly applies to the driving program of the mobile device300, and is also applicable to the aforementioned (and below) examples of the cooperative considerations of the driving support terminal200connected to the mobile device300, where either or both of the driving support terminal200and the mobile device300may store and be respectively configured to implement their respective driving programs or selective components and functionalities of the same. FIGS.4and5are block diagrams illustrating a driving support system according to one or more embodiments. A driving support system inFIGS.4and5may be similar to the driving support system illustrated inFIGS.1to3, and thus, overlapping descriptions will not be repeated, and differences will be described. Referring toFIG.4, the camera unit110in the example inFIG.1or the camera unit110and the data processing module150in the examples inFIGS.2and3may be additionally or alternatively implemented by the mobile device300, including a smartphone or tablet, in differing examples. The mobile device300and a communications port140may be interconnected with each other through wireless communication such as Wi-Fi communications, for example, noting that examples are not limited thereto. In any of the examples herein, the mobile device300and a communications port140may also similarly be wirelessly connected to each other through the aforementioned example controller area network (CAN) communication. Thus, functions of the camera unit110or the camera unit110and the data processing module150may also, or additionally, be performed by a camera and a processor employed in the mobile device300. Referring toFIG.4, a driving support terminal200may be directly connected to the mobile device300via a second connection terminal (Type_C). Referring toFIG.5, a driving support terminal200may be directly connected to the communications port140via a first connection terminal (Type_OBD). FIG.6is a block diagram illustrating a driving support system according to one or more embodiments. A driving support system inFIG.6may be similar to the driving support system illustrated inFIGS.4and5, and thus, overlapping descriptions will not be repeated, and differences will be described. In addition, discussion with respect toFIGS.1-3with respect to the driving program of the mobile device300and the driving support terminal200is also applicable to the driving support system ofFIG.6. Referring toFIG.6, a mobile device300in the example inFIG.6may thus include the functions and functionalities of the driving support terminal200illustrated inFIGS.4and5. Thus, the functions and functionalities of the driving support terminal200illustrated inFIGS.4and5, such as the implementation of the driving program of the driving support terminal200or mobile device300, may be performed by one or more processors employed in the mobile device300.
For example, in different examples, the one or more processors may have the aforementioned different lower memory and processing capability configurations or higher memory and processing capability configurations, respectively of the respective zeroth through second ADAS level or the third through fifth ADAS level, or the one or more processors may have still greater memory and processing capability configurations than such discussions and not be pre-limited to a particular driving assistance or autonomous driving functionality grouping of ADAS levels. In addition, herein, the driving program functions and functionalities may be implemented by a CPU, NPU, GPU, and/or other processor(s) of the mobile device300, such as an example where any or any combination of such processors load corresponding instructions and/or artificial intelligence models, e.g., as one or more neural network stored parameters, and use a software development kit (SDK) and/or an application programming interface (API) that enables an example artificial intelligence model to run on the mobile device300, such as with the particular processors of the mobile device300and the operating system of the mobile device300. For example, the artificial intelligence model may be input to a model conversion tool to convert the artificial intelligence model into a deep learning container (DLC) format, then optimized using optimization tools that generate a deep learning container format file that can be executed by an artificial intelligence application, for example. As a more particular example, the model conversion tools may be part(s) of a Neural Processing Engine (NPE) SDK (e.g., a Snapdragon Neural Processing Engine (SNPE)), which may convert such artificial intelligence models into a form and file format that can be executed by the NPE of the mobile device300with accelerated runtime operation with one or more processors of the mobile device300, e.g., by one or more Qualcomm Snapdragon processors in the SNPE example. Such conversion may be performed by the mobile device300, or by a remote device or server and stored in the converted format in the mobile device300for selective execution in accordance with the driving program instructions stored and executed by the mobile device300. As another example, herein, example artificial intelligence models may be generated in, stored as, or converted into/from any of various formats, such as CaffeEmit, CNTK, CoreML, Keras, MXNet, ONNX, PyTorch, TensorFlow, and iOS formats for execution by such processor(s) of the mobile device300. In another or additional example, a device driver level may be implemented by the operating system of the mobile device300, where the artificial intelligence model is selectively executed by any or any combination of such processors through control of the device driver level, such as in an example that implements a Huawei HiAi Engine or heterogeneous computing system (HiHCS). In an example, the mobile device300may be interconnected with both a telematics control unit130and a communications port140through wireless communications, such as Wi-Fi communications, for example, and may obtain information of the vehicle100.
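A hedged sketch of the export-and-convert flow described above is given below. The ONNX export call is a standard PyTorch API; the final DLC conversion step is represented by a hypothetical wrapper, since the exact converter command and its flags depend on the vendor NPE SDK installed on the development machine and are not specified here.

```python
# Sketch only: the converter command below is a hypothetical placeholder.
import subprocess
import torch

def export_for_device(model: torch.nn.Module, example_input: torch.Tensor,
                      onnx_path: str = "driving_model.onnx") -> str:
    """Export a trained model to a framework-neutral exchange format (ONNX)."""
    model.eval()
    torch.onnx.export(model, example_input, onnx_path)
    return onnx_path

def convert_to_dlc(onnx_path: str, dlc_path: str = "driving_model.dlc") -> str:
    """Hypothetical wrapper: the actual converter tool and flags come from the vendor NPE SDK."""
    subprocess.run(["vendor-onnx-to-dlc-converter", onnx_path, dlc_path], check=True)
    return dlc_path
```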
The available configurations and capabilities of the mobile device300discussed here with respect toFIG.6are also applicable to all references herein to such mobile devices300in various examples discussed with respect to any or all other figures, and such further or alternate discussions of available configurations and capabilities discussed with such any or all other figures are also applicable to the mobile device300ofFIG.6. FIG.7is a block diagram illustrating a driving support terminal according to one or more embodiments. A driving support terminal200in the example inFIG.7is a terminal that may provide an advanced driving support function, e.g., of the second ADAS level or lower, and the driving program of the driving support terminal200may include a driving assistance program, and for example, may thereby provide or support a forward collision warning (FCW) function, a lane departure warning (LDW) function, a lane keeping assist (LKA) function, a blind spot warning (BSW) function, an adaptive cruise control (ACC) function, a travelling road recognizing function, a traffic signal recognizing function, and the like. Referring toFIG.7, a driving support terminal200in examples herein may include an artificial intelligence processor210, a memory unit220, a communication module230, and a communication terminal240, and may further include a power terminal250, as non-limiting examples. The artificial intelligence processor210may include a central processing unit211(CPU), a neural processing unit (NPU)212, and an interface213. For example, the NPU212may be a neuromorphic processor. The CPU211, the NPU212, and the interface213may be electrically connected to each other. The CPU211and the NPU212of the artificial intelligence processor210may be connected to the communication module230, the communication terminal240, and the power terminal250through the interface213. The memory unit220may store the driving program including a driving assistance program. The memory unit220may include a plurality of memories221and222storing the driving assistance program. The CPU211and the NPU212may be respectively connected to the plurality of different memories221and222. Accordingly, upon execution of the driving assistance program stored in the memories221and222, the CPU211and the NPU212may generate driving support information, for example. In an example, the plurality of memories221and222each may be implemented as a high bandwidth memory (HBM). The communication module230may include a Wi-Fi communications module and may perform wireless communications with one or more external devices. For example, the communication module230may perform wireless communication with a telematics control unit130of a vehicle, as well as the mobile device300in any of the examples herein. In any of the various examples herein, a Wi-Fi communications module may also be included in a camera unit110and a communications port140, and in this case, the communication module230may perform wireless communication with the camera unit110and the communications port140. The communication terminal240may be used as a path for receiving and transmitting data, such as to/from the vehicle and to/from the mobile device300. For example, the communication terminal240may have a shape of a C-type USB connection terminal or an OBD-type connection terminal. Similar to above, in an example, the mobile device300may include a similar artificial intelligence processor210, memory unit220, communication module230, and communication terminals.
The power terminal250may be connected to a battery of the vehicle, e.g., through the OBD connection or otherwise, and may provide power provided from the battery of the vehicle to each of the elements of the driving support terminal200. In differing examples, the driving support terminal200may further include a power managing module (PMIC)260adjusting voltage of the power provided from the vehicle. The driving support terminal200may further include a transmitting coil270providing the power provided from the battery of the vehicle to an external device in a wireless manner. In this case, the driving support terminal200may be implemented as including a stand or a supporting form, for example, as illustrated inFIG.10, to support and wirelessly transmit power, e.g., through inductive or resonant coupling, to the mobile device300illustrated inFIGS.4and5, which is similarly configured for receipt of the wirelessly transmitted power through complementary inductive or resonant coupling. In an example, the driving support terminal200may include the power terminal250such that the driving support terminal200receives power from the battery of the vehicle, though examples are not limited thereto. In any of the various examples herein, the driving support terminal200may also include a receiving coil and may also wirelessly receive power. FIG.8is a block diagram illustrating a driving support terminal according to one or more embodiments. A driving support terminal200in an example withFIG.8may be a driving support terminal providing an advanced driving support function of the third level or higher prescribed in an ADAS standard, and the driving program of the driving support terminal200may include a driving assistance program and an autonomous driving program, or may include only an autonomous driving program, and, in an example, may thereby provide or support an occluded object prediction function, a lane changing function, a pedestrian recognizing function, a road change recognizing function, a road sign recognizing function, and the like. The driving support terminal200inFIG.8may be similar to the driving support terminal200illustrated inFIG.7, and thus, overlapping descriptions will not be repeated, and differences will be described. Referring toFIG.8, a memory unit220inFIG.8may include a greater number of memories than the number of memories provided in the memory unit220illustrated inFIG.7. For example, in the case in which the memory unit220inFIG.7includes two memories, the memory unit220illustrated inFIG.8may include four memories. In the memory unit220, the CPU211and the NPU212may correspond to a plurality of memories different from each other, and may load a driving assistance program and an autonomous driving program. Referring toFIG.8, the driving support terminal200inFIG.7may include a single communication terminal240, whereas the driving support terminal200inFIG.8may include a plurality of communication terminals240a,240b,240c, and240d. The first communication terminal240amay be a C-type USB connection terminal or an OBD-type connection terminal. The second communication terminal240bmay be a connection terminal of a CAN network, the third communication terminal240cmay be a connection terminal of a gigabit multimedia serial link (GMSL), and the fourth communication terminal240dmay be a communication terminal of Ethernet. Similar to above, in an example, the mobile device300may include a similar artificial intelligence processor210, memory unit220, communication module230, and communication terminals.
Comparing the driving support terminals200inFIGS.7and8, the driving support terminal200described with respect toFIG.7may provide an advanced driving support function of the second level or lower prescribed in the ADAS standard, while the driving support terminal200in the example inFIG.8may provide an advanced driving support function of the third level or higher prescribed in the ADAS standard. Thus, the driving support terminals200inFIGS.7and8may be different from each other, or be the same terminals with either memory and processing configuration or two respective driving support memory and processing configurations for providing such respective functionalities. In an example of the driving support terminal200inFIG.7, the CPU may have a specification of ARM Cortex A72x2 and A53x4 @1.5 GHz, and the NPU may have a processing speed of 10 tera floating point operations per second (TFLOPS). In such an example, the memory unit220may include a high bandwidth memory of 8 GB/512 bit, and as another example, the memory unit220may include a NAND flash memory of 32 GB. The Wi-Fi communications module of the communication module230may provide at least a speed of 2.2 Gbps, for example. The communication module230may have a specification of IEEE 802.11ac/ax standard, 4×4 MU-MIMO, and 1024QAM, for example. In such an example, the driving support terminal200may have a power consumption level of 4 watts or less, and may have a size of 65×65 mm. In an example of the driving support terminal200inFIG.8, the CPU may have a specification of ARM Cortex A72x2 and A53x4 @3 GHz, and the NPU may have a processing speed of 100 tera floating point operations per second (TFLOPS). In such an example, the memory unit220may include a high bandwidth memory (HBM) of 32 GB/1024 bit. As another example, the memory unit220may include a NAND flash memory of 64 GB. The Wi-Fi communications module of the communication module230may provide at least a speed of 1 Gbps, for example. The communication module230may have a specification of IEEE 802.11ac/ax standard, 4×4 MU-MIMO, and 1024QAM. The example driving support terminal200may have a power consumption level of 6.6 watts, and may have a size of 180×120 mm. FIG.9is a diagram illustrating an example connection relationship between a driving support terminal and a controller of a vehicle according to one or more embodiments. Referring toFIG.9, a driving support terminal200may be connected to a controller of the vehicle, e.g., an electronic control unit121, via a third communication terminal240c. The driving support terminal200may be connected to the electronic control unit121via the third communication terminal240c, and may receive images of a camera unit110imaging a front region, side regions, and a rear region of the vehicle. In this case, the electronic control unit121may include a driving program that provides an advanced driving support function of the driving support terminal200. For the driving support terminal200to provide an advanced driving support function of a third level or higher prescribed in the ADAS standard without delay, for example, a high-performance data processing speed and a high-performance data receiving and transmitting speed may be desired or required according to an ADAS standard, and a plurality of ports may thus be arranged to collect various data.
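For convenience, the two example configurations described above with respect toFIGS.7and8may be collected into a single structure, e.g., so a driving program can be matched against the terminal on which it will run; the following Python dictionary restates only the figures already given in the description.

```python
# The two example terminal configurations described above, as plain configuration data.
TERMINAL_PROFILES = {
    "fig7_level_0_to_2": {
        "cpu": "ARM Cortex A72x2 + A53x4 @ 1.5 GHz",
        "npu_tflops": 10,
        "memory": "HBM 8 GB / 512-bit",
        "wifi_min_gbps": 2.2,
        "power_w_max": 4.0,
        "size_mm": (65, 65),
    },
    "fig8_level_3_and_up": {
        "cpu": "ARM Cortex A72x2 + A53x4 @ 3 GHz",
        "npu_tflops": 100,
        "memory": "HBM 32 GB / 1024-bit",
        "wifi_min_gbps": 1.0,
        "power_w_max": 6.6,
        "size_mm": (180, 120),
    },
}
```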
However, depending on example processing, memory, and communication specifications, a small-sized driving support terminal200example may not fully satisfy the above described high performance aspects, and thus such smaller-sized driving support terminals may primarily implement the aforementioned example zeroth through second ADAS level functionalities, as non-limiting examples. In an example, a driving program may be embedded in the electronic control unit121, the vehicle may not include a driving program, or the driving program may include either or both of a driving assistance program and an autonomous driving program. In an example, instructions for assisting the interaction of the driving support terminal200and/or the mobile device300with the electronic control unit121may be uploaded, installed, or embedded in the electronic control unit121. For example, such instructions may assist the receipt and understanding by the electronic control unit121of information and/or control signaling from the driving support terminal200and/or the mobile device300provided through the communications port140or otherwise provided through the CAN network, where such understanding of such information or control signaling may include the driving support terminal200forwarding such information or control signaling to the appropriate vehicle components being controlled, or the electronic control unit121replicating and forwarding the same to the appropriate vehicle components as if the electronic control unit121had implemented its own driving program, e.g., thereby controlling a provision or display of information, e.g., audibly and/or visually, from the driving support terminal200and/or the mobile device300and/or thereby controlling providing other driving assistance or different levels of autonomy in driving control from the driving support terminal200and/or the mobile device300. The instructions may further include the addition of coding or controls, modifications, or deletions of current coding of the electronic control unit121for such receipt and understanding of such information or control signaling by the electronic control unit121, or for controlled passivity of the electronic control unit121to merely forward the same information or control signaling to the corresponding components of the vehicle, or for controlling the electronic control unit121to not interrupt or interfere with the same controls or control signaling provided directly to such corresponding components by the driving support terminal200or the mobile device300, e.g., by not issuing information or control signaling in accordance with the driving program of the electronic control unit121in addition to the information or control signaling from the driving support terminal200and/or the mobile device300, or otherwise preventing such information or control signaling from displaying (or audibly reproducing) to the user/driver the received information or controlling the corresponding components with the control signaling.
Such instructions may further include the addition of coding or controls, modifications, or deletions of current coding of the electronic control unit121to control a cooperation of a current assisted driving program or autonomous driving program, or of select functionalities of either such program of the driving program of the electronic control unit121, with the assisted driving program and/or autonomous driving programs, or of select functionalities of the same, of the driving support terminal200and/or the mobile device300, as discussed above. This uploading, installing, or embedding may be through an automated process upon an initial connection of the driving support terminal200and/or the mobile device300to the communications port140or execution of a corresponding application or program of the driving support terminal200or mobile device300to begin or continue providing such driving assistance/control functions, or through user selection or authorization to upload, install, or embed the instructions to the electronic control unit121, such as through an example user interface of the driving support terminal200or mobile device300. Additionally, or alternatively, a separate control module may be inserted into the vehicle to perform a translation or handoff operation between the electronic control unit121and such information and control signaling from the driving support terminal200or mobile device300. Such instructions uploaded, installed, or embedded in the electronic control unit121or that provide the translation or handoff operations through the separate control module may also provide similar interfacing functions to provide vehicle and sensed environmental information to the driving support terminal200and/or the mobile device300for generation of such controls by the driving support terminal200and the mobile device300. Still further, in an example, the electronic control unit121may be replaced with a same or compatible electronic control unit121that already includes such aforementioned instructions so the same need not be uploaded, installed, or embedded. FIG.11is a block diagram illustrating an implementation of a driving program using an NPU according to one or more embodiments. An NPU212may implement an image receiver212a, an image pre-processor212b, a target recognizing model212c, and a driving support information generator212d, and, thus, may recognize a target in an image obtained in a camera unit110and generate driving support information specific to that recognized target. The image obtained in the camera unit110may be input to the image receiver212a. The image pre-processor212bmay perform image processing to assign a region of interest (ROI) within the input image, and may generate therefrom an interest image. As an example, the image pre-processor212bmay extract edge components of the input image, such as through high frequency filtering, to detect different regions of interest and may generate corresponding interest images. For example, the image pre-processor212bmay perform normalization on a generated interest image, e.g., to have a predetermined size, such as a size for which one or more subsequent artificial intelligence models have been trained. The image pre-processor212bmay still further perform calibration on a brightness of the normalized interest image. As an example, a brightness value of the normalized interest image may be calibrated using a zero-center approach.
The image pre-processor212bmay provide the normalized-calibrated interest image to the target recognizing model212c, which may be representative of an example trained neural network system having a plurality of convolutional layers followed by example feed forward, RNN, or other trained classifying layers, for example. The target recognizing model212cmay implement a neural network system using stored parameters of the various layers of the neural network system, e.g., parameters that have been previously trained through deep-learning, such as through loss-based back propagation, with respect to training images for the particular purpose of the example target, and thus, upon loading the parameters from the memory and using processing elements of the NPU212to analyze the pre-processed interest image through multiple neural network layers, the NPU212may recognize a target in the input image. As an example, the target recognizing model212cmay repeatedly implement the neural network system for recognizing the target in every received frame and/or for multiple detected ROIs of the input image and may recognize a target with a significantly high accuracy. Multiple such neural network systems may be performed in parallel, with each system being trained for recognizing a respective target, or the neural network system may be configured to perform a classification of the input image and recognize various trained targets. For example, the targets may include a traffic signal, a lane, a pedestrian, a road sign, and the like, such that the neural network system or respective neural network systems are trained using various training images until the neural network system or respective neural network systems recognize the correct target within a predetermined accuracy or predetermined inaccuracy. The driving support information generator212dmay implement the driving program, or components of the driving program. For example, the driving support information generator212dmay implement at least one of a driving assistance program and an autonomous driving program of the driving program, read from a memory, with respect to the target recognized in the target recognizing model212c, and may generate driving support information specific to the recognized target, e.g., specific to the location of the recognized target as well as the relationship of the recognized target and other objects, etc. Herein, for each respective discussion, the described vehicle means a vehicle having engineered capabilities to implement at least the corresponding described driving assistance functionalities, and in examples discussing implementations of autonomous driving programs of a particular level or particular driving assistance functionality, then the corresponding vehicle is a vehicle with engineered capabilities for such corresponding functionalities. Alternatively, though the vehicle is a vehicle with at least engineered capabilities to implement at least one driving assistance functionality, examples include the driving support terminal200and/or the mobile device300determining the functionality of the vehicle and either not enabling functionalities of a driving program of the driving support terminal200and/or the mobile device300that the vehicle is not capable of performing, or disabling or not implementing the corresponding functionalities of the driving program of the driving support terminal200and/or the mobile device300that the vehicle is not capable of performing.
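A minimal numpy sketch of the pre-processing steps described above (region-of-interest selection from edge components, followed by zero-center brightness calibration) is given below; the window size, the finite-difference edge measure, and the random example frame are illustrative assumptions rather than the actual implementation of the image pre-processor212b.

```python
# Illustrative pre-processing sketch; parameters are assumptions.
import numpy as np

def zero_center(image: np.ndarray) -> np.ndarray:
    """Calibrate brightness by removing the mean value (zero-center approach)."""
    return image.astype(np.float32) - image.mean()

def edge_energy(gray: np.ndarray) -> np.ndarray:
    """High-frequency content approximated by simple finite differences."""
    gx = np.abs(np.diff(gray.astype(np.float32), axis=1, prepend=0))
    gy = np.abs(np.diff(gray.astype(np.float32), axis=0, prepend=0))
    return gx + gy

def best_roi(gray: np.ndarray, size: int = 64) -> tuple:
    """Pick the size x size window with the largest edge energy as the region of interest."""
    energy = edge_energy(gray)
    best, best_pos = -1.0, (0, 0)
    for y in range(0, gray.shape[0] - size + 1, size):
        for x in range(0, gray.shape[1] - size + 1, size):
            score = energy[y:y + size, x:x + size].sum()
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

frame = np.random.randint(0, 255, (256, 256), dtype=np.uint8)  # stand-in camera frame
y, x = best_roi(frame)
interest_image = zero_center(frame[y:y + 64, x:x + 64])  # then provided to the target model
```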
FIGS.12A and12Brespectively are a cross-sectional diagram and a plan diagram illustrating a driving support terminal according to one or more embodiments. Referring toFIGS.12A and12B, a driving support terminal200in an example may include a substrate280, an artificial intelligence processor210, a memory unit220, a communication module230, a communication terminal240, a power managing module260, and a housing290, as non-limiting examples. The artificial intelligence processor210, the communication module230, the communication terminal240, and the power managing module260may be arranged on the substrate280. The communication module230and the power managing module260may be arranged on one surface of the substrate280, and the artificial intelligence processor210may be arranged on the other surface of the substrate280. The communication terminal240may be arranged on the artificial intelligence processor210. An exterior of the driving support terminal200may be formed by the housing290, and the communication terminal240may extend and protrude in one direction to be connected to an external device, as a non-limiting example. The artificial intelligence processor210may include, or load from a memory, a driving program, which may include a driving assistance program and/or an autonomous driving program, and which may include loading trained parameters of corresponding artificial intelligence models implemented by the respective programs, or selectively loading such parameters as needed, and may generate driving support information and driving assistance and autonomous driving control using data provided through the communication terminal, e.g., from the communications port of the vehicle and/or from additional cameras, such as of the mobile device300, as well as other cameras connected to the mobile device300, such as through a USB connector of the mobile device300. The artificial intelligence processor210may include a central processing unit (CPU) and a neural processing unit (NPU), and may be integrated with the memory unit220, as a non-limiting example. The communication module230may include a Wi-Fi communications module and may be configured to perform wireless communication with an external device. As an example, the communication module230may perform wireless communication with the telematics control unit130illustrated inFIG.2, as well as one or more cameras, and the mobile device300. The power managing module260may adjust voltage provided from a battery of a vehicle when the power managing module260is connected to the vehicle. The communication terminal240may be directly connected to an external device and may be used as a path for transmitting and receiving data, e.g., directly connected to the communication port of the vehicle and/or connected to the mobile device300. InFIGS.12A and12B, the communication terminal240may have a shape of a C-type USB connection terminal, but as described above, the communication terminal240may have a shape of an OBD-type connection terminal. FIG.13is a block diagram illustrating a driving support system according to an example embodiment,FIGS.14and15are diagrams illustrating a calibration operation performed by an image processor of the present disclosure, andFIGS.16and17are diagrams illustrating an operation of controlling a camera unit performed by an artificial intelligence processor of the present disclosure.
As the driving support system inFIG.13is similar to the driving support system inFIGS.1to12, overlapping descriptions will not be repeated and differences will mainly be described. Referring toFIG.13, the camera unit110in the example embodiment illustrated inFIG.1or the camera unit110and the data processing module150in the example embodiment illustrated inFIGS.2and3may be alternatively implemented by a mobile device300including a smartphone or a tablet. Accordingly, functions of the camera unit110or the camera unit110and the data processing module150may be performed by a camera unit and a controller employed in the mobile device300. Referring toFIG.13, the mobile device300may include a controller310, a communication unit320, a camera unit330, a display unit340, a power supply unit350, and a power switch360. The controller310may be electrically or functionally connected to the other blocks of the mobile device300and may control overall operations of the mobile device300and signal flows between internal blocks of the mobile device300, and may process data. The controller310may include a central processing unit (CPU), an application processor, a graphics processing unit (GPU), and the like. The communication unit320may be interconnected with an external cloud400through a network. The network may refer to a communication network formed using a predetermined communication method. The predetermined communication method may include all communication methods, such as communication through a predetermined communication standard, a predetermined frequency band, a predetermined protocol, or a predetermined channel. For example, the communication method may include a communication method through Bluetooth, BLE, Wi-Fi, Zigbee, 3G, 4G, 5G, and ultrasonic waves, and may include near-field communication, long-distance communication, wireless communication, and wired communication. Meanwhile, the vehicle100may be interconnected with the mobile device300through wireless communication. For example, a communication port140(seeFIGS.1to3) of the vehicle100may be interconnected with the communication unit320of the mobile device300through wireless communication such as Wi-Fi communication. As another example, the communication port140(seeFIGS.1to3) may be wirelessly connected to the communication unit320of the mobile device300through controller area network (CAN) communication. The camera unit330may be mounted on a front surface or a rear surface of the mobile device300and may image an external area of the mobile device300. The camera unit330may provide generated images to the controller310. The display unit340may be provided on the front surface of the mobile device300and may output information according to a user input on a screen. For example, the display unit340may be integrated with a touch screen device for receiving a user touch input. The power supply unit350may be electrically connected to the other blocks of the mobile device300, and may supply power required for driving the other blocks. For example, the power supply unit350may include a non-rechargeable primary battery, a rechargeable secondary battery, or a fuel cell. The power supply unit350may be integrally provided in the mobile device300or may be provided detachably from the mobile device300. The power switch360may provide power provided from the power supply unit350to a driving support terminal200. 
The controller310may detect whether the driving support terminal200is connected to the mobile device300, and may control a power providing operation of the power switch360. For example, when the driving support terminal200is connected to the mobile device300, the controller310may control the power switch360to be turned on and may provide power to the driving support terminal200, and, for example, when the driving support terminal200is not connected to the mobile device300, the controller310may control the power switch360to be turned off and may block power supply to the driving support terminal200. Referring toFIG.13, the driving support terminal200may include an artificial intelligence processor210, an image processor235, a model storage unit215, a learning processor225, and a power managing module260, as a non-limiting example. The driving support terminal200may be directly connected to the mobile device300through a second connection terminal Type_C. The power managing module (PMIC)260may adjust a voltage of power supplied from the mobile device300. The power managing module260may be electrically connected to the other blocks of the driving support terminal200and may supply power required for driving the other blocks. The image processor235may pre-process an image provided from the camera unit330of the mobile device300. When the driving support terminal200is interconnected with the mobile device300through the second connection terminal (Type_C), the image processor235may receive an image from the camera unit330of the mobile device300. According to an example embodiment, the controller310of the mobile device300may primarily pre-process the image provided from the camera unit330, and the image processor235may secondarily pre-process the primarily pre-processed image. To this end, the controller310of the mobile device300may include a separate image processor. The image processor235may correspond to the image pre-processor212billustrated inFIG.11. As an example, the image processor235may perform image processing to allocate a region of interest (ROI) within the image provided from the camera unit330, and may generate an interest image therefrom. As an example, the image processor235may detect different regions of interest by extracting an edge component of the input image through high frequency filtering, and may generate an interest image corresponding to the different regions of interest. For example, the image processor235may perform normalization on the generated interest image to have a predetermined size, such as a size for which one or more subsequent artificial intelligence models have been trained. The image processor235may also perform additional calibration for brightness of the normalized interest image. When the image processor235calibrates brightness of the normalized interest image, a brightness value of the interest image illustrated inFIG.14(a)may be appropriately reduced such that object visibility may improve as in the interest image illustrated inFIG.14(b). As an example, the brightness value of the normalized interest image may be calibrated using a zero-center method. Meanwhile, the image processor235may perform additional calibration by rotating the normalized interest image. When the image processor235performs calibration by rotating the interest image illustrated inFIG.15(a)in a clockwise or counterclockwise direction, a horizontal side and a vertical side of the interest image illustrated inFIG.15(b)may be changed to be disposed in horizontal and vertical directions.
The image processor235may provide the normalized and calibrated interest image to the artificial intelligence processor210. The artificial intelligence processor210may use the interest image provided from the image processor235as input data. The artificial intelligence processor210may be electrically or functionally connected to the other blocks of the driving support terminal200, and may control overall operations of the driving support terminal200and signal flows between the internal blocks of the driving support terminal200, and may process the data. The artificial intelligence processor210may include an artificial neural network processing unit. An artificial neural network models the operating principle of biological neurons and the relationships between neurons, and is an information processing system in which a plurality of neurons called nodes or processing elements are connected in the form of a layer structure. An artificial neural network is a model used for machine learning, a statistical learning algorithm inspired by biological neural networks (the central nervous systems of animals, especially the brain) in machine learning and cognitive science. Concretely, an artificial neural network refers to an overall model having a problem-solving ability, in which artificial neurons (nodes) that form a network through synaptic connections change the strength of those connections through learning. The neural network may include a plurality of layers, and each of the layers may include a plurality of neurons. Also, an artificial neural network may include synapses that connect neurons. In general, an artificial neural network may be defined by the following three factors: a connection pattern between neurons of different layers, a learning process that updates the weights of the connections, and an activation function that generates an output value from a weighted sum of the inputs received from the previous layer. An artificial neural network may include network models such as a deep neural network (DNN), a recurrent neural network (RNN), a bidirectional recurrent deep neural network (BRDNN), a multilayer perceptron (MLP), and a convolutional neural network (CNN), but is not limited thereto. Artificial neural networks are classified into single-layer neural networks and multi-layer neural networks according to the number of layers. An artificial neural network may be trained using training data. Here, learning refers to a process of determining parameters of an artificial neural network using training data in order to achieve the purpose of classifying, regressing, or clustering input data. Representative examples of parameters of an artificial neural network include weights applied to synapses and biases applied to neurons. The artificial intelligence processor210may receive input data and training data for model training. The artificial intelligence processor210may apply input data provided from the image processor235to a trained model stored in the model storage unit215and may infer output data. Meanwhile, the artificial intelligence processor210may control at least one block provided in the mobile device300according to input data corresponding to the interest image provided from the image processor235.
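The weighted sum of inputs, bias, and activation function mentioned above can be illustrated with a minimal two-layer forward pass. The layer sizes and the choice of tanh are arbitrary choices for the sketch, not details of the disclosed network.

```python
import numpy as np

def dense_layer(inputs, weights, biases, activation=np.tanh):
    """One layer: a weighted sum of the previous layer's outputs plus a bias,
    passed through an activation function."""
    return activation(inputs @ weights + biases)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 4))                       # input features
    w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)     # synaptic weights / neuron biases
    w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
    hidden = dense_layer(x, w1, b1)                   # hidden layer
    output = dense_layer(hidden, w2, b2)              # output layer
    print(output.shape)                               # (1, 2)
```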
As an example, as illustrated inFIG.16(a), when a flicker phenomenon is included in the interest image, a shutter speed of the camera unit330may be changed through the controller310, thereby removing a flicker phenomenon included in the interest image as illustrated inFIG.16(b). As another example, as illustrated inFIG.17(a), when screen brightness of the interest image is dark, an aperture value of the camera unit330may be changed through the controller310, thereby appropriately changing screen brightness of the interest image as illustrated inFIG.17(b). The learning processor225may train (or learn) an artificial neural network stored in the model storage unit215using training data provided from the artificial intelligence processor210. The artificial intelligence processor210may transmit training data provided from the communication unit320of the mobile device300to the learning processor225. The communication unit320may download training data from the cloud400. When the driving support terminal200is interconnected with the mobile device300through the second connection terminal (Type_C), the artificial intelligence processor210may download the training data through the communication unit320of the mobile device300. That is, when the driving support terminal200is interconnected with the mobile device300through the second connection terminal Type_C, the training data may be updated. The driving support terminal200may be directly connected to the mobile device300through the second connection terminal (Type_C). The artificial intelligence processor210may pre-process input data and training data and may generate pre-processed input data and pre-processed training data. For example, the pre-processing of input data, performed by the artificial intelligence processor210, may refer to extracting an input feature from the input data. The model storage unit215may store an artificial neural network. The artificial neural network stored in the model storage unit215may include a plurality of hidden layers. However, the artificial neural network in the example embodiment is not limited thereto. The artificial neural networks may be implemented by hardware, software, or a combination of hardware and software. When a portion or entirety of the artificial neural network is implemented by software, one or more command words configuring the artificial neural network may be stored in a memory. The artificial neural network stored in the model storage unit215may be learned through the learning processor225. The model storage unit215may store a model being trained or learned by the learning processor225. When the model is updated through learning, the model storage unit215may store the updated model. The model storage unit215may classify the learned model into a plurality of versions according to a learning time point or a learning progress and may store the models, if necessary. As an example, the model storage unit215may include the target recognizing model212cillustrated inFIG.11. The target recognizing model212cmay implement a neural network system using stored parameters of various layers of the neural network system, such as, for example, previously trained parameters through deep learning such as loss-based back propagation with respect to training for an image for a specific purpose of an example target. 
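The camera-parameter feedback described above (changing a shutter speed when a flicker phenomenon is detected and changing an aperture value when the interest image is dark) might be approximated as follows. The thresholds, the flicker test based on frame-to-frame brightness variation, and the halving of the settings are all illustrative assumptions rather than values from the specification.

```python
import numpy as np

class Camera:
    """Toy stand-in for the adjustable settings of camera unit330."""
    def __init__(self):
        self.shutter_speed = 1 / 60.0
        self.aperture = 5.6

def adjust_camera(frames, camera, dark_threshold=60.0, flicker_threshold=10.0):
    """Inspect recent interest images and nudge the camera settings."""
    mean_brightness = np.mean([f.mean() for f in frames])
    brightness_swing = np.std([f.mean() for f in frames])  # frame-to-frame variation

    if brightness_swing > flicker_threshold:
        # Large swings in frame brightness suggest flicker: change the shutter
        # speed so the exposure no longer beats against the light source.
        camera.shutter_speed *= 0.5
    if mean_brightness < dark_threshold:
        # Dim scene: widen the aperture (smaller f-number) to brighten the image.
        camera.aperture = max(camera.aperture / 2.0, 1.4)
    return camera

if __name__ == "__main__":
    cam = Camera()
    frames = [np.full((10, 10), v, dtype=np.float32) for v in (30, 80, 25, 85)]
    adjust_camera(frames, cam)
    print(cam.shutter_speed, cam.aperture)
```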
Accordingly, the target recognizing model212cmay, by loading parameters from a memory, operate components of the NPU212to analyze the pre-processed interest image through a plurality of neural network layers such that the NPU212may recognize a target in the input image. As an example, the target recognizing model212cmay repetitively implement a neural network system to recognize the target for each received frame and/or for a plurality of detected ROIs of the input image, and may recognize the target with very high accuracy. Such a plurality of neural network systems may be trained for each system to recognize a corresponding target and may be performed in parallel, or the neural network system may be configured to classify an input image and may recognize various trained targets. For example, the target may include traffic signals, lanes, crosswalks, road signs, and the like, such that the neural network systems or each neural network system may be trained using various training images until the neural network systems or each neural network system recognizes an accurate target within a predetermined accuracy or predetermined inaccuracy. Meanwhile, the driving support information generator212dinFIG.11may be implemented by the artificial intelligence processor210. The driving support information generator212dmay implement a driving program or components of a driving program. For example, the driving support information generator212dmay implement at least one of a driving assistance program and an autonomous driving program read from the memory with respect to the target recognized by the target recognizing model212c, and may generate driving support information specific to the recognized target, such as, for example, specific to the location of the recognized target and the relationship between the recognized target and other objects. The learning processor225may train (or learn) an artificial neural network stored in the model storage unit215using training data. The learning processor225may acquire training data provided from the artificial intelligence processor210and may learn the artificial neural network stored in the model storage unit215. For example, the learning processor225may determine optimized model parameters of the artificial neural network by repeatedly training the artificial neural network using various well-known learning techniques. An artificial neural network whose parameters are determined by being trained using training data may be referred to as a training model or a trained model.
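A minimal sketch of running a trained recognizer over each detected ROI of each received frame, as described above. The roi_detector and target_model callables and the 0.9 score cutoff are hypothetical placeholders, not elements of the specification.

```python
def recognize_targets(frames, roi_detector, target_model):
    """Run the trained recognizer over every detected ROI of every frame."""
    detections = []
    for frame_index, frame in enumerate(frames):
        for roi in roi_detector(frame):          # one or more regions of interest
            label, score = target_model(roi)     # e.g. ("lane", 0.97)
            if score >= 0.9:                     # illustrative accuracy cutoff
                detections.append((frame_index, label, score))
    return detections

if __name__ == "__main__":
    frames = ["frame0", "frame1"]                          # placeholders for images
    fake_rois = lambda frame: [f"{frame}-roi0", f"{frame}-roi1"]
    fake_model = lambda roi: ("lane", 0.95)                # always "recognizes" a lane
    print(recognize_targets(frames, fake_rois, fake_model))
```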
The cameras, camera units, camera unit110, electronic control units, vehicle control unit120, electronic control unit121, body control module122, controllers, communications ports, communication port140, telematics control unit130, data processing modules, pre-processors, data processing module150, modules, terminals, managing modules, driving support terminals, driving support terminals200, mobile devices, mobile devices300, mobile phones, communication modules, communication module230, communication terminals240,240a-d, power terminal250, transmitting coil270, PMIC260, memory unit220, memories, memories221and222, connectors, neuromorphic processors, processors, NPUs, NPU212, CPUs, CPU211, interfaces, interface213, artificial intelligence processors, artificial intelligence processor210, and other apparatuses, terminals, modules, units, devices, and other components described herein with respect toFIGS.1-12Bare, and are implemented by, hardware components. Examples of hardware components that may be used to perform the operations described in this application where appropriate include controllers, sensors, generators, drivers, memories, comparators, arithmetic logic units, adders, subtractors, multipliers, dividers, integrators, and any other electronic components configured to perform the operations described in this application. In other examples, one or more of the hardware components that perform the operations described in this application are implemented by computing hardware, for example, by one or more processors or computers. A processor or computer may be implemented by one or more processing elements, such as an array of logic gates, a controller and an arithmetic logic unit, a digital signal processor, a microcomputer, a programmable logic controller, a field-programmable gate array, a programmable logic array, a microprocessor, or any other device or combination of devices that is configured to respond to and execute instructions in a defined manner to achieve a desired result. In one example, a processor or computer includes, or is connected to, one or more memories storing instructions or software that are executed by the processor or computer. Hardware components implemented by a processor or computer may execute instructions or software, such as an operating system (OS) and one or more software applications that run on the OS, to perform the operations described in this application. The hardware components may also access, manipulate, process, create, and store data in response to execution of the instructions or software. For simplicity, the singular term “processor” or “computer” may be used in the description of the examples described in this application, but in other examples multiple processors or computers may be used, or a processor or computer may include multiple processing elements, or multiple types of processing elements, or both. For example, a single hardware component or two or more hardware components may be implemented by a single processor, or two or more processors, or a processor and a controller. One or more hardware components may be implemented by one or more processors, or a processor and a controller, and one or more other hardware components may be implemented by one or more other processors, or another processor and another controller. One or more processors, or a processor and a controller, may implement a single hardware component, or two or more hardware components. 
A hardware component may have any one or more of different processing configurations, examples of which include a single processor, independent processors, parallel processors, single-instruction single-data (SISD) multiprocessing, single-instruction multiple-data (SIMD) multiprocessing, multiple-instruction single-data (MISD) multiprocessing, and multiple-instruction multiple-data (MIMD) multiprocessing. The methods illustrated and discussed with respect toFIGS.1-11and that perform the operations described in this application are performed by computing hardware, for example, by one or more processors or computers, implemented as described above executing instructions or software to perform the operations described in this application that are performed by the methods. For example, a single operation or two or more operations may be performed by a single processor, or two or more processors, or a processor and a controller. One or more operations may be performed by one or more processors, or a processor and a controller, and one or more other operations may be performed by one or more other processors, or another processor and another controller. One or more processors, or a processor and a controller, may perform a single operation, or two or more operations. Instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above may be written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the one or more processors or computers to operate as a machine or special-purpose computer to perform the operations that are performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the one or more processors or computers, such as machine code produced by a compiler. In another example, the instructions or software includes higher-level code that is executed by the one or more processors or computer using an interpreter. The instructions or software may be written using any programming language based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions in the specification, which disclose algorithms for performing the operations that are performed by the hardware components and the methods as described above. The instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, may be recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. 
Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access programmable read only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, as non-limiting blue-ray or optical disk storage examples, hard disk drive (HDD), solid state drive (SSD), flash memory, a card type memory such as multimedia card micro or a card (for example, secure digital (SD) or extreme digital (XD)), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to one or more processors or computers so that the one or more processors or computers can execute the instructions. In one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers. While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure. | 108,698 |
11858529 | DETAILED DESCRIPTION This application describes techniques for applying a model to predict future states of an articulated object in an environment. The techniques can include implementing a computing device that receives data indicating presence of an articulated object (e.g., an object with joined portions that may articulate relative to each other) in an environment and predicts a position, a velocity, and/or an orientation, etc. of the articulated object (or portions thereof) at a future time. The model(s) may, for example, receive object state data associated with the articulated object at a first time, apply one or more filtering algorithms to representative portions of the articulated object, and output updated state data for the articulated object at a second time in the future. For example, the model may output predicted states of a tractor (e.g., a first portion) and a trailer (e.g., a second portion) in the future based at least in part on filtering techniques that identify mathematical relationships between the portions (e.g., a front portion and a rear portion relative to a direction of travel) of the articulated object. Predicted states of articulated object(s) determined by the model(s) may be considered during vehicle planning thereby improving vehicle safety as a vehicle navigates in the environment by planning to avoid the multiple portions of the articulated object. A first model used by an autonomous vehicle as described herein may be configured to determine presence of an articulated object in an environment based on sensor data from one or more sensors. A second model may determine a mathematical relationship between a front portion (a tractor) and a rear portion (a trailer), and predict states of the front portion and the rear portion based at least in part on the mathematical relationship. For example, the models may apply a filter to state data associated with the first portion to predict state data associated with the second portion. In this way, the model(s) can predict both portions of the articulated object more accurately and in less time versus predicting state data for both portions without consideration to the portions having a mathematical relationship by virtue of being joined as an articulated object. In some examples, functionality associated with the aforementioned first model and second model can be included in a single model (e.g., a model of a vehicle computing device that detects presence of an articulated object and predicts movement by the articulated object in real-time). Generally, the model(s) can predict and update states of an articulated object at future times by leveraging a relationship between the portions of the articulated object. In this way, the model(s) can be thought of as a “joined motion model” that predict motion of all portions of an articulated object (a front portion and additional connected rear portion(s)). For example, a first portion may have a propulsion system, a steering system, or the like, that directs where the first portion and the second portion will be in the future (e.g., a second portion may generally follow the first portion based on the two portions having a joint point, such as a connection between the tractor and the trailer). For this reason, the model can quickly predict a position, a velocity, a yaw rate, etc. of the second portion based on data (a current orientation, a current velocity, a current yaw rate, etc.) associated with the first portion. 
By implementing the techniques described herein, a computing device can make predictions (e.g., a trajectory, a position, a yaw, etc.) associated with an articulated object in less time and with more accuracy versus predicting all possible states for both portions of the articulated object separately, such as by models that do not identify the presence of articulated objects. In addition, predictions made by models as described herein use fewer processor and/or memory resources versus models that process all future possibilities for each object separately. By way of example and not limitation, consider an articulated object (e.g., a truck joined to a trailer) in an environment of a vehicle navigating to a destination. The truck and the trailer (or representations thereof) may each be associated with state data (e.g., one or more of: position data, orientation data, heading data, velocity data, speed data, acceleration data, yaw rate data, or turning rate data, just to name a few) for a current time. The model can, for instance, determine a predicted state of the truck at different times in the future based on the state data of the truck, and also determine various states of the trailer in the future based on the state data of the truck. In such examples, the model can determine the predicted trailer states based on applying a filter (e.g., a Kalman filter) to the state data of the truck. In one specific example, an extended Kalman filter or an unscented Kalman filter can be used by the model to calculate, generate, or otherwise determine predicted states of all portions (the truck and the trailer) of the articulated object. By employing the articulated object tracking techniques described herein, predicting a future location, velocity, or trajectory of the articulated object can be performed without requiring processor and/or memory resources to evaluate all possible future locations, velocities, etc. given that the second portion is related to the first portion. In some examples, the model can determine, as a correlation, a characteristic (e.g., a first velocity, a first position, etc.) of the first portion and a characteristic (e.g., a second velocity, a second position, etc.) of the second portion. In such examples, the model can generate output data representing a predicted state of the first portion and the second portion based at least in part on the correlation. In some examples, the model can determine an offset value between two object representations, and use the offset value to predict states for one or both of the portions of the articulated object. For example, the model can receive state data of a first object representation (e.g., velocity of a truck), and predict a future velocity of the second representation (e.g., the trailer) based on the offset value. In some examples, the model can apply linear and/or non-linear algorithms to determine a covariance and/or a mean between one or more points of the first object representation and one or more points of the second object representation. The model may apply a filtering algorithm that detects a covariance between sampled points associated with each object representation, a velocity covariance, a yaw covariance, a position covariance, just to name a few. In such examples, the covariance between the object representations can be used by the model to output predicted states of both portions of the articulated object. 
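One way to exploit the tractor-to-trailer relationship when propagating state, as described above, is a single-trailer kinematic tow model in which the trailer pivots about the hitch. The sketch below covers only the prediction step of such a model under an assumed hitch length; it is not the claimed filter (for example, it omits the covariance update of an extended or unscented Kalman filter).

```python
import math

def predict_articulated_state(tractor, trailer_yaw, hitch_length, dt):
    """Propagate tractor and trailer one step with a simple kinematic tow model.

    `tractor` is (x, y, yaw, speed); the trailer is assumed to pivot about the
    hitch, so its yaw is driven by the tractor's motion.
    """
    x, y, yaw, v = tractor
    # The tractor follows its own heading.
    x_next = x + v * math.cos(yaw) * dt
    y_next = y + v * math.sin(yaw) * dt
    # The trailer yaw relaxes toward the tractor heading at a rate set by
    # speed and hitch length (classic single-trailer kinematics).
    trailer_yaw_rate = (v / hitch_length) * math.sin(yaw - trailer_yaw)
    trailer_yaw_next = trailer_yaw + trailer_yaw_rate * dt
    # The trailer reference point trails the hitch by the hitch length.
    hx, hy = x_next, y_next                    # treat the hitch as the tractor point
    tx = hx - hitch_length * math.cos(trailer_yaw_next)
    ty = hy - hitch_length * math.sin(trailer_yaw_next)
    return (x_next, y_next, yaw, v), (tx, ty, trailer_yaw_next)

if __name__ == "__main__":
    tractor = (0.0, 0.0, 0.3, 10.0)            # x, y, yaw (rad), speed (m/s)
    trailer_yaw = 0.0
    for _ in range(10):                        # roll the joined state forward one second
        tractor, (tx, ty, trailer_yaw) = predict_articulated_state(
            tractor, trailer_yaw, hitch_length=8.0, dt=0.1)
    print(round(tx, 2), round(ty, 2), round(trailer_yaw, 3))
```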
The model can determine an offset value to represent a displacement or difference in a position (e.g., x, y, z space), a heading, a velocity, an acceleration, etc. between two or more object representations making up an articulated object. The displacement of a trailer from a tractor, for example, can be determined in less time and with more accuracy by using an offset value output by the model rather than treating the trailer as an object with infinite potential positions, headings, etc. Further, the model can use linear algebra and other mathematical algorithms that do not rely on derivative calculations, which would increase the amount of required computational resources. By implementing the model as described herein, computational resources to determine predicted states of articulated objects can be reduced (versus not using the model), thereby enabling more processing and memory resources to be available to a computing device for other calculations, such as processing additional objects in the environment, which improves safety of the vehicle as it navigates in the environment. In various examples, a vehicle computing device may receive one or more instructions representative of output(s) from one or more models. The vehicle computing device may, for instance, send an instruction from the one or more models to a planning component of the vehicle that plans a trajectory for the vehicle and/or to a perception component of the vehicle that processes sensor data. Additionally or alternatively, output(s) from one or more models may be used by one or more computing devices remote from the vehicle computing device for training a machine learned model (e.g., to classify objects as an articulated object). In various examples, the vehicle computing device may be configured to determine actions to take while operating (e.g., trajectories to use to control the vehicle) based on one or more models determining presence and/or movement of articulated object(s). The actions may include a reference action (e.g., one of a group of maneuvers the vehicle is configured to perform in reaction to a dynamic operating environment) such as a right lane change, a left lane change, staying in a lane, going around an obstacle (e.g., double-parked vehicle, a group of pedestrians, etc.), or the like. The actions may additionally include sub-actions, such as speed variations (e.g., maintain velocity, accelerate, decelerate, etc.), positional variations (e.g., changing a position in a lane), or the like. For example, an action may include staying in a lane (action) and adjusting a position of the vehicle in the lane from a centered position to operating on a left side of the lane (sub-action). As described herein, models may be representative of machine learned models, statistical models, or a combination thereof. That is, a model may refer to a machine learning model that learns from a training data set to improve accuracy of an output (e.g., a prediction). Additionally or alternatively, a model may refer to a statistical model that is representative of logic and/or mathematical functions that generate approximations which are usable to make predictions. The techniques discussed herein may improve a functioning of a vehicle computing system in a number of ways. The vehicle computing system may determine an action for the autonomous vehicle to take based on an articulated object represented by data.
In some examples, using the articulated object tracking techniques described herein, a model may predict articulated object trajectories and associated probabilities that improve safe operation of the vehicle by accurately characterizing motion of the articulated object with greater detail as compared to previous models. The techniques discussed herein can also leverage sensor data and perception data to enable a vehicle, such as an autonomous vehicle, to navigate through an environment while circumventing objects in the environment. In some cases, evaluating an output by a model(s) may allow an autonomous vehicle to generate more accurate and/or safer trajectories for the autonomous vehicle to traverse an environment. Techniques described herein can utilize information sensed about the objects in the environment to more accurately determine current states and future estimated states of the objects. For example, techniques described herein may be faster and/or more robust than conventional techniques, as they may increase the reliability of representations of sensor data, potentially alleviating the need for extensive post-processing, duplicate sensors, and/or additional sensor modalities. That is, techniques described herein provide a technological improvement over existing sensing, object detection, classification, prediction and/or navigation technologies. In addition to improving the accuracy with which sensor data can be used to determine objects and correctly characterize motion of those objects, techniques described herein can provide a smoother ride and improve safety outcomes by, for example, more accurately providing safe passage to an intended destination without reacting to incorrect object representations. These and other improvements to the functioning of the computing device are discussed herein. The methods, apparatuses, and systems described herein can be implemented in a number of ways. Example implementations are provided below with reference to the following figures. Although discussed in the context of an autonomous vehicle in some examples below, the methods, apparatuses, and systems described herein can be applied to a variety of systems. For example, any sensor-based and/or mapping system in which objects are identified and represented may benefit from the techniques described. By way of non-limiting example, techniques described herein may be used on aircrafts, e.g., to generate representations of objects in an airspace or on the ground. Moreover, non-autonomous vehicles could also benefit from techniques described herein, e.g., for collision detection and/or avoidance systems. The techniques described herein may also be applicable to non-vehicle applications. By way of non-limiting example, techniques and implementations described herein can be implemented in any system, including non-vehicular systems, that maps objects. FIGS.1-6provide additional details associated with the techniques described herein. FIG.1is an illustration of an example environment100in which one or more models determine presence of an articulated object. In the illustrated example, a vehicle102is driving on a road104in the environment100, although in other examples the vehicle102may be stationary and/or parked in the environment100. In the example, the road104includes a first driving lane106(1), a second driving lane106(2), a third driving lane106(3), a fourth driving lane106(4), and a fifth driving lane106(5) (collectively, the driving lanes106) meeting at an intersection or junction. 
The road104is for example only; techniques described herein may be applicable to other lane configurations and/or other types of driving surfaces, e.g., parking lots, private roads, driveways, or the like. The example vehicle102can be a driverless vehicle, such as an autonomous vehicle configured to operate according to a Level5classification issued by the U.S. National Highway Traffic Safety Administration. The Level5classification describes a vehicle capable of performing all safety-critical functions for an entire trip, with the driver (or occupant) not being expected to control the vehicle at any time. In such examples, because the vehicle102can be configured to control all functions from start to completion of the trip, including all parking functions, the vehicle may not include a driver and/or controls for manual driving, such as a steering wheel, an acceleration pedal, and/or a brake pedal. This is merely an example, and the systems and methods described herein may be incorporated into any ground-borne, airborne, or waterborne vehicle, including those ranging from vehicles that need to be manually controlled by a driver at all times, to those that are partially or fully autonomously controlled. The example vehicle102can be any configuration of vehicle, such as, for example, a van, a sport utility vehicle, a cross-over vehicle, a truck, a bus, an agricultural vehicle, and/or a construction vehicle. The vehicle102can be powered by one or more internal combustion engines, one or more electric motors, hydrogen power, any combination thereof, and/or any other suitable power source(s). Although the example vehicle102has four wheels, the systems and methods described herein can be incorporated into vehicles having fewer or a greater number of wheels, tires, and/or tracks. The example vehicle102can have four-wheel steering and can operate generally with equal performance characteristics in all directions. For instance, the vehicle102may be configured such that a first end of the vehicle102is the front end of the vehicle102, and an opposite, second end of the vehicle102is the rear end when traveling in a first direction, and such that the first end becomes the rear end of the vehicle102and the second end of the vehicle102becomes the front end of the vehicle102when traveling in the opposite direction. Stated differently, the vehicle102may be a bi-directional vehicle capable of travelling forward in either of opposite directions. These example characteristics may facilitate greater maneuverability, for example, in small spaces or crowded environments, such as parking lots and/or urban areas. In the scenario illustrated inFIG.1, a number of additional vehicles also are traveling on the road104. Specifically, the environment100includes a first additional vehicle108(1), a second additional vehicle108(2), and a third additional vehicle108(3) (collectively, the additional vehicles108). AlthoughFIG.1illustrates only the additional vehicles108as entities traveling on the road104, many other types of entities, including, but not limited to, buses, bicyclists, pedestrians, motorcyclists, animals, or the like may also or alternatively be traveling on the road104and/or otherwise present in the environment100. The vehicle102can collect data as it travels through the environment100. 
For example, the vehicle102can include one or more sensor systems, which can be, for example, one or more LIDAR sensors, RADAR sensors, SONAR sensors, time-of-flight sensors, image sensors, audio sensors, infrared sensors, location sensors, etc., or any combination thereof. The sensor system(s) may be disposed to capture sensor data associated with the environment. For example, the sensor data may be processed by one or more vehicle computing devices110or other processing system to identify and/or classify data associated with objects in the environment100, such as the additional vehicles108. In addition to identifying and/or classifying the data associated with the additional vehicles108, the vehicle computing device(s)110may also identify and/or classify additional objects, e.g., trees, vehicles, pedestrians, buildings, road surfaces, signage, barriers, road markings, or the like. In specific implementations of this disclosure, the sensor data may be processed by the vehicle computing device(s)110to identify portions of the data that are associated with an articulated object, such as an articulated vehicle. The vehicle computing device(s)110may include a planning component (e.g., the planning component426), which may generally be configured to generate a drive path and/or one or more trajectories along which the vehicle102is to navigate in the environment100, e.g., relative to the additional vehicles108and/or other objects. In some examples, the planning component and/or some other portion of the vehicle computing device(s)110may generate representations of objects in the environment, including the additional vehicles108. For instance,FIG.1illustrates a first object representation114(1) and a second object representation114(2) associated with the first additional vehicle108(1), a third object representation114(3) associated with the second additional vehicle108(2), and a fourth object representation114(4) associated with the third additional vehicle108(3) (collectively, the first object representation114(1), the second object representation114(2), the third object representation114(3), and the fourth object representation114(4) may be referred to as the representations114). In examples, the representations114may be two-dimensional polygons that approximate the extents of the respective additional vehicles108(or portions thereof). In the top-down illustration ofFIG.1, each of the representations114is a rectangle, though other shapes are possible. In at least some examples, each of the representations114may be a rectangular bounding box. In some examples, the additional vehicles108may be represented as a single two-dimensional geometric structure, like the object representations114(3) and114(4). In many instances, such representations114are sufficient to model the respective object. In the illustrated embodiment, the tractor and trailer portions of the second additional vehicle108(2) are generally aligned, e.g., because the second additional vehicle108(2) is traveling generally straight in the first lane106(1). In other examples, the third representation114(3) may adequately represent the second additional vehicle108(2), e.g., because, even when the second additional vehicle108(2) moves, the overall extents of the additional vehicle, e.g., the overall footprint of the vehicle, may vary only slightly.
However, generating a single representation or bounding box for each object may be suboptimal if the second additional vehicle108(2) intends to turn into the fifth lane106(5), as the second additional vehicle108(2) navigates that turn, the third object representation114(3) may be altered such as to include an overinclusive area of the environment100. In some instances, improper, e.g., overinclusive, representations can be problematic for comfortable and/or safe travel of the vehicle102. In such an example, the vehicle computing device(s)110may perceive the second additional vehicle108(2) as likely to impede travel of the vehicle102and/or as an object with which the vehicle102may potentially collide such as by entering the lane106(2). Accordingly, by representing the second additional vehicle108(2) using a single, overinclusive representation like the third representation114(3), the planning component may control the vehicle to perform an evasive maneuver, such as swerving, slowing down, and/or stopping the vehicle102to avoid the third object representation114(3), despite the fact that the third additional vehicle108(3) is in no way impeding or a threat to impede travel of the vehicle102. The additional vehicles108may also, or instead, be represented as multiple two-dimensional geometric structures, like the first object representation114(1) and the second object representation114(2). As illustrated, due to articulation of the first additional vehicle108(1), the first object representation114(1) is associated with a first portion (e.g., a tractor portion) and the second object representation114(2) is associated with a second portion (e.g., a trailer portion). In this example, the first additional vehicle108(1) is a tractor-trailer comprising a cab towing a trailer. The cab and trailer are not fixed as a rigid body, but instead, the trailer is attached such that it may pivot relative to the cab. The tractor-trailer represents one type of an articulated vehicle. Other types of articulated vehicles may include, but are not limited to, articulated buses, tow trucks with vehicles in tow, passenger vehicles towing other objects, or the like. Generally, and as used herein, an articulated object may refer to any object having two or more bodies (portions) that are movable relative to each other. Articulated objects may be characterized as having a footprint that changes as a result of articulation of the object. Generally, determining multiple representations for a single object rather than determining a single representation requires the vehicle computing device(s)110to use more computational resources (e.g., memory and/or processor allocation or usage) than determining a single representation, because the vehicle computing device(s)110detects and processes the tractor object and the trailer object as different objects in the environment. Accordingly, representing the additional vehicles108with multiple portions can cause the vehicle computing device(s)110to reduce an amount of available computational resources, which are limited. As also illustrated inFIG.1, the vehicle computing device(s)110include an articulated object modelling component116. The articulated object modelling component116can include functionality, which is implemented, in part, via one or more models. 
In examples, the articulated object modelling component116may join, define, classify, or otherwise determine that two objects (or the corresponding object representations), such as the tractor and the trailer, are an articulated object in the environment100. For instance, the articulated object modelling component116can apply heuristics and/or mathematical algorithms to sensor data associated with each object detected in the environment100to associate or join the two objects as a single articulated object. By implementing the articulated object modelling component116, object representations for articulated objects may be generated that better represent the footprint of such objects. The articulated object modelling component116can identify an articulated object in a variety of ways. For example, the articulated object modelling component116can determine if two object representations overlap and/or intersect with each other. For instance, the articulated object modelling component116can receive sensor data as input and identify that a portion of the first object representation114(1) and a portion of the second object representation114(2) includes an overlap118. The articulated object modelling component116may also, or instead, determine an intersection point120between the first object representation114(1) and the second object representation114(2). InFIG.1, the intersection point120is shown between a midline122of a first object (the tractor) and a midline124of a second object (the trailer), though the intersection point120may also be associated with one or more points of a boundary or edge of an object representation. Based at least in part on the overlap118and/or the intersection point120, the articulated object modelling component116can define an articulated object as encompassing both the first object representation114(1) and the second object representation114(2). In various examples, the articulated object modelling component116can define an articulated object based at least in part on a size of a detected object. For example, the articulated object modelling component116may compare the size (e.g., length, width, area, volume, or the like) of a detected object to a size threshold. For instance, an object representation that meets or exceeds the size threshold can be combined with another adjacent, intersecting, and/or overlapping object representation. The articulated object modelling component116can also, or instead, determine a distance between a point of the first object representation114(1) and another point of the second object representation114(2), and determine that the respective objects are joined based on the distance being less than a distance threshold, for example. Additional details for determining articulated objects can be found throughout this disclosure including inFIG.2and the description accompanying that figure. In various examples, an output by the articulated object modelling component116identifying an articulated object can be used by other models and components of the vehicle computing device(s)110such as a different motion model (e.g., an articulated object motion model126) that tracks movement of the articulated object over time. By dedicating a model to track movement based on the unique characteristics of an articulated object, determinations by the motion model can efficiently make use of available computational resources (e.g., memory and/or processor allocation or usage) while also improving accuracy of predictions.
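The overlap118and intersection-point120tests described above might be approximated as follows. The sketch assumes axis-aligned boxes and straight midline segments, which is a simplification of the oriented representations discussed in the disclosure, and the coordinates in the example are invented.

```python
def segments_intersect(p1, p2, q1, q2):
    """Return True if segment p1-p2 strictly crosses segment q1-q2 (e.g., midlines)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def boxes_overlap(box_a, box_b):
    """Axis-aligned overlap test; box = (x_min, y_min, x_max, y_max)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

if __name__ == "__main__":
    tractor_box = (0.0, 0.0, 6.0, 2.5)
    trailer_box = (5.5, 0.2, 18.0, 2.7)
    tractor_mid = ((0.0, 1.25), (6.0, 1.25))
    trailer_mid = ((5.5, 1.4), (18.0, 1.4))
    joined = boxes_overlap(tractor_box, trailer_box) or segments_intersect(
        *tractor_mid, *trailer_mid)
    print(joined)  # True: treat the two representations as one articulated object
```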
That is, the motion model can determine future states of the articulated object in less time and with more accuracy than a model that treats the portions of the articulated object as separate objects while also utilizing fewer processor and/or memory resources. In some examples, the functionality of the articulated object modelling component116and the articulated object motion model126can be combined into a single model and/or component. Upon the articulated object modelling component116determining the presence of an articulated object, the vehicle computing device(s)110can implement one or more additional models to track motion of the articulated object (e.g., the first additional vehicle108(1)). In some examples, the articulated object motion model126can identify future states of the first object representation114(1) and the second object representation114(2) based on a current state of one of the object representations (e.g., such as the front portion that directs travel of the rear portion). For example, the articulated object motion model126can predict future states of the first additional vehicle108(1) in the environment100(e.g., predict a position, a velocity, and/or an orientation, etc. of the articulated object at a future time). The articulated object motion model126may, for example, receive object state data associated with the articulated object at a first time, apply one or more filtering algorithms to representative portions of the articulated object, and output updated state data for the articulated object at a second time in the future. For example, the articulated object motion model126may output predicted states of a tractor (e.g., a first portion) and a trailer (e.g., a second portion) in the future based at least in part on filtering techniques that identify mathematical relationships between the portions (e.g., a front portion and a rear portion relative to a direction of travel) of the articulated object. Additional details for determining motion of articulated objects can be found throughout this disclosure including inFIG.3and the description accompanying that figure. Although the first object representation114(1) and the second object representation114(2) are shown in the example environment100as rectangles, other geometric shapes may be used for one or more of the object representations114. For instance, the sensor data may be processed by the vehicle computing device to output a top-down illustration of the environment100in two-dimensions or a bird's eye view in three dimensions. Thus, regardless of the shape of the object representations114, the articulated object modelling component116can determine when two object representations intersect and/or overlap. Additional examples of determining object state data and vehicle state data based on sensor data can be found in U.S. patent application Ser. No. 16/151,607, filed on Oct. 4, 2018, entitled “Trajectory Prediction on Top-Down Scenes,” which is incorporated herein by reference in its entirety and for all purposes. Additional examples of tracking objects can be found in U.S. patent application Ser. No. 16/147,328, filed on Sep. 28, 2018, entitled “Image Embedding for Object Matching,” which is incorporated herein by reference in its entirety and for all purposes. Additional examples of selecting bounding boxes can be found in U.S. patent application Ser. No. 16/201,842, filed on Nov. 27, 2018, entitled “Bounding Box Selection,” which is incorporated herein by reference in its entirety and for all purposes. 
Additional examples of determining whether objects are related as an articulated object can be found in U.S. patent application Ser. No. 16/586,455, filed on Sep. 27, 2019, entitled “Modeling Articulated Objects,” which is incorporated herein by reference in its entirety and for all purposes. Additional examples of tracking articulated objects over time can be found in U.S. patent application Ser. No. 16/804,717, filed on Oct. 4, 2018, entitled “Tracking Articulated Objects,” which is incorporated herein by reference in its entirety and for all purposes. FIG.2is an illustration of another example environment200in which one or more models determine presence of an articulated object. For instance, a computing device202can implement the articulated object modelling component116to associate or join two or more objects as a single articulated object with portions that move relative to each other. In some examples, the computing device202may be associated with vehicle computing device(s)404and/or computing device(s)436. In various examples, the articulated object modelling component116(also referred to as “the model”) receives input data204and generates output data206representing a classification of two objects (e.g., a first object208and a second object210) as an articulated object. The input data204can include one or more of: sensor data, map data, simulation data, and/or top-down representation data, and so on. Sensor data can include points212to represent an object and/or other features of the environment100. The points212can be associated with sensor data from a LIDAR sensor, a RADAR sensor, a camera, and/or other sensor modality. The input data204can also, or instead, include a classification of an object as an object type (e.g., car, truck, tractor, trailer, boat, camper, pedestrian, cyclist, animal, tree, road surface, curb, sidewalk, lamppost, signpost, unknown, etc.). In some examples, the points212can be used to determine the first object representation214and the second object representation216, while in other examples, the first object representation214and the second object representation216may be received as the input data204from another model. The points212may also be used to identify an articulated object. In one specific example, the first object208having an object type of a tractor and the second object210classified as a trailer may be depicted as a first object representation214and a second object representation216(e.g., rectangular bounding boxes) that substantially encompass the length and width of the respective object. As noted above, the points212may be generated by one or more sensors on an autonomous vehicle (the vehicle102) and/or may be derived from sensor data captured by one or more sensors on and/or remote from an autonomous vehicle. In some examples, the points212may be grouped as a plurality of points associated with a single object while in other examples the points212may be associated with multiple objects. In at least some examples, the points212may include segmentation information, which may associate each of the points212with the first object representation214or the second object representation216. Although the points212include points forming (or outlining) a generally continuous contour, in other examples, sensors may provide data about fewer than all sides. In some examples, the points212may be estimated for hidden or occluded surfaces based on known shapes and sizes of objects.
In some examples, the articulated object modelling component116can join two objects in the environment200based on one or more heuristics and/or algorithms that identify a relationship between the objects and/or object types. In such examples, the model can determine to join the first object208and the second object210based on a size, an intersection, and/or an overlap of the first object representation214and the second object representation216. For instance, the model may apply a physical heuristic, a physics algorithm, and/or a mathematical algorithm (e.g., linear algebra) to identify an articulated object based at least in part on at least one of the object representations (or a combination thereof) being larger than a threshold size, a distance between the object representations being within a threshold distance, an intersection point of the object representations, and/or an overlap of the object representations. Examples of physical heuristic, a physics algorithm, and/or a mathematical algorithm can include one or more of: a length heuristic (e.g., an object over a certain length such as when the object is in a straight line), a joining heuristic (e.g., an object center point is joinable with another object center point), a motion equation, a dynamics algorithm, a kinematics algorithm, a size heuristic, a distance heuristic, an intersection point algorithm, and/or an algorithm that determines an intersection and/or a distance between centerlines of two objects, just to name a few. In one specific example, the articulated object modelling component116can classify two objects in the environment200as an articulated object based on a size heuristic (e.g., one of the two objects is above a size threshold), a distance heuristic (e.g., a distance between points or midlines of the two objects), and/or a joining point heuristic (adjoining center points of the two objects are within a threshold distance of each other). In some examples, the size heuristic can include the model116determining a maximum allowable length of a single vehicle (e.g., a State law that limits an overall length of the single vehicle), and determining the articulated object based on the length of an object being over the maximum allowable length (e.g., an object over 40 feet is associated with another object as the articulated object because the single vehicle is limited to 40 feet). Thus, the model116can employ the size heuristic to identify a recreational vehicle, truck, and/or tractor that is towing a boat, another vehicle, or a trailer. The articulated object modelling component116can also, or instead, join two objects as the articulated object based at least in part on comparing data from different sensor modalities. If data from two sensor modalities are both associated with a same object type (a LIDAR sensor and a camera sensor both “see” a tractor portion or a trailer portion of a semi-truck), the model can combine two objects as the articulated object. For example, the model can compare LIDAR data representing an object with camera data to determine if the object represented by the LIDAR data is a same object represented by the camera data (e.g., does a camera sensor detect a same object as the LIDAR sensor). By way of example and not limitation, the LIDAR data can be associated with a vehicle such as a truck, and the one or more camera sensors can verify if the truck exists. 
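A compact sketch combining the size heuristic and the joining-point heuristic discussed above. The dictionary fields, the 12 m size threshold, and the 1 m joining distance are illustrative assumptions, not values taken from the disclosure.

```python
import math

def should_join(rep_a, rep_b, size_threshold=12.0, join_distance=1.0):
    """Decide whether two object representations should be joined as one articulated object.

    Each representation is a dict with a 'length' (m) and 'end_point'/'front_point'
    (x, y) fields.
    """
    # Size heuristic: a detection longer than a plausible single vehicle is a
    # candidate for joining with a neighbouring detection.
    oversized = rep_a["length"] >= size_threshold or rep_b["length"] >= size_threshold

    # Joining-point heuristic: the rear point of one representation should sit
    # within a small distance of the front point of the other.
    ax, ay = rep_a["end_point"]
    bx, by = rep_b["front_point"]
    close = math.hypot(ax - bx, ay - by) <= join_distance

    return oversized and close

if __name__ == "__main__":
    tractor = {"length": 7.0, "end_point": (6.0, 1.2), "front_point": (0.0, 1.2)}
    trailer = {"length": 14.5, "end_point": (20.8, 1.3), "front_point": (6.4, 1.3)}
    print(should_join(tractor, trailer))  # True
```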
In examples when the camera data represents a same object as the LIDAR data, the model116can determine presence of the articulated object based on data from both sensor modalities. In examples when the camera data does not represent the same object as the LIDAR data, the model116can determine presence of the articulated object based on the camera data. The articulated object modelling component116can, in some examples, determine a first size of the first object representation214and a second size of the second object representation216, and compare the first size or the second size to a size threshold. For instance, when a length, a width, and/or an area of an object representation meets or exceeds a threshold length, width, or area, the model (or the component or the system) joins the object representation with an overlapping or adjacent object to define an articulated object220. In some examples, only one of the two sizes of the object representations needs to meet or exceed the threshold size to join two objects. In other examples, a combined size of both object representations can be compared to the size threshold, and based on the comparison, the objects can be joined as the articulated object220(the size meets or exceeds the size threshold) or the objects cannot be joined (the size is less than the size threshold). The articulated object modelling component116may also, or instead, identify, classify, or otherwise determine an articulated object based at least in part on a distance between two points (e.g., a point associated with a midline, a center, a boundary, etc.) associated with each respective object. For example, the model can determine a distance between one or more points of the first object representation214and one or more points associated with the second object representation216and join the first object208and the second object210as the articulated object220based at least in part on a comparison of the distance to a distance threshold. The distance may be between points associated with a midline or a boundary, just to name a few. For instance, a distance between a point associated with a midline, a center, and/or a boundary of the first object representation214and another point associated with a midline, a center, and/or a boundary of the second object representation216may be compared to a distance threshold to determine that the first object representation214and the second object representation216represent the articulated object220. In examples when the distance between two boundary points of two object representations is equal to or less than a 1 meter distance threshold, the articulated object modelling component116can output a classification that the objects are joined as the articulated object220. In some examples, the distance between one or more points of the first object representation214and one or more points associated with the second object representation216can include a distance222between the intersection point218of the first object representation214and point(s) at a boundary of the first object representation214and/or a boundary of the second object representation216. Generally, the distance222can represent a maximum extent of the first object representation214and/or the second object representation216. In some examples, the articulated object motion model126may track motion of the articulated object220over time, including determining changes in a position of the first object representation214relative to the second object representation216.
For instance, the model may determine a joint intersection between the first object representation214and the second object representation216in a two-dimensional (e.g., x-y) coordinate system using the following equations:

$$\begin{bmatrix} x_0 \\ y_0 \end{bmatrix} + \alpha \begin{bmatrix} C_{\theta_0} \\ S_{\theta_0} \end{bmatrix} = \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + \beta \begin{bmatrix} C_{\theta_1} \\ S_{\theta_1} \end{bmatrix} \tag{1}$$

$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \frac{1}{S_{\theta_0-\theta_1}} \begin{bmatrix} -S_{\theta_1} & C_{\theta_1} \\ -S_{\theta_0} & C_{\theta_0} \end{bmatrix} \begin{bmatrix} x_0 - x_1 \\ y_0 - y_1 \end{bmatrix} \tag{2}$$

$$\tilde{E}_{x_0} = E_{x_0} \times 0.5 + \alpha + \delta \tag{3}$$

$$\tilde{E}_{x_1} = E_{x_1} \times 0.5 + \beta + \delta \tag{4}$$

where C=cosine, S=sine, θ=object state such as a yaw value, δ=distance222, α=distance from a center point to an end point of a first object, and β=distance from a center point to an end point of a second object. Equation (1) can represent an intersection point between two objects while equation (2) is a rearranged form of equation (1). Equations (3) and (4) output representations of the first object and the second object (e.g., the first object representation214and the second object representation216). In various examples, the articulated object modelling component116can determine the articulated object220based on determining that two or more object representations intersect and/or overlap. For instance, the first object representation214may have a point (e.g., a midline point, a center point, an edge point) that intersects and/or overlaps with a corresponding point of the second object representation216. In one specific example, the first object representation214may have a midline that intersects with another midline of the second object representation216. The model can output a classification that the first object208and the second object210represent the articulated object220based at least in part on determining that points of the object representations intersect and/or that at least some portions of each object representation overlap. The articulated object modelling component116may also, or instead, identify, classify, or otherwise determine an articulated object based at least in part on a control policy associated with the input data204. For instance, the computing device can identify behaviors of the first object and the second object over time (based on sensor data, map data, and so on), and apply a control policy, such as a right of way or a rule at an intersection, to join the first object and the second object in the environment. By way of example and not limitation, the articulated object modelling component116can identify, detect, or otherwise determine that two object representations proceed simultaneously from a stop sign, a green light, and so on. The articulated object modelling component116can, in some examples, receive sensor data over time and adjust, update, or otherwise determine a relationship between portions of the articulated object. For instance, the model116can disjoin, or reclassify, an articulated object as two separate objects based on the sensor data indicating the portions (or object representations) are no longer related (e.g., the portions became detached due to an accident or were erroneously determined to be an articulated object at an earlier time, etc.). That is, the model116can, based at least in part on a change in the relationship, update a classification of the first object and the second object (or additional objects making up the articulated object). In such examples, the relationship may be indicative of a covariant relationship between points of respective object representations. In some examples, the model116can define the covariant relationship to include covariance between a distance, a yaw, a velocity, and so on associated with different object representations.
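A hedged, illustrative implementation of equations (1)-(4) is sketched below; rather than hard-coding the matrix inverse of equation (2), the sketch solves the same 2x2 linear system numerically, and the variable names and use of NumPy are assumptions.

```python
# Illustrative sketch only: solving equations (1) and (2) for the distances
# from each object's center to the shared joint, then applying equations (3)
# and (4) to estimate the updated extents.
import numpy as np


def joint_intersection(x0, y0, yaw0, x1, y1, yaw1):
    """Return (alpha, beta): signed distances along each object's heading
    from its center to the point where the two midlines intersect."""
    # Equation (1): [x0, y0] + alpha*[cos yaw0, sin yaw0]
    #             = [x1, y1] + beta *[cos yaw1, sin yaw1]
    a = np.array([[np.cos(yaw0), -np.cos(yaw1)],
                  [np.sin(yaw0), -np.sin(yaw1)]])
    b = np.array([x1 - x0, y1 - y0])
    if abs(np.linalg.det(a)) < 1e-6:
        raise ValueError("midlines are (nearly) parallel; no unique joint")
    alpha, beta = np.linalg.solve(a, b)  # closed form given by equation (2)
    return alpha, beta


def updated_extents(extent0, extent1, alpha, beta, delta):
    """Equations (3) and (4): half-extent plus joint distance plus margin delta."""
    e0 = extent0 * 0.5 + alpha + delta
    e1 = extent1 * 0.5 + beta + delta
    return e0, e1
```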
FIG.3is an illustration of another example environment300in which one or more models determine potential states of an articulated object at a future time. For instance, the computing device202can implement the articulated object motion model126to predict future states of the articulated object220. In some examples, the computing device202may be associated with vehicle computing device(s)404and/or computing device(s)436. In various examples, the articulated object motion model126receives input data302(e.g., object state data, sensor data, map data, simulation data, etc.) from one or more models and/or components, and generates output data304representing articulated object state data at time(s) in the future. The input data302can include object state data (e.g., one or more of: position data, orientation data, heading data, velocity data, speed data, acceleration data, yaw rate data, or turning rate data, just to name a few) associated with the first object208and/or the second object210. Generally, the articulated object motion model126can predict a change in position, heading, yaw, velocity, acceleration, and/or the like for the articulated object220over time based at least in part on the input data302. The articulated object motion model126may, using one or more algorithms, define a relationship (e.g., a covariant relationship) between points and/or states of a first object and points and/or states of a second object of the articulated object. In this way, state data associated with the first object can be used to predict state data associated with the second object. For example, the model126can use state data associated with a tractor or a trailer to predict state data associated with the other of the tractor or the trailer. In some examples, the model126may receive sensor data over time and adjust and/or update the relationship between portions (e.g., object representations) of the articulated object. The articulated object motion model126may generate sets of estimated states of the vehicle102and one or more detected articulated objects forward in the environment300over a time period. The articulated object motion model126may generate a set of estimated states for each action (e.g., reference action and/or sub-action of an object and/or the vehicle) applicable to the environment. The sets of estimated states may include one or more estimated states, each estimated state including an estimated position of the vehicle and an estimated position of the articulated object220. In some examples, the estimated states may include estimated positions of the articulated object220at an initial time (T=0) (e.g., current time). The model126may determine the estimated positions based on a detected trajectory and/or predicted trajectories associated with the articulated object220. In some examples, the model126can determine the estimated positions based on an assumption of substantially constant velocity and/or substantially constant trajectory (e.g., little to no lateral movement of the object). In some examples, the estimated positions (and/or potential trajectories) may be based on passive and/or active prediction. In some examples, the articulated object motion model126may utilize physics and/or geometry-based techniques, machine learning, linear temporal logic, tree search methods, heat maps, and/or other techniques for determining predicted trajectories and/or estimated positions of articulated objects.
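A minimal sketch of generating such estimated states under the substantially-constant-velocity assumption is shown below; the state fields, the 8-second horizon, and the function names are illustrative assumptions, while the 0.1-second interval mirrors the example interval described herein.

```python
# Minimal sketch, assuming a constant-velocity / constant-yaw model: generating
# estimated positions of an articulated object at fixed intervals over a
# prediction horizon.
import math
from dataclasses import dataclass


@dataclass
class State:
    x: float      # meters
    y: float      # meters
    yaw: float    # radians
    speed: float  # meters per second


def estimated_states(initial: State, horizon_s: float = 8.0, dt: float = 0.1):
    """Yield (t, State) pairs at dt intervals assuming constant velocity and
    little to no lateral movement, as described above."""
    steps = int(horizon_s / dt)
    for i in range(1, steps + 1):
        t = i * dt
        yield t, State(
            x=initial.x + initial.speed * t * math.cos(initial.yaw),
            y=initial.y + initial.speed * t * math.sin(initial.yaw),
            yaw=initial.yaw,
            speed=initial.speed,
        )
```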
In various examples, the estimated states may be generated periodically throughout the time period. For example, the articulated object motion model126may generate estimated states at 0.1 second intervals throughout the time period. For another example, the articulated object motion model126may generate estimated states at 0.05 second intervals. The estimated states may be used by the planning component426in determining an action for the vehicle402to take in an environment (e.g., determining a planned trajectory such as trajectory306). In some examples, the articulated object motion model126may generate a vehicle representation308for time T1(and optionally other times) to represent an estimated state of the vehicle102at different times in the future. In various examples, the articulated object motion model126may utilize filtering techniques to predict future states of one or more articulated objects. In such examples, the filtering algorithms may determine a covariance and/or a mean between points of the first object representation214and the second object representation216as updated articulated state data (position, velocity, acceleration, trajectory, etc.) at a future time. For example, the articulated object motion model126can apply a filter algorithm (e.g., a Kalman filter) to object state data associated with the first object208and/or the second object210, and determine future states of both portions (or representations) of the articulated object220. In this way, the articulated object motion model126can predict future states for both portions of the articulated object more accurately and in less time versus predicting state data for both portions separately and without consideration to the portions being joined as an articulated object. The articulated object motion model126can be thought of as a “joined motion model” since it predicts motion of all portions of an articulated object (a front portion and additional connected rear portion(s)). For example, a first portion may direct motion of the second portion in the future (e.g., movement by a tractor directs movement of the one or more trailers). By determining that the two portions are connected as an articulated object, the articulated object motion model126can quickly predict a future position, a future velocity, and the like of the second portion based on data (a current orientation, a current velocity, etc.) associated with the first portion. Thus, the articulated object motion model126can output predictions (e.g., a trajectory, a position, a yaw, etc.) associated with an articulated object in less time and with more accuracy versus predicting all possible states for both portions of the articulated object separately. For example, the articulated object motion model126can output an articulated object representation310for time T1and an articulated object representation312for time T2associated with the first additional vehicle108(1). The articulated object motion model126can also, or instead, output an articulated object representation314for time T1and/or an articulated object representation316for time T1associated with the additional vehicle108(2). The articulated object motion model126can generate two or more object representations for a same time to represent possible actions the additional vehicle108(2) may take at a future time. In this way, the articulated object motion model126can determine a predicted position of the additional vehicle108(2) based on road conditions (e.g., straight or right turn as shown inFIG.3). 
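One possible, simplified form of such a joined predict step is sketched below; the five-element state layout, the kinematic trailer relation used for the trailing portion's heading, and the additive process noise are assumptions for illustration and do not represent the claimed filter.

```python
# Hedged sketch of a "joined motion model" predict step: a single filter state
# covers both portions of the articulated object, so propagating the leading
# portion also updates the trailing portion.
import numpy as np


def predict_joined_state(state, cov, dt, hitch_len, q=0.1):
    """state = [x, y, yaw_front, v, yaw_rear]; returns (state, cov) at t + dt."""
    x, y, yaw_f, v, yaw_r = state
    # Leading portion: constant speed and heading.
    x_n = x + v * dt * np.cos(yaw_f)
    y_n = y + v * dt * np.sin(yaw_f)
    # Trailing portion: its heading relaxes toward the leading portion's heading
    # at a rate set by the hitch length (simple kinematic trailer assumption).
    yaw_r_n = yaw_r + (v / hitch_len) * np.sin(yaw_f - yaw_r) * dt
    new_state = np.array([x_n, y_n, yaw_f, v, yaw_r_n])
    # Covariance propagated with a crude additive process noise; a full filter
    # would use the motion-model Jacobian or sigma points here instead.
    new_cov = cov + q * dt * np.eye(5)
    return new_state, new_cov
```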
The articulated object representations310,312,314, and/or316can be used by the computing device to perform a simulation involving the vehicle102(e.g., using one or more vehicle representations, such as the vehicle representation308). In various examples, the simulation can account for a reference action taken by the vehicle102and/or the additional vehicle108(2) at a future time, and a sub-action by the vehicle102and/or the additional vehicle108(2) responsive to the reference action. In one specific example, the articulated object motion model126can employ an extended Kalman filter or an unscented Kalman filter to calculate, generate, or otherwise determine predicted states of all portions (e.g., the truck and the trailer) of the articulated object. By employing one or more filters as described herein, predicting a future location, velocity, or trajectory of the articulated object can be performed using fewer processor and/or memory resources than models that do not identify a relationship between two objects or portions. In some examples, the articulated object motion model126can employ a Kalman filter in which a decomposition algorithm and/or a ranking algorithm is substituted for another algorithm to “speed up” calculations based on the Kalman filter. For example, the model126can utilize a modified unscented Kalman filter that determines a covariance from sigma points such that the computing device can determine a prediction in less time versus using typical square root unscented Kalman filters. The modified Kalman filter can include substituting operations of a QR decomposition and a Cholesky rank one downdate (which relies on performing a matrix calculation) with “2N rank one updates” and “one rank one downdate” operations to reduce processing latency. In this way, the modified Kalman filter can utilize covariance symmetry by employing “2N rank one symmetric updates” and “one rank one symmetric downdate”. In some examples, the model126can selectively employ the modified Kalman filter to remove processing of Jacobian matrices to improve an overall processing speed at which the model126can determine predictions. Thus, the modified Kalman filter can represent a mathematical enhancement to a Kalman filter that relies on derivative calculations. In some examples, the articulated object motion model126can determine, as a correlation, a characteristic (e.g., a first velocity, a first position, etc.) of the first portion and a characteristic (e.g., a second velocity, a second position, etc.) of the second portion. In such examples, the model can generate output data representing a predicted state of the first portion and the second portion based at least in part on the correlation. The articulated object motion model126is configured to determine an offset value between two object representations and predict future states for one or both of the portions of the articulated object based at least in part on the offset value. For example, the articulated object motion model126can receive state data of a first object representation (e.g., velocity of a truck), and predict a future velocity of the second representation (e.g., the trailer) based on the offset value. In some examples, the model can apply linear and/or non-linear algorithms to determine a covariance and/or a mean between one or more points of the first object representation114(1) and one or more points of the second object representation114(2).
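For illustration, the covariance-from-sigma-points building block that such a modified filter relies on (a sum of weighted symmetric rank-one updates in place of a QR decomposition and Cholesky downdate) can be sketched as follows; the function signature and the use of NumPy are assumptions, and the sketch does not reproduce the specific modified filter described above.

```python
# Illustrative sketch only: forming a covariance from sigma points as a sum of
# weighted symmetric rank-one updates, which exploits covariance symmetry and
# avoids matrix factorization steps.
import numpy as np


def covariance_from_sigma_points(sigma_points, weights, mean):
    """sigma_points: (2N+1, D) array, weights: (2N+1,) array, mean: (D,) array."""
    dim = mean.shape[0]
    cov = np.zeros((dim, dim))
    for chi, w in zip(sigma_points, weights):
        d = (chi - mean).reshape(dim, 1)
        cov += w * (d @ d.T)  # one symmetric rank-one update per sigma point
    return cov
```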
The articulated object motion model126may, in some examples, determine a covariance between sampled points associated with each object representation, and use the covariance to determine the output data304(e.g., predicted states of both portions of the articulated object). In various examples, the articulated object motion model126determines the offset value to represent a displacement or difference in a position (e.g., x, y, z in a three-dimensional coordinate system), a heading, a yaw, a velocity, an acceleration, etc. between two or more object representations making up an articulated object. The articulated object motion model126can generate the output data304in less time and with more accuracy based on the offset value without consideration of the infinite potential positions, headings, etc. considered by a model that does not determine an offset value. The articulated object motion model126can also, or instead, determine the output data304by employing linear algebra and other mathematical algorithms that do not rely on derivative calculations (or Jacobian matrices), thereby reducing an amount of time required to process the input data302. By implementing the articulated object motion model126, predicted states of articulated objects can be determined in less time versus not using the model, which provides more computational resources to the computing device202for other processing (e.g., processing additional objects in the environment) and improves safety of the vehicle102. In some examples, the articulated object motion model126can determine future states of an articulated object up to four times faster than conventional models that do not consider relationships of portions making up articulated objects. FIG.4illustrates a block diagram of an example system400for implementing the techniques described herein. In at least one example, the system400can include a vehicle402, which can be the same vehicle as the vehicle102described above with reference toFIG.1. The vehicle402may include a vehicle computing device404, one or more sensor systems406, one or more emitters408, one or more communication connections410, at least one direct connection412, and one or more drive system(s)414. The vehicle computing device404may include one or more processors416and memory418communicatively coupled with the one or more processors416. In the illustrated example, the vehicle402is an autonomous vehicle; however, the vehicle402could be any other type of vehicle, such as a semi-autonomous vehicle, or any other system having at least an image capture device (e.g., a camera enabled smartphone). In some instances, the autonomous vehicle402may be an autonomous vehicle configured to operate according to a Level5classification issued by the U.S. National Highway Traffic Safety Administration, which describes a vehicle capable of performing all safety-critical functions for the entire trip, with the driver (or occupant) not being expected to control the vehicle at any time. However, in other examples, the autonomous vehicle402may be a fully or partially autonomous vehicle having any other level or classification. In various examples, the vehicle computing device404may store sensor data associated with actual location of an object at the end of the set of estimated states (e.g., end of the period of time) and may use this data as training data to train one or more models.
In some examples, the vehicle computing device404may provide the data to a remote computing device (i.e., computing device separate from vehicle computing device such as the computing device(s)436) for data analysis. In such examples, the remote computing device(s) may analyze the sensor data to determine an actual location, velocity, direction of travel, or the like of the object at the end of the set of estimated states. Additional details of training a machine learned model based on stored sensor data by minimizing differences between actual and predicted positions and/or predicted trajectories are described in U.S. patent application Ser. No. 16/282,201, filed on Mar. 12, 2019, entitled “Motion Prediction Based on Appearance,” which is incorporated herein by reference for all purposes. In the illustrated example, the memory418of the vehicle computing device404stores a localization component420, a perception component422, a prediction component424, a planning component426, one or more system controllers428, one or more maps430, and a model component432including one or more model(s), such as a first model434A, a second model434B, up to an Nth model434N (collectively “models434”), where N is an integer. Though depicted inFIG.4as residing in the memory418for illustrative purposes, it is contemplated that the localization component420, the perception component422, the prediction component424, the planning component426, one or more system controllers428, one or more maps430, and/or the model component432including the model(s)434may additionally, or alternatively, be accessible to the vehicle402(e.g., stored on, or otherwise accessible by, memory remote from the vehicle402, such as, for example, on memory440of a remote computing device436). In at least one example, the localization component420may receive data from the sensor system(s)406to determine a position and/or orientation of the vehicle402(e.g., one or more of an x-, y-, z-position, roll, pitch, or yaw). For example, the localization component420may include and/or request/receive a map of an environment, such as from map(s)430and/or map component446, and may continuously determine a location and/or orientation of the autonomous vehicle within the map. In some instances, the localization component420may utilize SLAM (simultaneous localization and mapping), CLAMS (calibration, localization and mapping, simultaneously), relative SLAM, bundle adjustment, non-linear least squares optimization, or the like to receive image data, LIDAR data, RADAR data, IMU data, GPS data, wheel encoder data, and the like to accurately determine a location of the autonomous vehicle. In some instances, the localization component420may provide data to various components of the vehicle402to determine an initial position of an autonomous vehicle for determining the relevance of an object to the vehicle402, as discussed herein. In some instances, the perception component422may perform object detection, segmentation, and/or classification. In some examples, the perception component422may provide processed sensor data that indicates a presence of an object (e.g., entity) that is proximate to the vehicle402and/or a classification of the object as an object type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, unknown, etc.).
In some examples, the perception component422may provide processed sensor data that indicates a presence of a stationary entity that is proximate to the vehicle402and/or a classification of the stationary entity as a type (e.g., building, tree, road surface, curb, sidewalk, unknown, etc.). In additional or alternative examples, the perception component422may provide processed sensor data that indicates one or more features associated with a detected object (e.g., a tracked object) and/or the environment in which the object is positioned. In implementations, the perception component422can specifically identify articulated objects, such as articulated vehicles. In some examples, features associated with an object may include, but are not limited to, an x-position (global and/or local position), a y-position (global and/or local position), a z-position (global and/or local position), an orientation (e.g., a roll, pitch, yaw), an object type (e.g., a classification), a velocity of the object, an acceleration of the object, an extent of the object (size), etc. Features associated with the environment may include, but are not limited to, a presence of another object in the environment, a state of another object in the environment, a time of day, a day of a week, a season, a weather condition, an indication of darkness/light, etc. The prediction component424can generate one or more probability maps representing prediction probabilities of possible locations of one or more objects in an environment. For example, the prediction component424can generate one or more probability maps for articulated objects, vehicles, pedestrians, animals, and the like within a threshold distance from the vehicle402. In some instances, the prediction component424can measure a track of an object and generate a discretized prediction probability map, a heat map, a probability distribution, a discretized probability distribution, and/or a trajectory for the object based on observed and predicted behavior. In some instances, the one or more probability maps can represent an intent of the one or more objects in the environment. In some examples, the prediction component424may generate predicted trajectories of objects (e.g., articulated objects) in an environment and/or to generate predicted candidate trajectories for the vehicle402. For example, the prediction component424may generate one or more predicted trajectories for objects within a threshold distance from the vehicle402. In some examples, the prediction component424may measure a trace of an object and generate a trajectory for the object based on observed and predicted behavior. In general, the planning component426may determine a path for the vehicle402to follow to traverse through an environment. For example, the planning component426may determine various routes and trajectories and various levels of detail. For example, the planning component426may determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location). For the purpose of this discussion, a route may include a sequence of waypoints for travelling between two locations. As non-limiting examples, waypoints include streets, intersections, global positioning system (GPS) coordinates, etc. Further, the planning component426may generate an instruction for guiding the autonomous vehicle along at least a portion of the route from the first location to the second location. 
In at least one example, the planning component426may determine how to guide the autonomous vehicle from a first waypoint in the sequence of waypoints to a second waypoint in the sequence of waypoints. In some examples, the instruction may be a candidate trajectory, or a portion of a trajectory. In some examples, multiple trajectories may be substantially simultaneously generated (e.g., within technical tolerances) in accordance with a receding horizon technique. A single path of the multiple paths in a receding data horizon having the highest confidence level may be selected to operate the vehicle. In various examples, the planning component426can select a trajectory for the vehicle402based at least in part on receiving data representing an output of the model component432. In other examples, the planning component426can alternatively, or additionally, use data from the localization component420, the perception component422, and/or the prediction component424to determine a path for the vehicle402to follow to traverse through an environment. For example, the planning component426can receive data from the localization component420, the perception component422, and/or the prediction component424regarding objects associated with an environment. Using this data, the planning component426can determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location) to avoid objects in an environment. In at least some examples, such a planning component426may determine there is no such collision free path and, in turn, provide a path which brings vehicle402to a safe stop avoiding all collisions and/or otherwise mitigating damage. Additionally or alternatively, the planning component426can determine the path for the vehicle402to follow based at least in part on data received from the articulated object modelling component116and/or the articulated object motion model126as described inFIGS.1-3and elsewhere. In at least one example, the vehicle computing device404may include one or more system controllers428, which may be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle402. The system controller(s)428may communicate with and/or control corresponding systems of the drive system(s)414and/or other components of the vehicle402. The memory418may further include one or more maps430that may be used by the vehicle402to navigate within the environment. For the purpose of this discussion, a map may be any number of data structures modeled in two dimensions, three dimensions, or N-dimensions that are capable of providing information about an environment, such as, but not limited to, topologies (such as intersections), streets, mountain ranges, roads, terrain, and the environment in general. In some instances, a map may include, but is not limited to: texture information (e.g., color information (e.g., RGB color information, Lab color information, HSV/HSL color information), and the like), intensity information (e.g., LIDAR information, RADAR information, and the like); spatial information (e.g., image data projected onto a mesh, individual “surfels” (e.g., polygons associated with individual color and/or intensity)), reflectivity information (e.g., specularity information, retroreflectivity information, BRDF information, BSSRDF information, and the like). In one example, a map may include a three-dimensional mesh of the environment. 
In some examples, the vehicle402may be controlled based at least in part on the map(s)430. That is, the map(s)430may be used in connection with the localization component420, the perception component422, the prediction component424, and/or the planning component426to determine a location of the vehicle402, detect objects in an environment, generate routes, determine actions and/or trajectories to navigate within an environment. In some examples, the one or more maps430may be stored on a remote computing device(s) (such as the computing device(s)436) accessible via network(s)442. In some examples, multiple maps430may be stored based on, for example, a characteristic (e.g., type of entity, time of day, day of week, season of the year, etc.). Storing multiple maps430may have similar memory requirements, but increase the speed at which data in a map may be accessed. As illustrated inFIG.4, the vehicle computing device404may include a model component432. The model component432may be configured to perform the functionality of the articulated object modelling component116and/or the articulated object motion model126, including predicting presence and/or motion of articulated objects, such as with the additional vehicles108(1) and108(2), and the articulated object220. In various examples, the model component432may receive one or more features associated with the detected object(s) from the perception component422and/or from the sensor system(s)406. For instance, the articulated object modelling component116can receive data, e.g., sensor data, associated with two or more objects and determine presence of an articulated object in an environment. In some examples, the model component432may receive environment characteristics (e.g., environmental factors, etc.) and/or weather characteristics (e.g., weather factors such as snow, rain, ice, etc.) from the perception component422and/or the sensor system(s)406. While shown separately inFIG.4, the model component432could be part of the prediction component424, the planning component426, or other component(s) of the vehicle402. In various examples, the model component432may send predictions from the one or more models434that may be used by the prediction component424and/or the planning component426to generate one or more predicted trajectories of the object (e.g., direction of travel, speed, etc.), such as from the prediction component thereof. In some examples, the planning component426may determine one or more actions (e.g., reference actions and/or sub-actions) for the vehicle402, such as vehicle candidate trajectories. In some examples, the model component432may be configured to determine whether an articulated object intersects with the vehicle402based at least in part on the one or more actions for the vehicle402. In some examples, the model component432may be configured to determine the actions that are applicable to the environment, such as based on environment characteristics, weather characteristics, or the like. The model component432may generate sets of estimated states of the vehicle and one or more detected objects forward in the environment over a time period. The model component432may generate a set of estimated states for each action (e.g., reference action and/or sub-action) applicable to the environment.
The sets of estimated states may include one or more estimated states, each estimated state including an estimated position of the vehicle and an estimated position of a detected object(s). In some examples, the estimated states may include estimated positions of the detected objects at an initial time (T=0) (e.g., current time). The model component432may determine the estimated positions based on a detected trajectory and/or predicted trajectories associated with the object. In some examples, determining the estimated positions may be based on an assumption of substantially constant velocity and/or substantially constant trajectory (e.g., little to no lateral movement of the object). In some examples, the estimated positions (and/or potential trajectories) may be based on passive and/or active prediction. In some examples, the model component432may utilize physics and/or geometry-based techniques, machine learning, linear temporal logic, tree search methods, heat maps, and/or other techniques for determining predicted trajectories and/or estimated positions of objects. In various examples, the estimated states may be generated periodically throughout the time period. For example, the model component432may generate estimated states at 0.1 second intervals throughout the time period. For another example, the model component432may generate estimated states at 0.05 second intervals. The estimated states may be used by the planning component426in determining an action for the vehicle402to take in an environment. In various examples, the model component432may utilize machine learned techniques to predict risks associated with evaluated trajectories. In such examples, the machine learned algorithms may be trained to determine, based on sensor data and/or previous predictions by the model, that an object is likely to behave in a particular way relative to the vehicle402at a particular time during a set of estimated states (e.g., time period). In such examples, one or more of the vehicle402state (position, velocity, acceleration, trajectory, etc.) and/or the articulated object state, classification, etc. may be input into such a machine learned model and, in turn, a behavior prediction may be output by the model. In various examples, characteristics associated with each object type may be used by the model component432to determine an object velocity or acceleration for predicting potential intersection(s) between objects and/or between the vehicle402and one or more objects. Examples of characteristics of an object type may include, but not be limited to: a maximum longitudinal acceleration, a maximum lateral acceleration, a maximum vertical acceleration, a maximum speed, maximum change in direction for a given speed, and the like. As can be understood, the components discussed herein (e.g., the localization component420, the perception component422, the prediction component424, the planning component426, the one or more system controllers428, the one or more maps430, the model component432including the model(s)434) are described as divided for illustrative purposes. However, the operations performed by the various components may be combined or performed in any other component. While examples are given in which the techniques described herein are implemented by a planning component and/or a model component of the vehicle, in some examples, some or all of the techniques described herein could be implemented by another system of the vehicle, such as a secondary safety system.
Generally, such an architecture can include a first computing device to control the vehicle402and a secondary safety system that operates on the vehicle402to validate operation of the primary system and to control the vehicle402to avoid collisions. In some instances, aspects of some or all of the components discussed herein may include any models, techniques, and/or machine learned techniques. For example, in some instances, the components in the memory418(and the memory440, discussed below) may be implemented as a neural network. As described herein, an exemplary neural network is a technique which passes input data through a series of connected layers to produce an output. Each layer in a neural network may also comprise another neural network, or may comprise any number of layers (whether convolutional or not). As can be understood in the context of this disclosure, a neural network may utilize machine learning, which may refer to a broad class of such techniques in which an output is generated based on learned parameters. Although discussed in the context of neural networks, any type of machine learning may be used consistent with this disclosure. For example, machine learning techniques may include, but are not limited to, regression techniques (e.g., ordinary least squares regression (OLSR), linear regression, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS)), regularization techniques (e.g., ridge regression, least absolute shrinkage and selection operator (LASSO), elastic net, least-angle regression (LARS)), decision tree techniques (e.g., classification and regression tree (CART), iterative dichotomiser 3 (ID3), Chi-squared automatic interaction detection (CHAID), decision stump, conditional decision trees), Bayesian techniques (e.g., naïve Bayes, Gaussian naïve Bayes, multinomial naïve Bayes, average one-dependence estimators (AODE), Bayesian belief network (BNN), Bayesian networks), clustering techniques (e.g., k-means, k-medians, expectation maximization (EM), hierarchical clustering), artificial neural network techniques (e.g., perceptron, back-propagation, Hopfield network, Radial Basis Function Network (RBFN)), deep learning techniques (e.g., Deep Boltzmann Machine (DBM), Deep Belief Networks (DBN), Convolutional Neural Network (CNN), Stacked Auto-Encoders), Dimensionality Reduction Techniques (e.g., Principal Component Analysis (PCA), Principal Component Regression (PCR), Partial Least Squares Regression (PLSR), Sammon Mapping, Multidimensional Scaling (MDS), Projection Pursuit, Linear Discriminant Analysis (LDA), Mixture Discriminant Analysis (MDA), Quadratic Discriminant Analysis (QDA), Flexible Discriminant Analysis (FDA)), Ensemble Techniques (e.g., Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (blending), Gradient Boosting Machines (GBM), Gradient Boosted Regression Trees (GBRT), Random Forest), SVM (support vector machine), supervised learning, unsupervised learning, semi-supervised learning, etc. Additional examples of architectures include neural networks such as ResNet50, ResNet101, VGG, DenseNet, PointNet, and the like.
In at least one example, the sensor system(s)406may include LIDAR sensors, RADAR sensors, ultrasonic transducers, sonar sensors, location sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., inertial measurement units (IMUs), accelerometers, magnetometers, gyroscopes, etc.), cameras (e.g., RGB, IR, intensity, depth, time of flight, etc.), microphones, wheel encoders, environment sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), etc. The sensor system(s)406may include multiple instances of each of these or other types of sensors. For instance, the LIDAR sensors may include individual LIDAR sensors located at the corners, front, back, sides, and/or top of the vehicle402. As another example, the camera sensors may include multiple cameras disposed at various locations about the exterior and/or interior of the vehicle402. The sensor system(s)406may provide input to the vehicle computing device404. Additionally, or in the alternative, the sensor system(s)406may send sensor data, via the one or more networks442, to the one or more computing device(s)436at a particular frequency, after a lapse of a predetermined period of time, in near real-time, etc. The vehicle402may also include one or more emitters408for emitting light and/or sound. The emitter(s)408may include interior audio and visual emitters to communicate with passengers of the vehicle402. By way of example and not limitation, interior emitters may include speakers, lights, signs, display screens, touch screens, haptic emitters (e.g., vibration and/or force feedback), mechanical actuators (e.g., seatbelt tensioners, seat positioners, headrest positioners, etc.), and the like. The emitter(s)408may also include exterior emitters. By way of example and not limitation, the exterior emitters may include lights to signal a direction of travel or other indicator of vehicle action (e.g., indicator lights, signs, light arrays, etc.), and one or more audio emitters (e.g., speakers, speaker arrays, horns, etc.) to audibly communicate with pedestrians or other nearby vehicles, one or more of which comprising acoustic beam steering technology. The vehicle402may also include one or more communication connections410that enable communication between the vehicle402and one or more other local or remote computing device(s). For instance, the communication connection(s)410may facilitate communication with other local computing device(s) on the vehicle402and/or the drive system(s)414. Also, the communication connection(s)410may allow the vehicle to communicate with other nearby computing device(s) (e.g., remote computing device436, other nearby vehicles, etc.) and/or one or more remote sensor system(s)444for receiving sensor data. The communications connection(s)410also enable the vehicle402to communicate with a remote teleoperations computing device or other remote services. The communications connection(s)410may include physical and/or logical interfaces for connecting the vehicle computing device404to another computing device or a network, such as network(s)442. For example, the communications connection(s)410can enable Wi-Fi-based communication such as via frequencies defined by the IEEE 802.11 standards, short range wireless frequencies such as Bluetooth, cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G, etc.) or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s). 
In at least one example, the vehicle402may include one or more drive systems414. In some examples, the vehicle402may have a single drive system414. In at least one example, if the vehicle402has multiple drive systems414, individual drive systems414may be positioned on opposite ends of the vehicle402(e.g., the front and the rear, etc.). In at least one example, the drive system(s)414may include one or more sensor systems to detect conditions of the drive system(s)414and/or the surroundings of the vehicle402. By way of example and not limitation, the sensor system(s) may include one or more wheel encoders (e.g., rotary encoders) to sense rotation of the wheels of the drive modules, inertial sensors (e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers, etc.) to measure orientation and acceleration of the drive module, cameras or other image sensors, ultrasonic sensors to acoustically detect objects in the surroundings of the drive module, LIDAR sensors, RADAR sensors, etc. Some sensors, such as the wheel encoders may be unique to the drive system(s)414. In some cases, the sensor system(s) on the drive system(s)414may overlap or supplement corresponding systems of the vehicle402(e.g., sensor system(s)406). The drive system(s)414may include many of the vehicle systems, including a high voltage battery, a motor to propel the vehicle, an inverter to convert direct current from the battery into alternating current for use by other vehicle systems, a steering system including a steering motor and steering rack (which can be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components, a stability control system for distributing brake forces to mitigate loss of traction and maintain control, an HVAC system, lighting (e.g., lighting such as head/tail lights to illuminate an exterior surrounding of the vehicle), and one or more other systems (e.g., cooling system, safety systems, onboard charging system, other electrical components such as a DC/DC converter, a high voltage junction, a high voltage cable, charging system, charge port, etc.). Additionally, the drive system(s)414may include a drive module controller which may receive and preprocess data from the sensor system(s) and to control operation of the various vehicle systems. In some examples, the drive module controller may include one or more processors and memory communicatively coupled with the one or more processors. The memory may store one or more modules to perform various functionalities of the drive system(s)414. Furthermore, the drive system(s)414may also include one or more communication connection(s) that enable communication by the respective drive module with one or more other local or remote computing device(s). In at least one example, the direct connection412may provide a physical interface to couple the one or more drive system(s)414with the body of the vehicle402. For example, the direct connection412may allow the transfer of energy, fluids, air, data, etc. between the drive system(s)414and the vehicle. In some instances, the direct connection412may further releasably secure the drive system(s)414to the body of the vehicle402. 
In at least one example, the localization component420, the perception component422, the prediction component424, the planning component426, the one or more system controllers428, the one or more maps430, and the model component432, may process sensor data, as described above, and may send their respective outputs, over the one or more network(s)442, to the computing device(s)436. In at least one example, the localization component420, the perception component422, the prediction component424, the planning component426, the one or more system controllers428, the one or more maps430, and the model component432may send their respective outputs to the remote computing device(s)436at a particular frequency, after a lapse of a predetermined period of time, in near real-time, etc. In some examples, the vehicle402may send sensor data to the computing device(s)436via the network(s)442. In some examples, the vehicle402may receive sensor data from the computing device(s)436and/or remote sensor system(s)444via the network(s)442. The sensor data may include raw sensor data and/or processed sensor data and/or representations of sensor data. In some examples, the sensor data (raw or processed) may be sent and/or received as one or more log files. The computing device(s)436may include processor(s)438and a memory440storing the map component446, a sensor data processing component448, and a training component450. In some examples, the map component446may generate maps of various resolutions. In such examples, the map component446may send one or more maps to the vehicle computing device404for navigational purposes. In various examples, the sensor data processing component448may be configured to receive data from one or more remote sensors, such as sensor system(s)406and/or remote sensor system(s)444. In some examples, the sensor data processing component448may be configured to process the data and send processed sensor data to the vehicle computing device404, such as for use by the model component432(e.g., the model(s)434). In some examples, the sensor data processing component448may be configured to send raw sensor data to the vehicle computing device404. In some instances, the training component450can train a machine learning model to output articulated object trajectories. For example, the training component450can receive sensor data that represents an object traversing through an environment for a period of time, such as 0.1 milliseconds, 1 second, 3 seconds, 5 seconds, 7 seconds, and the like. At least a portion of the sensor data can be used as an input to train the machine learning model. In some instances, the training component450may be executed by the processor(s)438to train a machine learning model based on training data. The training data may include a wide variety of data, such as sensor data, audio data, image data, map data, inertia data, vehicle state data, historical data (log data), or a combination thereof, that is associated with a value (e.g., a desired classification, inference, prediction, etc.). Such values may generally be referred to as a “ground truth.” To illustrate, the training data may be used for determining risk associated with evaluated trajectories and, as such, may include data representing an environment that is captured by an autonomous vehicle and that is associated with one or more classifications or determinations.
In some examples, such a classification may be based on user input (e.g., user input indicating that the data depicts a specific risk) or may be based on the output of another machine learned model. In some examples, such labeled classifications (or more generally, the labeled output associated with training data) may be referred to as ground truth. In some instances, the training component450can train a machine learning model to output classification values. For example, the training component450can receive data that represents labelled collision data (e.g. publicly available data, sensor data, and/or a combination thereof). At least a portion of the data can be used as an input to train the machine learning model. Thus, by providing data where the vehicle traverses an environment, the training component450can be trained to output potential intersection(s) associated with objects, as discussed herein. In some examples, the training component450can include training data that has been generated by a simulator. For example, simulated training data can represent examples where a vehicle collides with an object in an environment or nearly collides with an object in an environment, to provide additional training examples. The processor(s)416of the vehicle402and the processor(s)438of the computing device(s)436may be any suitable processor capable of executing instructions to process data and perform operations as described herein. By way of example and not limitation, the processor(s)416and438may comprise one or more Central Processing Units (CPUs), Graphics Processing Units (GPUs), or any other device or portion of a device that processes electronic data to transform that electronic data into other electronic data that may be stored in registers and/or memory. In some examples, integrated circuits (e.g., ASICs, etc.), gate arrays (e.g., FPGAs, etc.), and other hardware devices may also be considered processors in so far as they are configured to implement encoded instructions. Memory418and memory440are examples of non-transitory computer-readable media. The memory418and memory440may store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems. In various implementations, the memory may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory capable of storing information. The architectures, systems, and individual elements described herein may include many other logical, programmatic, and physical components, of which those shown in the accompanying figures are merely examples that are related to the discussion herein. It should be noted that whileFIG.4is illustrated as a distributed system, in alternative examples, components of the vehicle402may be associated with the computing device(s)436and/or components of the computing device(s)436may be associated with the vehicle402. That is, the vehicle402may perform one or more of the functions associated with the computing device(s)436, and vice versa. FIGS.5and6illustrate example processes in accordance with examples of the disclosure. These processes are illustrated as logical flow graphs, each operation of which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. 
In the context of software, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be omitted and/or combined in any order and/or in parallel to implement the processes. FIG.5is a flowchart illustrating an example process500for determining articulated objects using one or more example models. For example, some or all of the process500can be performed by one or more components inFIG.4, as described herein. For example, some or all of the process500can be performed by the vehicle computing device404or the computing device202. However, the process500is not limited to being performed by these components, and the components are not limited to performing the process500. At operation502, the process500can include receiving sensor data from a sensor associated with a vehicle in an environment. In some examples, the operation502can include a computing device receiving sensor data from the perception component422. The sensor data may be received from one or more sensors on the vehicle and/or from one or more remote sensors. For example, techniques described herein may be useful to detect articulated objects, and the operation502may include receiving a group, blob, or cluster of points (e.g. points212) associated with an articulated object. The points may be generated by one or more sensors, such as a LIDAR sensor, or may be generated from sensor data associated with two or more sensors (e.g., fused data). In at least some examples, the points may have an associated position, e.g., in an x-y coordinate system. In some examples, the sensor data can be processed to determine a two-dimensional representation of the environment (e.g., top-down multi-channel data, vector data, an occupancy grid, etc.). At operation504, the process500can include determining, based at least in part on the sensor data, a first representation of a first object in the environment and a second representation of a second object in the environment. For instance, the computing device can generate a first object representation214to represent a tractor and a second object representation216to represent a trailer. The first object representation214or the second object representation216can be a bounding box having a length and a width of the respective object as a top-down view. At operation506, the process500can include applying, by a model, a size heuristic, a distance heuristic, and a joining point heuristic to the first representation and the second representation. For instance, the operation506can include the articulated object modelling component116applying one or more heuristics and/or algorithms to the first object representation214and the second object representation216to identify a relationship between sizes, distances, and/or points of the object representations. 
As detailed above inFIGS.1and2, the articulated object modelling component116can apply mathematical techniques to identify a size of an object representation, a distance between object representations, an intersection between object representations, and/or an overlap between object representations. At operation508, the process500can include determining, by the model and based at least in part on the applying, that the first object and the second object are joined in the environment. For example, the operation508can include the articulated object modelling component116determining to join the first object208and the second object210based at least in part on the size of an object representation and/or the distance, the intersection, or the overlap between the object representations. For example, the size of the first object representation214can be compared to a size threshold, and the first object representation214can be combined with the second object representation216when the size meets or exceeds the size threshold. Additionally or alternatively, the articulated object modelling component116can join the first object208and the second object210based on determining that at least some portions of the objects overlap and/or intersect. At operation510, the process500can include classifying the first object and the second object as an articulated object. For example, the articulated object modelling component116can generate output data206classifying the first object208and the second object210as a single articulated object (e.g., the articulated object220). In this way, the articulated object modelling component116can detect presence of an articulated object in the environment, and send information about the articulated object to one or more other components of the computing device. At operation512, the process500can include controlling the vehicle in the environment relative to the articulated object. In some examples, the operation512can include a planning component (e.g., planning component426) of the vehicle computing system using the predictions received from the articulated object modelling component116and/or the articulated object motion model126to control a vehicle as it navigates in an environment (e.g., vehicle102using the trajectory306). In various examples, predictions from the first model434A, the second model434B, and/or the Nth model434N enable a planning component of the vehicle to improve how the vehicle navigates (avoids objects) in the environment. For example, the computing device can determine a trajectory for the vehicle based at least in part on the output from the articulated object modelling component116indicating presence of the articulated object. In some examples, data representing an output from a model is sent to a perception component (e.g., perception component422) to change at least one of a resolution, a bit rate, a rate of capture, or a compression at which sensor data is captured or stored. In various examples, setting(s) associated with the sensor system (e.g., sensor system406) may be adjusted to cause one or more sensors of the vehicle to change operation based at least in part on a signal output from a model and sent to the perception component. FIG.6is a flowchart illustrating an example process600for determining potential states of an articulated object at a future time using one or more example models. For example, some or all of the process600can be performed by one or more components inFIG.4, as described herein.
For example, some or all of the process 600 can be performed by the vehicle computing device 404 or the computing device 202. However, the process 600 is not limited to being performed by these components, and the components are not limited to performing the process 600. At operation 602, the process can include receiving sensor data from a sensor associated with a vehicle in an environment. In some examples, the operation 602 can include a computing device receiving sensor data from one or more sensors on the vehicle and/or from one or more remote sensors. Techniques described herein may be useful to determine the presence and/or motion of an articulated object. In some examples, the sensor data can include data fused from one or more sensor modalities, including a time-of-flight sensor, LIDAR, RADAR, or the like. At operation 604, the process 600 can include determining, based at least in part on the sensor data, presence of an articulated object in the environment, the articulated object including a first portion and a second portion. For example, the computing device 202 can employ the articulated object modelling component 116 to classify two or more objects as an articulated object, and output an indication of the articulated object to one or more other components of the vehicle computing device(s) 404. The articulated object can include at least two portions, such as a front portion and one or more rear portions. At operation 606, the process 600 can include inputting, into a model, state data associated with the articulated object at a first time. For example, the articulated object motion model 126 can receive object state data such as position data, orientation data, heading data, velocity data, speed data, acceleration data, yaw rate data, or turning rate data associated with one or more portions of the articulated object, usable to determine relative movement, e.g., velocity, position, acceleration, and so on, of both portions of the articulated object. In some examples, the computing device can determine the state data based on comparing historical sensor data to determine position, orientation, heading, velocity, and so on of objects having a same object type. At operation 608, the process 600 can include determining, by the model and based at least in part on the state data, a mathematical relationship between the first portion and the second portion of the articulated object. In some examples, the operation 608 can include the articulated object motion model 126 determining a joint offset value indicating a displacement between the first portion (the first object representation 214) and the second portion (the second object representation 216). Additionally or alternatively, the articulated object motion model 126 can use the state data to determine a covariance and/or a mean between the two portions. In some examples, the operation 608 can include implementing linear algebra algorithms that determine a relationship between the first portion and the second portion of the articulated object. The articulated object motion model 126 can also, or instead, employ filtering techniques, such as applying a Kalman filter, to select points associated with the first object representation 214 and/or the second object representation 216. Based on the selected points, the computing device determines motion of the first portion relative to the second portion.
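The sketch below is only an illustration of the kind of joint offset and relative motion quantities described for operation 608; it is not the articulated object motion model 126 itself, and the [x, y, yaw, v] state layout, the hitch placement, and the function names are assumptions.

```python
import numpy as np

def joint_offset(front_state: np.ndarray, rear_state: np.ndarray,
                 front_half_length: float, rear_half_length: float) -> np.ndarray:
    """Displacement between the joint positions implied by each portion's state.

    Each state is assumed to be [x, y, yaw, v] for the portion's center, with the
    joint assumed to sit at the rear edge of the front portion and at the front
    edge of the rear portion.  A consistent articulation gives an offset near zero.
    """
    fx, fy, fyaw, _ = front_state
    rx, ry, ryaw, _ = rear_state
    front_joint = np.array([fx - front_half_length * np.cos(fyaw),
                            fy - front_half_length * np.sin(fyaw)])
    rear_joint = np.array([rx + rear_half_length * np.cos(ryaw),
                           ry + rear_half_length * np.sin(ryaw)])
    return front_joint - rear_joint

def relative_velocity(front_state: np.ndarray, rear_state: np.ndarray) -> np.ndarray:
    """Planar velocity of the front portion relative to the rear portion."""
    _, _, fyaw, fv = front_state
    _, _, ryaw, rv = rear_state
    front_vel = fv * np.array([np.cos(fyaw), np.sin(fyaw)])
    rear_vel = rv * np.array([np.cos(ryaw), np.sin(ryaw)])
    return front_vel - rear_vel

# Example: a tractor heading straight and a trailer trailing at a small yaw angle.
tractor_state = np.array([0.0, 0.0, 0.0, 5.0])    # x, y, yaw, speed
trailer_state = np.array([-8.0, 0.3, 0.05, 5.0])
print(joint_offset(tractor_state, trailer_state, 3.25, 5.0))
print(relative_velocity(tractor_state, trailer_state))
```

In a filter-based implementation, such as the Kalman filter variants discussed herein, quantities of this kind would typically be maintained jointly with their covariance inside the filter state rather than computed point-wise as shown.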
At operation610, the process600can include receiving, as an output from the model and based at least in part on the mathematical relationship, a predicted state of the first portion and the second portion of the articulated object at a second time after the first time. In some examples, the operation610can include the articulated object motion model126using information about the mathematical relationship to predict a combined state of the first portion and the second portion at a future time. For example, the computing device can determine estimated states of the articulated object based at least in part on the filtering techniques discussed herein. At operation612, the process600can include controlling the vehicle in the environment based at least in part on the predicted state of the articulated object. For instance, the vehicle computing device404can determine a trajectory for the vehicle402based on the predicted state of the first portion and the second portion in the future. In some instances, the operation612can include generating commands that can be relayed to a controller onboard an autonomous vehicle to control the autonomous vehicle to drive a travel path according to the trajectory. Although discussed in the context of an autonomous vehicle, the process600, and the techniques and systems described herein, can be applied to a variety of systems utilizing sensors. The methods described herein represent sequences of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. In some examples, one or more operations of the method may be omitted entirely. For instance, the operations may include determining a first action and a second action by the vehicle relative to a selected trajectory without determining a respective cost for one or more of the actions by the vehicle. Moreover, the methods described herein can be combined in whole or in part with each other or with other methods. The various techniques described herein may be implemented in the context of computer-executable instructions or software, such as program modules, that are stored in computer-readable storage and executed by the processor(s) of one or more computing devices such as those illustrated in the figures. Generally, program modules include routines, programs, objects, components, data structures, etc., and define operating logic for performing particular tasks or implement particular abstract data types. Other architectures may be used to implement the described functionality and are intended to be within the scope of this disclosure. Furthermore, although specific distributions of responsibilities are defined above for purposes of discussion, the various functions and responsibilities might be distributed and divided in different ways, depending on circumstances. 
Similarly, software may be stored and distributed in various ways and using different means, and the particular software storage and execution configurations described above may be varied in many different ways. Thus, software implementing the techniques described above may be distributed on various types of computer-readable media, not limited to the forms of memory that are specifically described. Example Clauses Any of the example clauses in this section may be used with any other of the example clauses and/or any of the other examples or embodiments described herein.A: A system comprising: one or more processors; and one or more non-transitory computer-readable media storing instructions executable by the one or more processors, wherein the instructions, when executed, cause the system to perform operations comprising: receiving sensor data from a sensor associated with a vehicle in an environment; determining, based at least in part on the sensor data, a first representation of a first object in the environment and a second representation of a second object in the environment; applying, by a model, a size heuristic, a distance heuristic, and a joining point heuristic to the first representation and the second representation; determining, by the model and based at least in part on the applying, that the first object and the second object are joined in the environment; classifying the first object and the second object as an articulated object; and controlling the vehicle in the environment relative to the articulated object.B: The system of paragraph A, wherein: the first representation or the second representation includes a top-down representation, and the size heuristic, the distance heuristic, or the joining point heuristic applied by the model comprises a mathematical algorithm.C: The system of paragraph A or B, wherein applying the size heuristic to the first representation and the second representation comprises: determining a first size of the first representation and a second size of the second representation; and comparing the first size or the second size to a size threshold, wherein determining that the first representation of the first object and the second representation of a second object are joined is based at least in part on the first size or the second size meeting or exceeding the size threshold.D: The system of any of paragraphs A-C, wherein applying the distance heuristic to the first representation and the second representation comprises: determining a distance between a first point of the first representation and a second point of the second representation; and comparing the distance to a distance threshold, wherein determining that the first representation of the first object and the second representation of a second object are joined is based at least in part on the distance being less than the distance threshold.E: The system of any of paragraphs A-D, the operations further comprising: determining a predicted position of the articulated object at a future time, wherein controlling the vehicle in the environment relative to the articulated object comprises determining a planned trajectory for the vehicle based at least in part on the predicted position of the articulated object.F: A method comprising: receiving sensor data from a sensor associated with a vehicle in an environment; determining, based at least in part on the sensor data, a first representation of a first object in the environment and a second representation of a second object in the environment; 
applying, by a model, one or more heuristics to the first representation and the second representation; and joining, by the model and based at least in part on the applying, the first object and the second object as an articulated object.G: The method of paragraph F, wherein the model is a first model, and further comprising: determining, by a second model and based at least in part on the sensor data, data comprising a top-down representation of an environment; inputting the data into the first model; and controlling the vehicle in the environment relative to the articulated object.H: The method of paragraph F or G, wherein the one or more heuristics applied by the model comprises one or more of: a physical heuristic, a physics algorithm, or a linear algebra algorithm.I: The method of any of paragraphs F-H, wherein applying the one or more heuristics to the first representation and the second representation comprises: determining a first size of the first representation and a second size of the second representation; and comparing the first size or the second size to a size threshold, wherein joining the first representation of the first object and the second representation of a second object is based at least in part on the first size or the second size meeting or exceeding the size threshold.J: The method of any of paragraphs F-I, wherein applying the one or more heuristics to the first representation and the second representation comprises: determining a distance between a first point of the first representation and a second point of the second representation; and comparing the distance to a distance threshold, wherein joining the first representation of the first object and the second representation of a second object is based at least in part on the distance being less than the distance threshold.K: The method of any of paragraphs F-J, further comprising: determining a predicted position of the articulated object at a future time, wherein controlling the vehicle in the environment relative to the articulated object comprises determining a planned trajectory for the vehicle based at least in part on the predicted position of the articulated object.L: The method of any of paragraphs F-K, wherein applying the one or more heuristics to the first representation and the second representation comprises: performing at least one of: determining that the first representation and the second representation overlap; or determining that a first midline of the first representation intersects with a second midline of the second representation, wherein joining the first representation of the first object and the second representation of a second object is based at least in part on the first representation and the second representation overlapping or the first midline and the second midline intersecting.M: The method of any of paragraphs F-L, wherein the joining is associated with a first time, and further comprising: receiving additional sensor data from the sensor at a second time after the first time; applying, by the model, the one or more heuristics to the first representation and the second representation of the articulated object at the second time; and disjoining, based at least in part on the applying at the second time, the first object and the second object as the articulated object.N: The method of any of paragraphs F-M, wherein: the first representation of the first object is a first shape having a first boundary, the second representation of the second object is a second shape having a 
second boundary, and the first shape or the second shape includes two dimensions or three-dimensions.O: The method of any of paragraphs F-N, wherein joining the first representation of the first object and the second representation of the second object as the articulated object is further based at least in part on a control policy comprising information identifying a right of way or a rule of an intersection associated with the first object and the second object in the environment.P: The method of any of paragraphs F-O, further comprising: determining a first object type of the first object and a second object type of the second object, the first object type or the second object type including at least one of: a car, a truck, a trailer, or a boat; and comparing, as a comparison, the first object type and the second object type, wherein joining the first representation of the first object and the second representation of the second object as the articulated object is further based at least in part on the comparison.Q: One or more non-transitory computer-readable media storing instructions executable by one or more processors, wherein the instructions, when executed, cause the one or more processors to perform operations comprising: receiving sensor data from a sensor associated with a vehicle in an environment; determining, based at least in part on the sensor data, a first representation of a first object in the environment and a second representation of a second object in the environment; applying, by a model, one or more heuristics to the first representation and the second representation; and joining, by the model and based at least in part on the applying, the first object and the second object as an articulated object.R: The one or more non-transitory computer-readable media of paragraph Q, wherein the one or more heuristics applied by the model comprises one or more of: a physical heuristic, a physics algorithm, or a linear algebra algorithm.S: The one or more non-transitory computer-readable media of paragraph Q or R, wherein applying the one or more heuristics to the first representation and the second representation comprises: determining a first size of the first representation and a second size of the second representation; and comparing the first size or the second size to a size threshold, wherein joining the first representation of the first object and the second representation of a second object is based at least in part on the first size or the second size meeting or exceeding the size threshold.T: The one or more non-transitory computer-readable media of any of paragraphs Q-S, wherein applying the one or more heuristics to the first representation and the second representation comprises: determining a distance between a first point of the first representation and a second point of the second representation; and comparing the distance to a distance threshold, wherein joining the first representation of the first object and the second representation of a second object is based at least in part on the distance being less than the distance threshold.U: A system comprising: one or more processors; and one or more non-transitory computer-readable media storing instructions executable by the one or more processors, wherein the instructions, when executed, cause the system to perform operations comprising: receiving sensor data from a sensor associated with a vehicle in an environment; determining, based at least in part on the sensor data, presence of an articulated object in the 
environment, the articulated object including a first portion and a second portion; inputting, into a model, state data associated with the first portion of the articulated object at a first time; determining, by the model and based at least in part on the state data, a covariant relationship between the first portion and the second portion of the articulated object; receiving, as an output from the model and based at least in part on the covariant relationship, a predicted state of the second portion of the articulated object at a second time after the first time; and controlling the vehicle in the environment based at least in part on the predicted state of the articulated object.V: The system of paragraph U, the operations further comprising: applying, by the model, a Kalman filter algorithm to the state data to determine the covariant relationship between the first portion and the second portion, wherein the output by the model is based at least in part on the Kalman filter algorithm.W: The system of paragraph V, wherein the Kalman filter algorithm is a derivative free Kalman filter algorithm.X: The system of any of paragraphs U-W, wherein the state data is associated with at least one of the first portion or the second portion and comprises one or more of: position data, orientation data, heading data, velocity data, speed data, acceleration data, yaw data, yaw rate data, distance data indicating a distance from an edge of the first portion or the second portion to an intersection point between the first portion and the second portion, or turning rate data associated with the articulated object.Y: The system of any of paragraphs U-X, wherein: the first portion is a front portion of the articulated object relative to a direction of travel, the second portion is a rear portion of the articulated object relative to the direction of travel, the predicted state includes position data, yaw data, or velocity data, and the output from the model identifies a covariance between a first point in the first portion and a second point in the second portion.Z: A method comprising: detecting an articulated object in an environment, the articulated object including a first portion and a second portion; inputting state data associated with the first portion of the articulated object into a model that defines a relationship between the first portion and the second portion of the articulated object; receiving, as an output from the model and based at least in part on the relationship, a predicted state of the second portion of the articulated object at a future time; and controlling a vehicle in the environment based at least in part on predicted state of the articulated object.AA: The method of paragraph Z, further comprising: applying, by the model, a filtering algorithm to the state data to determine the relationship between the first portion and the second portion, wherein the output by the model is based at least in part on the filtering algorithm.AB: The method of paragraph AA, wherein the filtering algorithm is an derivative free Kalman filter algorithm.AC: The method of any of paragraphs Z-AB, wherein the state data is associated with at least one of the first portion or the second portion and comprises one or more of: position data, orientation data, heading data, velocity data, speed data, acceleration data, yaw data, yaw rate data, distance data indicating a distance from an edge of the first portion or the second portion to an intersection point between the first portion and the second 
portion, or turning rate data associated with the articulated object.AD: The method of any of paragraphs Z-AC, wherein: the first portion is a front portion of the articulated object relative to a direction of travel, the second portion is a rear portion of the articulated object relative to the direction of travel, the predicted state includes position data, yaw data, or velocity data, and the model identifies a covariance between a first point in the first portion and a second point in the second portion.AE: The method of any of paragraphs Z-AD, further comprising: receiving sensor data from one or more sensors associated with the vehicle in the environment; and updating, based at least in part on the sensor data, the relationship between the first portion and the second portion of the articulated object.AF: The method of any of paragraphs Z-AE, further comprising: determining an offset value between a first distance, a first velocity, or a first yaw associated with the first portion and a second distance, a second velocity, or a second yaw associated with the second portion of the articulated object, wherein the output from the model identifying the predicted state of the first portion and the second portion is based at least in part on the offset value.AG: The method of paragraph AF, wherein the relationship comprises a velocity covariance, a yaw covariance, or a distance covariance between the first portion and the second portion.AH: The method of any of paragraphs Z-AG, further comprising: determining a first velocity of the first portion or a second velocity of the second portion, wherein the output from the model identifying the predicted state of the second portion is based at least in part on the first velocity or the second velocity.AI: The method of any of paragraphs Z-AH, further comprising: determining a direction of travel of the articulated object; determining, based at least in part on the direction of travel, the first portion or the second portion as a front portion, wherein the output from the model identifying the predicted state of the front portion.AJ: The method of any of paragraphs Z-AI, further comprising: receiving first sensor data from a first sensor and second sensor data from a second sensor different from the first sensor, the first sensor and the second sensor associated with the vehicle in the environment; and determining a joint point between the first portion and the second portion based at least in part on the first sensor data and the second sensor data, wherein the output from the model identifying the predicted state of the articulated object is based at least in part on the joint point.AK: One or more non-transitory computer-readable media storing instructions executable by one or more processors, wherein the instructions, when executed, cause the one or more processors to perform operations comprising: detecting an articulated object in an environment, the articulated object including a first portion and a second portion; inputting state data associated with the articulated object into a model; determining, by the model and based at least in part on the state data, a relationship between the first portion and the second portion of the articulated object; receiving, as an output from the model and based at least in part on the relationship, a predicted state of the first portion and the second portion of the articulated object at a future time; and controlling a vehicle in the environment based at least in part on predicted state of the articulated 
object.AL: The one or more non-transitory computer-readable media of paragraph AK, the operations further comprising: applying, by the model, a filtering algorithm to the state data to determine the relationship between the first portion and the second portion, wherein the output by the model is based at least in part on the filtering algorithm.AM: The one or more non-transitory computer-readable media of paragraph AL, wherein the filtering algorithm is an unscented Kalman filter algorithm.AN: The one or more non-transitory computer-readable media of any of paragraphs AK-AM, wherein the state data is associated with at least one of the first portion or the second portion and comprises one or more of: position data, orientation data, heading data, velocity data, speed data, acceleration data, yaw rate data, or turning rate data associated with the articulated object. While the example clauses described above are described with respect to one particular implementation, it should be understood that, in the context of this document, the content of the example clauses can also be implemented via a method, device, system, computer-readable medium, and/or another implementation. Additionally, any of examples A-AN may be implemented alone or in combination with any other one or more of the examples A-AN. CONCLUSION While one or more examples of the techniques described herein have been described, various alterations, additions, permutations and equivalents thereof are included within the scope of the techniques described herein. In the description of examples, reference is made to the accompanying drawings that form a part hereof, which show by way of illustration specific examples of the claimed subject matter. It is to be understood that other examples can be used and that changes or alterations, such as structural changes, can be made. Such examples, changes or alterations are not necessarily departures from the scope with respect to the intended claimed subject matter. While the steps herein can be presented in a certain order, in some cases the ordering can be changed so that certain inputs are provided at different times or in a different order without changing the function of the systems and methods described. The disclosed procedures could also be executed in different orders. Additionally, various computations described herein need not be performed in the order disclosed, and other examples using alternative orderings of the computations could be readily implemented. In addition to being reordered, the computations could also be decomposed into sub-computations with the same results. | 127,089 |
11858530 | DESCRIPTION OF EMBODIMENT A management device according to one aspect of the present disclosure includes: a circuit; and at least one memory, wherein the circuit, in operation, in a case where a vehicle is to drive from a predetermined position to a target spot via a plurality of areas, generates driving data of each of the plurality of areas, the driving data indicating a driving path in the area and causing the vehicle to autonomously drive along the driving path; while the vehicle is driving in each of the plurality of areas, transmits driving data of an area next to the area to the vehicle via a base station of the area; and when driving data of a second area that is next to a first area that is one of the plurality of areas is not transmitted to the vehicle via a base station of the first area while the vehicle is autonomously driving in the first area, after the vehicle has driven back to a previous area where the vehicle was driving before driving into the first area, transmits driving data of a different area that is located at a position from the previous area to the target spot to the vehicle via a base station of the previous area. With this configuration, for example, even if a failure occurs in the base station of the first area and the vehicle cannot receive the driving data of the second area, the vehicle can receive the driving data of the different area by driving back to the previous area. Accordingly, the vehicle can autonomously drive from the previous area to the target spot via the different area. That is, even if a failure or the like occurs in the base station, it is possible to cause the vehicle to appropriately drive to the target spot. In the device disclosed in PTL 1 described above, for example, if a failure occurs in a base station, the vehicle cannot receive the data needed by the vehicle to drive through a coverage area next to the coverage area of the base station. As a result, a problem arises in that the vehicle cannot arrive at a parking space that is the target spot. On the other hand, with the management device according to one aspect of the present disclosure, it is possible to cause the vehicle to appropriately drive to the target spot. Also, the circuit may transmit the driving data of the second area as the driving data of the different area. With this configuration, the vehicle can autonomously drive from the previous area to the target spot via the first area and the second area. Also, when there are two routes: a first route that is a route from the previous area to the target spot via the first area and the second area; and a second route that is a route from the previous area to the target spot via a third area that is different from the first area and the second area, the circuit may transmit driving data of the third area as the driving data of the different area. With this configuration, the vehicle can autonomously drive from the previous area to the target spot along the second route. Also, when the driving data of the second area is not transmitted to the vehicle while the vehicle is driving in the first area, and a following vehicle behind the vehicle is to drive in the first area and the second area via the previous area, the circuit may transmit driving data of the first area and the driving data of the second area to the following vehicle via the base station of the previous area while the following vehicle is driving in the previous area. 
With this configuration, the following vehicle has already received the driving data of the second area when the following vehicle drives in the first area, and thus even if a failure occurs in the base station of the first area, the following vehicle can autonomously drive to the target spot via the first area and the second area without driving back to the previous area. Also, when the driving data of the second area is not transmitted to the vehicle while the vehicle is driving in the first area, the circuit may further generate, for a following vehicle behind the vehicle, driving data of at least one area that is located on a route from the predetermined position to the target spot without passing through the first area. With this configuration, if a failure occurs in the base station of the first area, the following vehicle can autonomously drive to the target spot without driving through the first area. Also, a vehicle control device according to one aspect of the present disclosure is a vehicle control device that is mounted on a vehicle, the vehicle control device including: a circuit; and at least one memory, wherein the circuit, in operation, in a case where the vehicle is to drive from a predetermined position to a target spot via a plurality of areas, while the vehicle is driving in each of the plurality of areas, receives driving data that indicates a driving path in an area next to the area and is transmitted from a base station of the area; causes the vehicle to autonomously drive in each of the plurality of areas in accordance with the driving data of the area; when driving data of a second area that is next to a first area that is one of the plurality of areas is not received while the vehicle is autonomously driving in the first area, causes the vehicle to drive back to a previous area where the vehicle was driving before driving into the first area; and after the vehicle has driven back to the previous area, receives driving data of a different area that is located at a position from the previous area to the target spot, the driving data being transmitted from a base station of the previous area. With this configuration, for example, if a failure occurs in the base station of the first area and the circuit does not receive the driving data of the second area, the vehicle is caused to drive back to the previous area, and thus the circuit can receive the driving data of the different area. Accordingly, the vehicle can autonomously drive from the previous area to the target spot via the different area. That is, even if a failure or the like occurs, it is possible to cause the vehicle to appropriately drive to the target spot. Also, the circuit may receive the driving data of the second area as the driving data of the different area. With this configuration, the vehicle can autonomously drive from the previous area to the target spot via the first area and the second area. Also, when there are two routes: a first route that is a route from the previous area to the target spot via the first area and the second area; and a second route that is a route from the previous area to the target spot via a third area that is different from the first area and the second area, the circuit may receive driving data of the third area as the driving data of the different area. With this configuration, the vehicle can autonomously drive from the previous area to the target spot along the second route. 
General and specific aspects disclosed above may be implemented using a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or may be implemented using any combination of systems, methods, integrated circuits, computer programs, or computer-readable recording media. Hereinafter, an embodiment will be described specifically with reference to the drawings. The embodiment described below shows a generic and specific example of the present disclosure. The numerical values, shapes, materials, structural elements, the arrangement and connection of the structural elements, steps, the order of the steps, and the like shown in the following embodiment are merely examples, and therefore are not intended to limit the scope of the present disclosure. Also, among the structural elements described in the following embodiment, structural elements not recited in any one of the independent claims are described as arbitrary structural elements. In addition, the diagrams are schematic representations, and thus are not necessarily true to scale. Also, in the diagrams, structural elements that are the same are given the same reference numerals. Embodiment <System Configuration> FIG.1is a diagram showing an example of a configuration of a vehicle driving management system according to an embodiment. Vehicle driving management system100according to the present embodiment is a system that controls the autonomous driving of vehicle V in an automated valet parking environment, the system being configured to cause vehicle V to autonomously drive from a parking facility entrance to a parking space that is the target spot and then autonomously park in the parking space. Furthermore, vehicle driving management system100causes vehicle V that is parking in the parking space to autonomously drive to the parking facility entrance. Vehicle driving management system100described above includes a plurality of base stations, management device10, and a vehicle control device that is mounted on vehicle V. A parking facility includes an entrance and a plurality of areas. For example, the plurality of base stations are disposed in the plurality of areas in one to one correspondence. Each of the plurality of base stations transmits driving data to vehicle V that is driving in an area that is covered by the base station. In the present embodiment, the plurality of areas are distinguished from each other by being represented by area (1), area (2), . . . , area (n−1), area (n), and area (n+1). Variable n in the parentheses is an integer of 1 or more. Likewise, the plurality of base stations are also distinguished from each other by being represented by base station (1), base station (2), . . . , base station (n−1), base station (n), and base station (n+1). The value and variable n in the parentheses show an association between an area and a base station. For example, base station (1) is associated with area (1) covered by base station (1), and transmits driving data to vehicle V that is driving in area (1). The same applies to other base stations. Also, the behaviors of vehicle V such as driving and parking in the parking facility are autonomous behaviors. As used herein, the term “autonomous behaviors” mean that vehicle V moves, drives or parks automatically, or in other words, without an operation of the driver. Hereinafter, autonomous driving and autonomous parking in the parking facility may also be referred to simply as driving and parking, respectively. 
Management device10according to the present embodiment generates driving data of each area to cause vehicle V that has arrived at the parking facility entrance to autonomously drive to a parking space that is the target spot in the parking facility. For example, in the case where the parking space that is the target spot is located in area (n+1), management device10generates driving data (1) of area (1), driving data (2) of area (2), . . . , driving data (n−1) of area (n−1), driving data (n) of area (n), and driving data (n+1) of area (n+1). These driving data items are data items each indicating a driving path in the corresponding area of the driving data item. At the parking facility entrance, vehicle V receives driving data (1) of area (1) transmitted from management device10via a base station disposed at the entrance, and starts driving autonomously in area (1) in accordance with driving data (1). Vehicle V receives driving data (2) of area (2) transmitted from management device10via base station (1) that covers area (1) while driving in area (1). That is, vehicle V receives the driving data of an area next to the area in which vehicle V is driving, before driving into the next area. Accordingly, when vehicle V has driven through area (1), vehicle V immediately starts driving in area (2) in accordance with driving data (2). In the same manner as described above, while vehicle V is driving in area (2), vehicle V receives driving data (3) of area (3) transmitted from management device10via base station (2) that covers area (2). By repeatedly receiving driving data and autonomously driving in accordance with the received driving data, vehicle V can arrive at and park in the parking space that is the target spot. FIG.2is a diagram showing an example of an area included in the parking facility. For example, area (n) includes a plurality of parking spaces. In parking spaces determined by management device10, vehicles are already parking. Vehicle V that is autonomously driving receives driving data (n) of area (n) transmitted from base station (n−1) while driving in area (n−1) that is an area before driving into area (n). Accordingly, vehicle V drives in accordance with the driving path indicated by driving data (n). Also, vehicle V receives driving data (n+1) of area (n+1) transmitted from base station (n) that covers area (n) while driving in area (n). FIG.3is a block diagram showing a configuration example of management device10and vehicle control device20included in vehicle driving management system100according to the present embodiment. Management device10includes driving data generator11, communication controller12, management communicator13, and management storage15. Driving data generator11generates driving data for each area. Specifically, in the case where vehicle V is to drive from a predetermined position to a target spot via a plurality of areas, driving data generator11generates driving data of each of the plurality of areas. The generated driving data is data that indicates a driving path in the area and causes vehicle V to autonomously drive along the driving path. In the present embodiment, the predetermined position is the parking facility entrance. Then, driving data generator11stores each generated driving data in management storage15. Management storage15is a recording medium for storing the driving data generated by driving data generator11. 
Management storage 15 may be, for example, a hard disk drive, a RAM (Random Access Memory), a ROM (Read Only Memory), a semiconductor memory, or the like. Management storage 15 may be volatile or non-volatile. Management communicator 13 transmits the above-described driving data to vehicle V via a base station. The communication between management communicator 13 and each base station may be performed by using wired communication or wireless communication. Also, the communication between vehicle V and each base station is performed by using wireless communication. The wireless communication scheme may be Wi-Fi (registered trademark), Bluetooth (registered trademark), ZigBee, a specified low-power radio communication scheme, or any other communication scheme. While vehicle V is driving in each of the plurality of areas, communication controller 12 causes management communicator 13 to transmit the driving data of an area next to the area to vehicle V via the base station of the area. For example, while vehicle V is driving in area (n−1), communication controller 12 reads driving data (n) of area (n) from management storage 15. Then, communication controller 12 causes management communicator 13 to transmit driving data (n) of area (n) to vehicle V via base station (n−1) that covers area (n−1). Vehicle control device 20 is a device mounted on vehicle V that includes driver 30, and includes surroundings monitor 21, driving controller 22, vehicle communicator 23, and vehicle storage 25. Driver 30 includes at least one actuator for driving or steering the wheels of vehicle V, and the at least one actuator may be, for example, a motor, an engine, or the like. Surroundings monitor 21 is, for example, a camera or a sensor such as an ultrasonic sensor, and monitors the state of the surroundings to detect an obstacle or the like in the surroundings of vehicle V. Vehicle communicator 23 receives the driving data transmitted from management communicator 13 of management device 10 via a base station. Specifically, in the case where vehicle V drives from the parking facility entrance to the target spot via a plurality of areas, while vehicle V is driving in each of the plurality of areas, vehicle communicator 23 receives driving data that indicates a driving path in an area next to the area and is transmitted from the base station of the area. When vehicle communicator 23 receives the driving data, vehicle communicator 23 stores the driving data in vehicle storage 25. For example, while vehicle V is driving in area (n−1), vehicle communicator 23 receives driving data (n) of area (n) transmitted from base station (n−1) of area (n−1). Vehicle storage 25 is a recording medium for storing the driving data received by vehicle communicator 23. Vehicle storage 25 may be, for example, a hard disk drive, a RAM (Random Access Memory), a ROM (Read Only Memory), a semiconductor memory, or the like. Vehicle storage 25 may be volatile or non-volatile. Driving controller 22 controls driver 30 to cause vehicle V to autonomously drive in each of the plurality of areas in accordance with the driving data of the area. That is, driving controller 22 controls driver 30 such that vehicle V drives along the driving path indicated by the driving data. At this time, driving controller 22 causes vehicle V to autonomously drive while preventing vehicle V from coming into contact with an obstacle or the like detected by surroundings monitor 21.
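As a minimal sketch of the per-area handoff described above (and not of the actual internal data structures of management device 10 or vehicle control device 20), the following assumed classes show driving data being generated for each area, served one area ahead, and stored on the vehicle side. All class and method names are hypothetical, and sequential integer area identifiers are assumed, matching area (1) through area (n+1) above.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class DrivingData:
    """Driving data of one area: the driving path the vehicle follows in that area."""
    area_id: int
    path: List[Tuple[float, float]]   # assumed waypoint list inside the area

@dataclass
class ManagementSide:
    """Stands in for driving data generator 11, management storage 15, and communicator 13."""
    storage: Dict[int, DrivingData] = field(default_factory=dict)

    def generate_route(self, paths_by_area: Dict[int, List[Tuple[float, float]]]) -> None:
        for area_id, path in paths_by_area.items():
            self.storage[area_id] = DrivingData(area_id, path)

    def next_area_data(self, current_area_id: int) -> Optional[DrivingData]:
        """Driving data of the area next to the area the vehicle is driving in."""
        return self.storage.get(current_area_id + 1)

@dataclass
class VehicleSide:
    """Stands in for vehicle communicator 23, vehicle storage 25, and driving controller 22."""
    received: Dict[int, DrivingData] = field(default_factory=dict)

    def request_next(self, current_area_id: int, management: ManagementSide) -> bool:
        """Request the next area's data while driving in the current area; False if not received."""
        data = management.next_area_data(current_area_id)
        if data is None:          # e.g., a failure in the current area's base station
            return False
        self.received[data.area_id] = data
        return True
```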
For example, when vehicle V autonomously drives from area (n−2) to area (n−1), at this time, driving controller22of vehicle control device20detects the entry of vehicle V into area (n−1) based on the result of monitoring performed by surroundings monitor21and driving data (n−1) of area (n−1). Then, driving controller22controls driver30to cause vehicle V to start autonomous driving in area (n−1) in accordance with driving data (n−1). That is, driving controller22causes vehicle V to start driving along the driving path indicated by driving data (n−1). Furthermore, at this time, driving controller22causes vehicle communicator23to transmit a request signal for requesting driving data. The request signal is a signal for requesting the driving data of an area next to area (n−1), and is transmitted to management device10via base station (n−1) of area (n−1). Management communicator13of management device10receives the request signal from vehicle V via base station (n−1). When the request signal is received, communication controller12reads driving data (n) of area (n) that is the area next to area (n−1) from management storage15. Then, communication controller12causes management communicator13to transmit driving data (n) to vehicle V via base station (n−1). Vehicle communicator23of vehicle control device20receives driving data (n) transmitted from management device10via base station (n−1) while vehicle V is autonomously driving in area (n−1). When driving data (n) is received, driving controller22stores driving data (n) in vehicle storage25. Next, vehicle V drives into area (n) from area (n−1). At this time as well, in the same manner as described above, driving controller22of vehicle control device20detects the entry of vehicle V into area (n) based on the result of monitoring performed by surroundings monitor21and driving data (n) of area (n). Then, driving controller22controls driver30to cause vehicle V to autonomously drive in area (n) in accordance with driving data (n). That is, driving controller22causes vehicle V to start driving along the driving path indicated by driving data (n). Furthermore, at this time as well, driving controller22causes vehicle communicator23to transmit a request signal for requesting driving data. The request signal is a signal for requesting the driving data of an area next to area (n), and is transmitted to management device10via base station (n) of area (n). Management communicator13of management device10receives the request signal from vehicle V via base station (n). When the request signal is received, communication controller12reads driving data (n+1) of area (n+1) that is the area next to area (n) from management storage15. Then, communication controller12causes management communicator13to transmit driving data (n+1) to vehicle V via base station (n). However, if a failure occurs in base station (n), driving data (n+1) is not transmitted from base station (n) to vehicle V. In this case, because driving data (n+1) is not received by vehicle communicator23, driving controller22of vehicle control device20according to the present embodiment determines whether a failure has occurred in base station (n). Then, if it is determined that a failure has occurred in base station (n), driving controller22controls driver30to cause vehicle V to drive, for example, in an opposite direction along the driving path indicated by driving data (n). As a result, vehicle V drives back to area (n−1). 
At this time, driving controller 22 detects, based on the result of monitoring performed by surroundings monitor 21 and driving data (n−1) of area (n−1), that vehicle V has driven back to area (n−1). Then, driving controller 22 causes vehicle communicator 23 to transmit a request signal. The request signal is a signal for requesting the driving data of an area next to area (n), and is transmitted to management device 10 via base station (n−1) of area (n−1). Management communicator 13 of management device 10 receives the request signal from vehicle V via base station (n−1). When the request signal is received, communication controller 12 reads driving data (n+1) of area (n+1) that is an area next to area (n) from management storage 15. Then, communication controller 12 causes management communicator 13 to transmit driving data (n+1) to vehicle V via base station (n−1). Vehicle communicator 23 of vehicle control device 20 receives driving data (n+1) transmitted from management device 10 via base station (n−1) while vehicle V is driving back to area (n−1). When driving data (n+1) is received, driving controller 22 stores driving data (n+1) in vehicle storage 25. As a result, driving data (n) and driving data (n+1) are stored in vehicle storage 25. Then, driving controller 22 controls driver 30 to cause vehicle V to drive into area (n) from area (n−1), and also controls driver 30 to cause vehicle V to drive in accordance with driving data (n) and driving data (n+1). With this configuration, vehicle V drives from area (n−1) to area (n) and further drives to area (n+1). Then, vehicle V parks in a parking space that is the target spot. As described above, there is a case where the driving data of a second area that is next to a first area that is one of the plurality of areas is not transmitted from management communicator 13 of management device 10 to vehicle V via a base station that covers the first area while vehicle V is autonomously driving in the first area. In other words, there is a case where the driving data of the second area that is next to the first area is not received by vehicle communicator 23 of vehicle control device 20 while vehicle V is driving in the first area that is one of the plurality of areas. For example, there is a case where the driving data of the second area is not transmitted to vehicle V due to the occurrence of a failure in the base station of the first area, and thus the driving data is not received by vehicle communicator 23 of vehicle V. In the example shown in FIG. 4, area (n) corresponds to the first area, and area (n+1) corresponds to the second area. To address this, driving controller 22 of vehicle control device 20 controls driver 30 to cause vehicle V to drive back to a previous area where vehicle V was driving before driving into the first area. Then, when vehicle V has moved back to the previous area where vehicle V was driving before driving into the first area, communication controller 12 of management device 10 causes management communicator 13 to transmit the driving data of a different area that is located at a position from the previous area to the target spot to vehicle V via a base station that covers the previous area. As a result, when vehicle V has moved back to the previous area, vehicle communicator 23 of vehicle control device 20 receives the driving data of the different area that is located at a position from the previous area to the target spot, the driving data being transmitted from the base station of the previous area. In the example shown in FIG. 4, area (n−1) corresponds to the previous area.
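The fallback behaviour just described can be summarized in the following sketch, which builds on the hypothetical VehicleSide and ManagementSide types from the earlier sketch. The driving-related methods (drive_area, drive_back_to, park) are additional assumed stubs, the request that would happen while driving in an area is serialized here for clarity, and it is assumed that the failure does not occur in the first area after the entrance. This is a simplification of the FIG. 4 sequence, not the actual control logic of driving controller 22.

```python
def drive_to_target(vehicle, management, area_ids):
    """Drive through area_ids in order toward the target area.  While driving in each
    area, request the next area's driving data; if it is not received (e.g., a base
    station failure), drive back one area, receive it there, then continue."""
    i = 0
    while i < len(area_ids):
        current = area_ids[i]
        vehicle.drive_area(current)                 # enter the area and follow its stored driving path
        if i + 1 == len(area_ids):
            vehicle.park()                          # target area reached: park in the parking space
            return
        if vehicle.request_next(current, management):
            i += 1                                  # normal case: next area's data is now stored
            continue
        # Reception failure: drive back to the previous area, receive the data there,
        # then re-enter the area where reception failed and keep going.
        vehicle.drive_back_to(area_ids[i - 1])
        vehicle.request_next(current, management)   # now served via the previous area's base station
        vehicle.drive_area(current)
        i += 1
```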
In the present embodiment, communication controller12of management device10causes management communicator13to transmit the driving data of the second area as the driving data of the different area. Then, vehicle communicator23of vehicle control device20receives the driving data of the second area as the driving data of the different area. In the example shown inFIG.4, area (n+1) corresponds to the different area and the second area. With this configuration, even if a failure occurs in base station (n) of area (n) and vehicle V cannot receive driving data (n+1) of area (n+1) from base station (n), as a result of vehicle V driving back to area (n−1), vehicle V can receive driving data (n+1) of area (n+1). Accordingly, vehicle V can autonomously drive from area (n−1) to the target spot via area (n) and area (n+1). That is, even if a failure or the like occurs, it is possible to cause vehicle V to appropriately drive to the target spot. FIG.5is a flowchart showing an example of an overall processing operation performed by vehicle control device20. When vehicle V arrives at the parking facility entrance, driving controller22of vehicle control device20first initializes variable n to 1 (step S1). Then, vehicle communicator23receives driving data (n) of area (n) at the parking facility entrance. For example, vehicle communicator23receives driving data (n) from management device10via the base station disposed at the entrance. When driving data (n) has been received, driving controller22controls driver30to cause vehicle V to start autonomous driving (step S2). At this time, driving controller22may cause vehicle V to start autonomous driving when a driving start signal transmitted from management device10is received by vehicle communicator23. The driving start signal is a signal that instructs vehicle V to start autonomous driving. Next, driving controller22causes vehicle communicator23to start receiving driving data (n+1) of area (n+1) from base station (n) while vehicle V is autonomously driving in area (n) (step S3). At this time, driving controller22may cause vehicle communicator23to receive driving data (n+1) by causing vehicle communicator23to transmit a request signal as described above to management device10. Here, driving controller22determines whether it is possible to receive driving data (n+1) (step S4). If, for example, driving data (n+1) is not received by vehicle communicator23within a predetermined period of time, or it is not possible to receive the entire driving data (n+1), driving controller22determines that it is not possible to receive driving data (n+1). If it is determined that it is not possible to receive driving data (n+1) (No in step S4), driving controller22executes reception failure processing (step S100). On the other hand, if it is determined that it is possible to receive driving data (n+1) (Yes in step S4), driving controller22controls driver30and vehicle communicator23to complete the driving of vehicle V in area (n) and the reception of driving data (n+1) of area (n+1) (step S5). Driving controller22may, when the reception of driving data (n+1) is completed, cause vehicle communicator23to transmit a reception completion signal that indicates that driving data (n+1) has been received to management device10via base station (n). Then, driving controller22increments variable n (step S6). Next, after the processing in step S6or step S100, driving controller22determines whether the driving data of the target area has been received (step S7). 
As used herein, the term “target area” refers to an area in which the parking space that is the target spot is located. That is, driving controller22determines whether driving data (n+1) received in step S5is the driving data of the target area. If it is determined that the driving data of the target area has not been received (No in step S7), driving controller22repeatedly performs the processing from step S3. On the other hand, if it is determined that the driving data of the target area has been received (Yes in step S7), driving controller22controls driver30to cause vehicle V to drive to the parking space in the target area and park in the parking space (step S8). FIG.6is a flowchart showing an example of reception failure processing performed by vehicle control device20. That is,FIG.6is a flowchart showing a detailed processing operation performed in step S100shown inFIG.5. First, driving controller22controls driver30to cause vehicle V to temporarily stop (step S101). Then, driving controller22determines whether the cause of not receiving driving data (n+1) is a failure in vehicle V, or in other words, whether a failure has occurred in vehicle communicator23of vehicle V (step S102). For example, driving controller22compares a reception result of another communication means included in vehicle control device20with a reception result of vehicle communicator23. If the reception results do not match, driving controller22determines that a failure has occurred in vehicle communicator23. Here, if it is determined that a failure has occurred in vehicle communicator23and not in base station (n) (Yes in step S102), driving controller22controls driver30to cause vehicle V to autonomously drive by using driving data (1) to driving data (n) that are stored in vehicle storage25. Then, driving controller22causes vehicle V to temporarily drive into and stay at a vacant space in any one of area (1) to area (n). Alternatively, driving controller22causes vehicle V to drive back to the parking facility entrance and temporarily stay at the entrance (step S109). At this time, management device10can recognize the failure in vehicle V by not receiving a communication response from vehicle V, for example, by not receiving a reception completion signal as described above from vehicle V. Also, in the example described above, driving controller22causes vehicle V to temporarily drive into and stay at a vacant space or the like when it is determined that a failure has occurred in vehicle communicator23. However, even if it is determined that a failure has not occurred in vehicle V, driving controller22may cause vehicle V to temporarily drive into and stay at a vacant space or the like if a failure occurs in vehicle driving management system100. Then, after the processing in step S109, driving controller22ends the reception failure processing, and prohibits the processing in step S7and the subsequent step shown inFIG.5from being performed. As described above, in the present embodiment, if a failure occurs in the driving data reception function of vehicle control device20, vehicle V is caused to temporarily drive into and stay at a vacant space or the like, and it is therefore possible to prevent vehicle V from becoming an obstacle that interferes with the driving of other vehicles. Next, if it is determined in step S102that a failure has not occurred in vehicle communicator23of vehicle V (No in step S102), driving controller22determines that a failure has occurred in base station (n). 
Then, driving controller22transmits a notification indicating that a failure has occurred in base station (n) to management device10(step S103). For example, vehicle communicator23transmits a notification signal indicating that a failure has occurred in the driving data transmission function of base station (n) to management device10via base station (n), another base station, or another relay. Next, driving controller22controls driver30to cause vehicle V to drive back to area (n−1) and cause vehicle communicator23to receive driving data (n+1) of area (n+1) that was not received in area (n) (step S104). At this time, driving controller22may cause vehicle communicator23to start receiving driving data (n+1) by causing vehicle communicator23to transmit a request signal as described above to management device10. Then, driving controller22controls driver30to cause vehicle V to start autonomously driving from area (n−1) to area (n) and area (n+1) (step S105). Next, driving controller22causes vehicle communicator23to start receiving driving data (n+2) of area (n+2) transmitted from base station (n+1) while causing vehicle V to autonomously drive in area (n+1) (step S106). At this time as well, driving controller22may cause vehicle communicator23to start receiving driving data (n+2) by causing vehicle communicator23to transmit a request signal as described above to management device10. Then, driving controller22controls driver30and vehicle communicator23to complete the driving of vehicle V in area (n+1) and the reception of driving data (n+2) of area (n+2) (step S107). After that, driving controller22increments variable n by 2 (step S108), and ends the reception failure processing. In the example shown inFIG.6, a notification indicating that a failure has occurred in base station (n) is transmitted before vehicle V drives back to area (n−1), but may be transmitted when vehicle V has driven back to area (n−1). In this case, vehicle communicator23transmits a notification signal indicating that a failure has occurred in the driving data transmission function of base station (n) to management device10via base station (n−1). FIG.7is a flowchart showing an example of an overall processing operation performed by management device10. First, when vehicle V arrives at the parking facility entrance, communication controller12of management device10first initializes variable n to 1 (step S11). Then, driving data generator11generates driving data for each of the areas that are located from the entrance to a parking space that is the target spot (step S12). Then, at the parking facility entrance, management communicator13transmits driving data (n) of area (n) to vehicle V via the base station disposed at the entrance. When driving data (n) has been transmitted, communication controller12transmits, to vehicle control device20of vehicle V, an instruction to start autonomous driving (step S13). The instruction to start autonomous driving is transmitted as a result of, for example, management communicator13transmitting a driving start signal to vehicle control device20. Next, management communicator13starts transmitting driving data (n+1) of area (n+1) to vehicle V that is autonomously driving in area (n) in response to control performed by communication controller12(step S14). Driving data (n+1) is transmitted via base station (n). 
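Returning briefly to the vehicle side, the reception failure processing of FIG. 6 described above can be sketched in the same illustrative style. The callables stop, vehicle_fault, notify_fault, drive_back, receive_data, drive_area and hold_at_vacant_space are placeholders, not elements of the disclosure; with its vehicle interfaces bound (for example via functools.partial), the function could serve as the reception_failure_processing hook of the earlier FIG. 5 sketch.

def reception_failure_processing(n, received, *, stop, vehicle_fault, notify_fault,
                                 drive_back, receive_data, drive_area,
                                 hold_at_vacant_space):
    # Sketch of FIG. 6 (step S100 of FIG. 5).
    stop()                                   # step S101: temporarily stop the vehicle
    if vehicle_fault():                      # step S102: is the failure in vehicle communicator 23?
        hold_at_vacant_space(received)       # step S109: wait at a vacant space or at the entrance,
        return None                          #            using the stored driving data (1)..(n)
    notify_fault(n)                          # step S103: report the failure in base station (n)
    drive_back(n - 1)                        # step S104: drive back to area (n-1) ...
    received[n + 1] = receive_data(n + 1)    #            ... and receive driving data (n+1) there
    drive_area(received[n])                  # step S105: drive again from area (n-1) through area (n)
    drive_area(received[n + 1])              #            and into area (n+1)
    received[n + 2] = receive_data(n + 2)    # steps S106/S107: receive driving data (n+2) in area (n+1)
    return n + 2, received                   # step S108: advance the area index by 2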
Communication controller12may cause management communicator13to start transmitting driving data (n+1) when a request signal as described above transmitted from vehicle communicator23has been received by management communicator13. Then, communication controller12determines whether driving data (n+1) of area (n+1) has been transmitted (step S15). If, for example, a reception completion signal described above transmitted from vehicle V is received by management communicator13within a predetermined period of time from the start of transmission of driving data (n+1), communication controller12determines that driving data (n+1) of area (n+1) has been transmitted. Here, if it is determined that driving data (n+1) of area (n+1) has not been transmitted (No in step S15), communication controller12executes transmission failure processing (step S200). On the other hand, if it is determined that driving data (n+1) of area (n+1) has been transmitted (Yes in step S15), communication controller12increments variable n (step S17). Then, after the processing in step S17and step S200, communication controller12determines whether the driving data of a target area has been transmitted (step S18). The target area is an area in which the parking space that is the target spot is located. That is, communication controller12determines whether driving data (n+1) that was determined in step S15as having been transmitted is the driving data of the target area. If it is determined that the driving data of the target area has not been transmitted (No in step S18), communication controller12repeatedly executes the processing from step S14. On the other hand, if it is determined that the driving data of the target area has been transmitted (Yes in step S18), communication controller12ends the transmission of driving data to vehicle V. FIG.8is a flowchart showing an example of the transmission failure processing performed by management device10. That is,FIG.8is a flowchart showing a detailed processing operation performed in step S200shown inFIG.7. First, communication controller12causes management communicator13to transmit driving data (n+1) to vehicle V that has driven back to area (n−1) via base station (n−1) (step S202). Driving data (n+1) is the driving data of area (n+1) that was not transmitted in area (n). Communication controller12may cause management communicator13to start transmitting driving data (n+1) when a request signal as described above transmitted from vehicle communicator23is received by management communicator13. Next, communication controller12causes management communicator13to transmit driving data (n+2) of area (n+2) to vehicle V that is driving in area (n+1) via base station (n+1) (step S203). In step S203as well, communication controller12may cause management communicator13to start transmitting driving data (n+2) when a request signal as described above transmitted from vehicle communicator23is received by management communicator13. Then, communication controller12increments variable n by 2 (step S205), and ends the transmission failure processing. As described above, in the present embodiment, if a failure occurs in base station (n) of area (n), and vehicle control device20cannot receive driving data (n+1) of area (n+1) from base station (n), vehicle V is caused to drive back to area (n−1). Then, vehicle communicator23receives driving data (n+1) of area (n+1) from base station (n−1) of area (n−1). 
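On the management side, the transmission loop of FIG. 7 together with the transmission failure processing of FIG. 8 can likewise be sketched as follows. Here generate_driving_data, transmit, ack_received and send_start_signal are placeholders for driving data generator 11, management communicator 13 and communication controller 12, and the area containing the target spot is assumed, for simplicity, to carry the largest area index.

def manage_driving_data(target_area, generate_driving_data, transmit,
                        ack_received, send_start_signal):
    # Sketch of FIG. 7 (steps S11 to S18) with the failure handling of FIG. 8.
    data = {a: generate_driving_data(a) for a in range(1, target_area + 1)}   # step S12
    n = 1                                                       # step S11
    transmit(area=1, payload=data[1], via_station=0)            # base station at the entrance
    send_start_signal()                                         # step S13: instruct vehicle V to start
    while n < target_area:
        transmit(area=n + 1, payload=data[n + 1], via_station=n)        # step S14
        if ack_received(n + 1):                                 # step S15: completion signal in time?
            n += 1                                              # step S17
        else:                                                   # step S200: transmission failure processing
            transmit(area=n + 1, payload=data[n + 1], via_station=n - 1)    # step S202: vehicle is back in (n-1)
            if n + 2 <= target_area:
                transmit(area=n + 2, payload=data[n + 2], via_station=n + 1)  # step S203: while vehicle is in (n+1)
            n += 2                                              # step S205
    # step S18: the driving data of the target area has been transmitted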
Accordingly, vehicle V can autonomously drive from area (n−1) to the target spot via area (n) and area (n+1). That is, even if a failure or the like occurs, it is possible to cause vehicle V to appropriately drive to the target spot. Also, in the present embodiment, as described above, if a failure occurs in the driving data reception function of vehicle control device20, vehicle V is caused to temporarily drive into and stay at a vacant space or the like, and it is therefore possible to prevent vehicle V from becoming an obstacle that interferes with the driving of other vehicles. Also, when driving data (n+1) is not transmitted from base station (n) while vehicle V is in area (n), management device10according to the present embodiment may cause base station (n−1) of area (n−1) to transmit not only driving data (n) but also driving data (n+1). That is, if a failure occurs in base station (n), management device10causes base station (n−1) to transmit driving data (n) and driving data (n+1) to a following vehicle that is driving from area (n−1) to area (n) and area (n+1) until the failure is fixed. In other words, when the driving data of a second area is not transmitted to vehicle V while vehicle V is driving in a first area, and a following vehicle behind vehicle V is to drive in the first area and the second area via the previous area, communication controller12causes management communicator13to transmit the driving data of the first area and the driving data of the second area to the following vehicle via the base station of the previous area while the following vehicle is driving in the previous area. In the example shown inFIG.4, area (n−1), area (n), and area (n+1) respectively correspond to the previous area, the first area, and the second area. With this configuration, the following vehicle has already received the driving data of the second area when the following vehicle drives in the first area, and thus even if a failure occurs in the base station of the first area, the following vehicle can autonomously drive to the target spot via the first area and the second area without driving back to the previous area. (Variation) In the embodiment given above, when vehicle V cannot receive the driving data of the second area that is next to the first area while driving in the first area, vehicle V drives back to the previous area to receive the driving data of the second area. On the other hand, according to the present variation, vehicle V receives the driving data of a third area instead of the driving data of the second area. The third area is not an area located on the route from the previous area to the target spot via the first area and the second area, but is an area located on a different route from the previous area to the target spot. That is, in the present variation, when vehicle V cannot receive the driving data of the second area, vehicle V receives the driving data of an area located on a different route and autonomously drives to the target spot along the different route. FIG.9is a diagram illustrating a processing operation performed by vehicle driving management system100according to the present variation. Vehicle driving management system100according to the present variation has the same configuration as that of embodiment described above. For example, in the present variation as well, as in the embodiment given above, if a failure occurs in base station (n) of area (n), driving data (n+1) is not transmitted from base station (n) to vehicle V. 
In this case, in the present variation as well, because driving data (n+1) is not received by vehicle communicator23, driving controller22of vehicle control device20determines whether a failure has occurred in base station (n). If it is determined that a failure has occurred in base station (n), driving controller22controls driver30to cause vehicle V to drive in an opposite direction along the driving path indicated by driving data (n). At this time, vehicle V drives back to area (n−1). At this time, driving controller22detects, based on the result of monitoring performed by surroundings monitor21and driving data (n−1) of area (n−1), that vehicle V has driven back to area (n−1). Then, driving controller22causes vehicle communicator23to transmit a request signal. The request signal is a signal for requesting the driving data of an area next to area (n), and is transmitted to management device10via base station (n−1) of area (n−1). Management communicator13of management device10according to the present variation receives the request signal from vehicle V via base station (n−1). When the request signal is received, communication controller12generates driving data (k) of area (k) that is located on a different route, instead of driving data (n+1) of area (n+1) that is the next area to area (n). Variable k in the parentheses is an integer of 1 or more. That is, driving data (k) of area (k) that is located on a second route that is a route different from a first route from area (n−1) to the target spot via area (n) and area (n+1) is generated. The second route is a route from area (n−1) to the target spot via area (k), area (k+1), area (k+2), and area (n+1). Then, communication controller12causes management communicator13to transmit driving data (k) to vehicle V via base station (n−1). After that, communication controller12causes management communicator13to transmit driving data (k+1) of area (k+1) that is the next area via base station (k) while vehicle V is driving in area (k). Furthermore, communication controller12causes management communicator13to transmit driving data (k+2) of area (k+2) that is the next area via base station (k+1) while vehicle V is driving in area (k+1). Furthermore, communication controller12causes management communicator13to transmit driving data (n+1) of area (n+1) that is the next area via base station (k+2) while vehicle V is driving in area (k+2). Vehicle communicator23of vehicle control device20receives driving data (k) transmitted from management device10via base station (n−1) while vehicle V is driving back to area (n−1). When driving data (k) is received, driving controller22stores driving data (k) in vehicle storage25. Then, driving controller22controls driver30to cause vehicle V to drive into area (k) from area (n−1), and also controls driver30in accordance with driving data (k) to cause vehicle V to drive through area (k). After that, vehicle communicator23receives driving data (k+1) of area (k+1) that is the next area from management device10via base station (k) while vehicle V is driving in area (k). Accordingly, driving controller22controls driver30in accordance with driving data (k+1) to cause vehicle V to drive into area (k+1). Furthermore, vehicle communicator23receives driving data (k+2) of area (k+2) that is the next area from management device10via base station (k+1) while vehicle V is driving in area (k+1). Accordingly, driving controller22controls driver30in accordance with driving data (k+2) to cause vehicle V to drive into area (k+2). 
Furthermore, vehicle communicator23receives driving data (n+1) of area (n+1) that is the next area from management device10via base station (k+2) while vehicle V is driving in area (k+2). Accordingly, driving controller22controls driver30in accordance with driving data (n+1) to cause vehicle V to drive into area (n+1). In this way, vehicle V parks in a parking space that is the target spot. Communication controller12of management device10may cause management communicator13to also transmit new driving data (n−1) indicating a driving path from area (n−1) to area (k) to vehicle V via base station (n−1) while vehicle V is driving back to area (n−1). With this configuration, driving controller22of vehicle control device20can cause vehicle V to autonomously drive in area (n−1) as appropriate in accordance with new driving data (n−1) so as to drive toward area (k). As described above, in the present variation, in the case where there are two routes: a first route that is a route from the previous area to the target spot via a first area and a second area; and a second route that is a route from the previous area to the target spot via a third area that is different from the first area and the second area, communication controller12of management device10causes management communicator13to transmit the driving data of the third area as the driving data of a different area. On the other hand, in the case where there are two routes: a first route that is a route from the previous area to the target spot via a first area and a second area; and a second route that is a route from the previous area to the target spot via a third area that is different from the first area and the second area, vehicle communicator23of vehicle control device20receives the driving data of the third area as the driving data of a different area. In the example shown inFIG.9, area (n−1), area (n), area (n+1), and area (k) respectively correspond to the previous area, the first area, the second area, and the third area. With this configuration, vehicle V can autonomously drive from the previous area to the target spot along the second route as appropriate. Vehicle control device20according to the present variation performs the same processing operation as the processing operation of the embodiment described above shown inFIG.5, except that the reception failure processing in step S100is performed in a manner different from that of the embodiment described above. FIG.10is a flowchart showing an example of reception failure processing performed by vehicle control device20according to the present variation. First, driving controller22controls driver30to cause vehicle V to temporarily stop (step S101). Then, driving controller22determines whether the cause of not receiving driving data (n+1) is a failure in vehicle V, or in other words, whether a failure has occurred in vehicle communicator23of vehicle V (step S102). Here, if it is determined that a failure has occurred in vehicle communicator23(Yes in step S102), driving controller22executes the processing in step S109in the same manner as in the embodiment described above. On the other hand, if it is determined in step S102that a failure has not occurred in vehicle communicator23of vehicle V (No in step S102), driving controller22determines that a failure has occurred in base station (n). Then, driving controller22transmits a notification indicating that a failure has occurred in base station (n) to management device10(step S103). 
Next, driving controller22controls driver30to cause vehicle V to drive back to area (n−1) (step S121). Then, vehicle communicator23receives driving data (k) of area (k) transmitted from management device10via base station (n−1) (step S122). At this time, driving controller22may cause vehicle communicator23to transmit a request signal as described above to management device10so as to cause vehicle communicator23to start receiving driving data (k). Furthermore, driving controller22controls driver30to cause vehicle V to start autonomously driving in area (k) (step S123). After that, driving controller22replaces variable k with variable n (step S126), and ends the reception failure processing. In the example shown inFIG.10as well, as in the example shown inFIG.6, a notification indicating that a failure has occurred in base station (n) is transmitted before vehicle V drives back to area (n−1), but may be transmitted when vehicle V has driven back to area (n−1). Management device10according to the present variation performs the same processing operation as the processing operation of the embodiment described above shown inFIG.7, except that the transmission failure processing in step S200is performed in a manner different from that of the embodiment described above. FIG.11is a flowchart showing an example of transmission failure processing performed by management device10according to the present variation. First, communication controller12causes management communicator13to transmit driving data (k) to vehicle V that has driven back to area (n−1) via base station (n−1) (step S222). Driving data (k) is the driving data of area (k) that is located on a different route that was described above. Communication controller12may cause management communicator13to start transmitting driving data (k) when a request signal as described above transmitted from vehicle communicator23is received by management communicator13. Next, communication controller12replaces variable k with variable n (step S224), and ends the transmission failure processing. As described above, in the present variation, when vehicle V cannot receive the driving data of the second area, vehicle V drives back to the previous area to receive the driving data of the third area that is located on a different route. With this configuration, vehicle V can autonomously drive to the target spot along the different route as appropriate. Also, in management device10according to the present variation, when driving data (n+1) is not transmitted from base station (n) while vehicle V is in area (n), after that, driving data (n) of area (n) may not be generated. That is, if a failure occurs in base station (n), management device10transmits the driving data of each area except for area (n) to a following vehicle until the failure is fixed. In other words, when the driving data of a second area is not transmitted to vehicle V while driving in a first area, driving data generator11generates, for a following vehicle behind vehicle V, driving data of at least one area that is located on a route from the parking facility entrance to the target spot without passing through the first area. In the example shown inFIG.9, area (n) and area (n+1) respectively correspond to the first area and the second area. 
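The route generation for a following vehicle described above, in which driving data is produced only for areas on a route that bypasses the first area whose base station has failed, can be illustrated with a small search over an assumed adjacency map of areas. The names and the breadth-first search are assumptions of this sketch; they are merely one way such a bypass route could be found.

from collections import deque

def route_avoiding_failed_areas(entrance, target_area, neighbours, failed_areas):
    # Sketch: find a sequence of areas from the parking facility entrance to the
    # target area that never enters an area whose base station has failed.
    # 'neighbours' maps an area to the areas directly reachable from it.
    queue = deque([[entrance]])
    visited = {entrance}
    while queue:
        path = queue.popleft()
        if path[-1] == target_area:
            return path              # driving data generator 11 would then produce data per area on this path
        for nxt in neighbours.get(path[-1], ()):
            if nxt not in visited and nxt not in failed_areas:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                      # no bypass exists; the following vehicle would have to wait

For the layout of FIG. 9, such a search would return a path passing through area (k), area (k+1), and area (k+2) instead of area (n).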
With this configuration, the following vehicle does not drive into the first area, and thus even if a failure occurs in the base station of the first area, the following vehicle can autonomously drive to the target spot via the at least one area as appropriate. (Other Variations) Up to here, the management device, the vehicle control device, and the vehicle driving management system according to one or more aspects of the present disclosure have been described by way of the embodiment and the variation thereof, but the present disclosure is not limited to the embodiment and the variation given above. Other embodiments obtained by making various modifications that can be conceived by a person having ordinary skill in the art to the embodiment and the variation given above without departing from the scope of the present disclosure are also encompassed within the scope of the one or more aspects of the present disclosure. For example, from base station (n−1) of area (n−1), in the embodiment given above, the driving data of the first route such as driving data (n+1) is transmitted. In the variation given above, the driving data of the second route such as driving data (k) is transmitted. Communication controller12of management device10may switch the driving data transmitted from base station (n−1) between the driving data of the first route and the driving data of the second route. For example, communication controller12may be configured to recognize the degree of congestion of vehicles in each area of the parking facility, and switch the driving data transmitted from base station (n−1) according to the degree of congestion. Specifically, when there are more vehicles in the areas included in the second route than in the areas included in the first route, communication controller12switches the driving data transmitted from base station (n−1) to the driving data of the first route. Conversely, when there are more vehicles in the areas included in the first route than in the areas included in the second route, communication controller12switches the driving data transmitted from base station (n−1) to the driving data of the second route. With this configuration, it is possible to cause each vehicle to smoothly arrive at their target spot. Also, the target spot may be changed. For example, the target spot may be changed to a vacant parking space in a different area. Also, in each of the embodiments and variations described above, the structural elements may be configured using dedicated hardware, or may be implemented by executing a software program suitable for the structural elements. The structural elements may be implemented by a program executor such as a CPU (Central Processing Unit) or a processor reading and executing a software program recorded in a recording medium such as a hard disk or a semiconductor memory. Here, the program for implementing the devices and the like of the embodiments and variations described above causes a computer to execute the steps of the flowchart shown in any one ofFIGS.5to8andFIGS.10and11. The following configurations are also encompassed in the scope of the present disclosure. (1) At least one device described above is, specifically, a computer system that includes a microprocessor, a ROM (Read Only Memory), a RAM (Random Access Memory), a hard disk unit, a display unit, a keyboard, a mouse, and the like. A computer program is stored in the RAM or the hard disk unit. 
The functions of the at least one device described above are achieved as a result of the microprocessor operating in accordance with the computer program. Here, the computer program is composed of a combination of a plurality of instruction codes that indicate instructions for the computer to achieve predetermined functions. (2) Some or all of the structural elements that constitute at least one device described above may be composed of a single system LSI (Large Scale Integration). The system LSI is a super multifunctional LSI manufactured by integrating a plurality of structural elements on a single chip, and is specifically a computer system that includes a microprocessor, a ROM, a RAM, and the like. A computer program is stored in the RAM. The functions of the system LSI are achieved as a result of the microprocessor operating in accordance with the computer program. (3) Some or all of the structural elements that constitute at least one device described above may be composed of an IC card or a single module that can be attached and detached to and from the device. The IC card or the module is a computer system that includes a microprocessor, a ROM, a RAM, and the like. The IC card or the module may include the above-described super multifunctional LSI. The functions of the IC card or the module are achieved as a result of the microprocessor operating in accordance with a computer program. The IC card or the module may have tamper resistance. (4) The present disclosure may be any of the methods described above. Also, the present disclosure may be a computer program that implements the method by using a computer, or may be a digital signal generated by the computer program. Also, the present disclosure may be implemented by recording the computer program or the digital signal in a computer readable recording medium such as, for example, a flexible disk, a hard disk, a CD (Compact Disc)-ROM, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray (registered trademark) Disc), a semiconductor memory, or the like. Also, the present disclosure may be a digital signal recorded in the recording medium. Also, the present disclosure may be implemented by transmitting the computer program or the digital signal via a telecommunication line, a wireless or wired communication line, a network as typified by the Internet, data broadcasting, or the like. Also, the present disclosure may be implemented by an independent computer system by transferring the program or the digital signal by recording on a recording medium, or by transferring the program or the digital signal via a network or the like. While an embodiment has been described herein above, it is to be appreciated that various changes in form and detail may be made without departing from the spirit and scope of the present disclosure as presently or hereafter claimed. FURTHER INFORMATION ABOUT TECHNICAL BACKGROUND TO THIS APPLICATION The disclosures of the following patent applications including specification, drawings and claims are incorporated herein by reference in their entirety: Japanese Patent Application No. 2020-215876 filed on Dec. 24, 2020. INDUSTRIAL APPLICABILITY The present disclosure is applicable to, for example, a device and a system that manage the driving of vehicles in an automated valet parking environment, or the like. | 59,115 |
11858531 | DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS Advantages and features of the present invention and methods for achieving them will be apparent with reference to embodiments described below in detail with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various forms. Rather, the embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the technical field to which the present invention pertains. The scope of the present invention is only defined by the claims. The terminology used herein is for the purpose of describing embodiments and is not intended to be limiting. As used herein, the singular forms are intended to include the plural forms as well unless the context clearly indicates otherwise. “Comprises” and/or “comprising” used herein do not preclude the presence or addition of one or more elements other than stated elements. Throughout the specification, like reference numerals refer to like elements, and “and/or” includes any and all combinations of listed elements. Although the terms “first,” “second,” etc. may be used to describe various elements, these elements are not limited by these terms. These terms are only used to distinguish one element from another. Accordingly, a first element discussed below may be termed a second element within the technical scope of the present invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by those skilled in the technical field to which the present invention pertains. Also, terms defined in commonly used dictionaries should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. The term “unit” or “module” used herein means a software or hardware element, such as a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC), and the “unit” or “module” performs certain roles. However, the “unit” or “module” is not limited to software or hardware. The “unit” or “module” may be configured to be present in an addressable storage medium or configured to run on one or more processors. Therefore, as an example, the “unit” or “module” may include elements, such as software elements, object-oriented software elements, class elements, and task elements, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. Functions provided within elements and the “unit” or “module” may be integrated into a smaller number of elements and “units” or “modules” or separated into additional elements and “units” or “modules.” Spatially-relative terms, such as “below,” “beneath,” “lower,” “above,” and “upper,” may be used to easily describe relationships of one element with other elements as shown in the drawings. It will be understood that spatially-relative terms are intended to include different orientations of elements in addition to the orientation shown in the drawings. For example, when an element shown in a drawing is turned over, elements described as “below” or “beneath” another element may be oriented “above” the other element. Therefore, the exemplary term “below” or “beneath” may include both orientations of above and below. 
An element may be oriented in another direction, and thus the spatially-relative terms may be interpreted according to the orientation of the element. In this specification, a computer means any type of hardware device including at least one processor and may be understood as encompassing a software configuration operating on a corresponding hardware device according to an embodiment. For example, a computer may be understood as including, but not limited to, all of a smartphone, a tablet personal computer (PC), a desktop computer, a laptop computer, and a user client and application running on each of the devices. This specification describes a method of controlling driving and stopping of an autonomous vehicle which is applied to an autonomous vehicle that autonomously travels and stops without a driver's control. However, this is merely an example, and the method is not limited thereto. The method can be applied to vehicles which do not use autonomous driving functions or to which autonomous driving functions are not applied and used in fields of assisting a driver with driving and stopping control. Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Although each operation is described herein as being performed by a computer, a subject of each operation is not limited thereto, and at least some of operations may be performed by different devices according to an embodiment. FIG.1is a diagram illustrating a system for controlling stop of an autonomous vehicle using a speed profile according to an exemplary embodiment of the present invention. Referring toFIG.1, the system for controlling stop of an autonomous vehicle using a speed profile according to the exemplary embodiment of the present invention may include a device (or server)100for controlling stop of an autonomous vehicle, a user terminal200, and an external server300. Here, the system for controlling stop of an autonomous vehicle using a speed profile shown inFIG.1is in accordance with the exemplary embodiment, and elements of the system are not limited to the exemplary embodiment shown inFIG.1and may be added, changed, or removed as necessary. In the exemplary embodiment, the device100for controlling stop of an autonomous vehicle may be connected to an autonomous vehicle10or a control module of the autonomous vehicle through a network400and control traveling and stopping of the autonomous vehicle10. For example, the device100for controlling stop of an autonomous vehicle may be provided in the autonomous vehicle10. The device100for controlling stop of an autonomous vehicle may collect surrounding information of the autonomous vehicle10from a sensor module which acquires various pieces of information about the inside and outside of the autonomous vehicle10, determine a control command for the autonomous vehicle10on the basis of the collected surrounding information, and transmit the determined control command to the control module in the autonomous vehicle10such that the control module may control the autonomous vehicle10. 
Here, the surrounding information of the autonomous vehicle 10 may include external information including the topography surrounding the autonomous vehicle 10, road signs, traffic lights, signal information output by the traffic lights, whether there is an object (e.g., another vehicle, a pedestrian, or an obstacle) adjacent to the autonomous vehicle 10, and a location, a posture, and a trajectory of the object, and internal information including a current location, posture, speed, and acceleration of the autonomous vehicle 10. However, the surrounding information of the autonomous vehicle 10 is not limited thereto and may further include various pieces of information which are available for determining the control command for the autonomous vehicle 10. In various embodiments, when controlling stop of the autonomous vehicle 10, the device 100 for controlling stop of an autonomous vehicle may determine one or more candidate routes for controlling stop of the autonomous vehicle 10 on the basis of the surrounding information and determine one or more candidate stop locations for each of the one or more candidate routes. Subsequently, the device 100 for controlling stop of an autonomous vehicle may calculate scores for candidate driving plans for the autonomous vehicle 10 to travel the one or more candidate routes according to a preset speed profile, finalize a driving plan for the autonomous vehicle 10 on the basis of the calculated scores, and determine a control command so that the autonomous vehicle 10 travels (e.g., starts and stops) according to the finalized driving plan. Here, the candidate driving plans may be for the autonomous vehicle 10 to travel a specific candidate route or travel a specific candidate route and then stop at a candidate stop location on the specific candidate route according to a preset speed profile. However, the candidate driving plans are not limited thereto. In the exemplary embodiment, the user terminal 200 (e.g., a terminal of a driver or passenger of the autonomous vehicle 10) may be connected to the device 100 for controlling stop of an autonomous vehicle through the network 400 and may receive various pieces of information which are generated when the device 100 for controlling stop of an autonomous vehicle performs a method of controlling stop of the autonomous vehicle 10 using a speed profile. In various embodiments, the user terminal 200 may include, but is not limited to, at least one of a personal computer (PC), a mobile phone, a smart phone, a tablet PC, a notebook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), an ultra-mobile PC (UMPC), and a vehicle infotainment system, each of which has a display in at least a part thereof. In the exemplary embodiment, the external server 300 may be connected to the device 100 for controlling stop of an autonomous vehicle through the network 400 and may store and manage various pieces of information (e.g., surrounding information), data (e.g., speed profile data and sectional acceleration profile data), and software (e.g., candidate route determination software, candidate stop location determination software, and score calculation software) required for the device 100 for controlling stop of an autonomous vehicle to perform the method of controlling stop of the autonomous vehicle 10 using a speed profile. For example, the external server 300 may be a storage server which is separately provided outside the device 100 for controlling stop of an autonomous vehicle.
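As a purely illustrative sketch of how the surrounding information and the candidate driving plans described above might be represented in software (the field names and types are assumptions of this sketch, not terms of the disclosure):

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SurroundingInfo:
    # external information
    road_signs: List[str] = field(default_factory=list)          # e.g. "crosswalk", "no going straight"
    traffic_signal: Optional[str] = None                          # signal currently output by the traffic lights
    nearby_objects: List[dict] = field(default_factory=list)      # each with location, posture, trajectory
    # internal information
    position: Tuple[float, float] = (0.0, 0.0)
    heading_deg: float = 0.0
    speed_mps: float = 0.0
    acceleration_mps2: float = 0.0

@dataclass
class CandidateDrivingPlan:
    route_id: int                                  # e.g. the first, second, or third candidate route
    stop_location: Optional[Tuple[float, float]]   # None if the plan keeps travelling without stopping
    speed_profile_id: int                          # one of the preset speed profiles
    score: float = 0.0                             # filled in when the plan is evaluated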
In the system for controlling stop of an autonomous vehicle using a speed profile according to the exemplary embodiment of the present invention, various pieces of information, data, and software required for the device 100 for controlling stop of an autonomous vehicle to perform the method of controlling stop of the autonomous vehicle 10 using a speed profile are stored in the external server 300, but the system for controlling stop of an autonomous vehicle is not limited thereto. The device 100 for controlling stop of an autonomous vehicle may include a storage device therein and store various pieces of information, data, and software required for performing the method of controlling stop of the autonomous vehicle 10 using a speed profile in the internal storage device. A hardware configuration of the device 100 for controlling stop of an autonomous vehicle which may perform the method of controlling stop of the autonomous vehicle 10 using a speed profile will be described below with reference to FIG. 2. FIG. 2 is a block diagram illustrating a hardware configuration of a device for controlling stop of an autonomous vehicle using a speed profile according to another exemplary embodiment of the present invention. Referring to FIG. 2, a device 100 for controlling stop of an autonomous vehicle (hereinafter, a “computing device 100”) according to another exemplary embodiment of the present invention may include at least one processor 110, a memory 120 in which a computer program 151 executed by the processor 110 is loaded, a bus 130, a communication interface 140, and a storage 150 which stores the computer program 151. In FIG. 2, only elements related to the exemplary embodiment of the present invention are shown. Accordingly, those skilled in the technical field to which the present invention pertains may understand that general-purpose elements other than those shown in FIG. 2 may be further included. The processor 110 controls overall operations of the elements of the computing device 100. The processor 110 may include a central processing unit (CPU), a microprocessor unit (MPU), a microcontroller unit (MCU), a graphics processing unit (GPU), or any type of processor well known in the technical field of the present invention. Also, the processor 110 may perform computation for at least one application or program for performing a method according to exemplary embodiments of the present invention, and the computing device 100 may include at least one processor. In various exemplary embodiments, the processor 110 may include a random access memory (RAM) (not shown) and a read-only memory (ROM) which temporarily or permanently store signals (or data) processed in the processor 110. Also, the processor 110 may be implemented in the form of a system on chip (SoC) including at least one of a graphics processor, a RAM, and a ROM. The memory 120 stores various pieces of data, commands, and/or information. The memory 120 may load the computer program 151 from the storage 150 to perform a method/operation according to various exemplary embodiments of the present invention. When the computer program 151 is loaded in the memory 120, the processor 110 may perform the method/operation by executing one or more instructions constituting the computer program 151. The memory 120 may be implemented as a volatile memory such as a RAM, but the technical scope of the present invention is not limited thereto. The bus 130 provides a communication function between the elements of the computing device 100.
The bus130may be implemented in various forms such as an address bus, a data bus, and a control bus. The communication interface140supports wired and wireless Internet communication of the computing device100. Also, the communication interface140may support various communication methods in addition to Internet communication. To this end, the communication interface140may include a communication module well known in the technical field of the present invention. In some embodiments, the communication interface140may be omitted. The storage150may non-temporarily store the computer program151. When the process of controlling stop of an autonomous vehicle is performed by the computing device100, the storage150may store various pieces of information (e.g., surrounding information of the autonomous vehicle10, a plurality of preset speed profiles, and a preset sectional acceleration profile) required for performing the method of controlling stop of the autonomous vehicle10using a speed profile. The storage150may include a non-volatile memory, such as a ROM, an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), and a flash memory, a hard disk, a detachable disk, or any form of computer-readable recording medium well known in the technical field of the present invention. The computer program151may include one or more instructions which cause the processor110to perform a method/operation according to various exemplary embodiments of the present invention when loaded in the memory120. In other words, the processor110may perform the method/operation according to various exemplary embodiments of the present invention by executing the one or more instructions. In the exemplary embodiment, the computer program151may include one or more instructions for performing the method of controlling stop of an autonomous vehicle using a speed profile, the method including an operation of obtaining surrounding information of an autonomous vehicle, an operation of determining candidate routes for controlling stop of the autonomous vehicle on the basis of the surrounding information, an operation of calculating scores for candidate driving plans for the autonomous vehicle to travel the determined candidate routes according to a preset speed profile, and an operation of finalizing a driving plan for the autonomous vehicle on the basis of the calculated scores. Operations of a method or algorithm described in connection with an exemplary embodiment of the present invention may be directly implemented as hardware, implemented as a software module executed by hardware, or implemented as a combination thereof. The software module may be stored in a RAM, a ROM, an EPROM, an EEPROM, a flash memory, a hard disk, a detachable disk, a compact disc (CD)-ROM, or any type of computer-readable recording medium well known in the technical field of the present invention. Elements of the present invention may be implemented as a program (or an application) and stored in a medium so as to be executed in combination with a computer which is hardware. Elements of the present invention may be executed through software programming or software elements. Similarly, an exemplary embodiment may be implemented with a programming or scripting language, such as C, C++, Java, or assembler, to include various algorithms implemented with a combination of data structures, processes, routines, or other programming elements. 
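Purely for illustration, and in Python rather than the languages just mentioned, the sequence of operations recited above, namely obtaining surrounding information, determining candidate routes, calculating scores for candidate driving plans, and finalizing a driving plan, could be arranged as follows; every callable and name is a placeholder of this sketch rather than an element of the disclosure.

def control_stop(obtain_surrounding_info, determine_candidate_routes,
                 determine_stop_locations, speed_profiles, score_plan,
                 send_control_command):
    info = obtain_surrounding_info()                         # operation S110
    candidates = []
    for route in determine_candidate_routes(info):           # operation S120
        stops = [None] + list(determine_stop_locations(route, info))  # keep driving, or stop at a candidate location
        for stop in stops:
            for profile in speed_profiles:                   # apply each preset speed profile
                score = score_plan(route, stop, profile, info)         # operation S130
                candidates.append((score, route, stop, profile))
    best = max(candidates, key=lambda c: c[0])               # finalize the driving plan with the highest score
    send_control_command(best)                               # the control module then starts/stops accordingly
    return best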
Functional aspects may be implemented in an algorithm which is executed by one or more processors. A method of controlling, by the computing device100, stop of the autonomous vehicle10using a speed profile will be described below with reference toFIGS.3to14. FIG.3is a flowchart illustrating a method of controlling stop of an autonomous vehicle using a speed profile according to still another exemplary embodiment of the present invention. Referring toFIG.3, in operation S110, the computing device100may obtain surrounding information of the autonomous vehicle10. For example, the computing device100may collect surrounding information of the autonomous vehicle10from the autonomous vehicle10or a sensor module (e.g., a module including various sensors such as a laser sensor, a light detection and ranging (LiDAR) sensor, a camera sensor, and a location sensor) provided in the user terminal200of a user who is in the autonomous vehicle10. Here, the surrounding information may include external information including the topography surrounding the autonomous vehicle10, road signs (e.g., arrows denoting going straight, turning left, turning right, going straight and then turning left, going straight and then turning right, and no going straight, child protection zone, crosswalk), traffic lights, signal information output by the traffic lights, whether there is an object (e.g., another vehicle, a pedestrian, or an obstacle) adjacent to the autonomous vehicle10, and a location, a posture, and a trajectory of the object and internal information including a current location, posture, speed, and acceleration of the autonomous vehicle10. In various embodiments, the computing device100may collect location information of the autonomous vehicle10from the autonomous vehicle10or the user terminal20of the user who is in the autonomous vehicle10and collect surrounding information of the autonomous vehicle10acquired by a sensor module (e.g., a closed circuit television (CCTV) camera) provided in an area in which the autonomous vehicle10is present on the basis of the collected location information of the autonomous vehicle10. However, a method of collecting surrounding information is not limited thereto, and various methods may be used to collect surrounding information of the autonomous vehicle10. In operation S120, the computing device100may determine a candidate route for controlling stop of the autonomous vehicle10on the basis of the surrounding information of the autonomous vehicle10obtained from the autonomous vehicle10or the user terminal200. In various embodiments, the computing device100may determine all cases in which the autonomous vehicle10may move and travel as candidate routes on the basis of the surrounding information of the autonomous vehicle10. The computing device100may determine a plurality of candidate routes for the autonomous vehicle10. For example, as shown inFIG.4A, when the autonomous vehicle10is traveling in a first lane on a road including two lanes, the computing device100may determine a candidate route (first candidate route)31for continuously traveling in the first lane which is a current lane and candidate routes (second candidate route and third candidate route)32and33for changing the current lane to the second lane and traveling in the second lane. 
Also, when there is another vehicle21adjacent to the autonomous vehicle10, the computing device100may subdivide the candidate routes32and33for changing the current lane to the second lane and traveling in the second lane into the candidate route (second candidate route)32for changing the lanes and traveling before the other vehicle21passes by and the candidate route (third candidate route)33for changing the lanes and traveling after the other vehicle21passes by. However, this is only one example of determining a candidate route. A method of determining a candidate route for the autonomous vehicle10is not limited thereto, and various methods may be used to determine a candidate route for the autonomous vehicle10. In various embodiments, the computing device100may determine a candidate stop location on each of one or more candidate routes determined for the autonomous vehicle10to apply a preset speed profile (e.g., a first speed profile, a second speed profile, a third speed profile, a fourth speed profile, a seventh speed profile, and an eighth speed profile) to each of the candidate routes. In various embodiments, as a candidate stop location, the computing device100may determine at least one of a location which is spaced a certain distance from a stop line (e.g., a stop line in front of an intersection or a stop line in front of a crosswalk) on the candidate route determined for the autonomous vehicle10, a location which is spaced a certain distance from a location at which an object (e.g., another vehicle, a pedestrian, or an obstacle) has stopped or is predicted to stop on the determined candidate route, and a location input by a driver or a passenger of the autonomous vehicle10. Here, the computing device100may determine a location which is spaced a certain distance from a stop line as a candidate stop location regardless of a signal output by traffic lights adjacent to the stop line, and the candidate stop location indicating the location spaced the certain distance from the stop line may be determined as a candidate location for all the candidate routes. For example, as shown inFIG.4B, the computing device100may determine a location which is spaced a certain distance from a stop line as a first candidate stop location41on a first candidate route31for continuously traveling in the first lane which is a current lane and determine a location spaced the certain distance from a location at which another vehicle22in front of the autonomous vehicle10has stopped or is predicted to stop on the first candidate route31as a second candidate stop location42. Also, the computing device100may determine a location wanted by the driver or passenger of the autonomous vehicle10on the first candidate route31as a third candidate stop location for the first candidate route31. Although not shown inFIG.4B, the computing device100may determine a location which is spaced a certain distance from the stop line as a first candidate stop location for a second candidate route for changing lanes and traveling before another vehicle passes by, determine a location spaced the certain distance from a location at which an object in front of the autonomous vehicle10has stopped or is predicted to stop on the second candidate route as a second candidate stop location, and determine a location wanted by the driver or passenger on the second candidate route as a third candidate stop location. 
Also, although not shown inFIG.4B, the computing device100may determine a location which is spaced a certain distance from the stop line as a first candidate stop location for a third candidate route for changing lanes and traveling after another vehicle passes by, determine a location spaced the certain distance from a location at which the other vehicle (the vehicle having passed by) in front of the autonomous vehicle10has stopped or is predicted to stop on the third candidate route as a second candidate stop location, and determine a location wanted by the driver or passenger on the third candidate route as a third candidate stop location. Here, the certain distance may be a value which is set in advance by the driver or passenger of the autonomous vehicle10or an operator of the computing device100or automatically calculated such that an event, such as collision, does not occur in the autonomous vehicle10. However, the certain distance is not limited thereto, and various methods may be used to set the certain distance. In operation S130, the computing device100may calculate scores for a plurality of candidate driving plans for the autonomous vehicle10to travel a plurality of candidate routes according to a preset speed profile. Here, a process in which the autonomous vehicle10travels the plurality of candidate routes according to the preset speed profile may mean not only a process of continuously traveling the plurality of candidate routes according to the preset speed profile but also a process of traveling the plurality of candidate routes and then stopping at the candidate stop location determined on each of the plurality of candidate routes according to the preset speed profile. However, the process is not limited thereto. In various embodiments, the computing device100may apply the preset speed profile (e.g., first to eighth speed profiles ofFIGS.6to13) to the autonomous vehicle10and calculate scores for the candidate driving plans (e.g., a case of traveling the candidate routes according to the preset speed profile or a case in which the autonomous vehicle10travels the candidate routes and then stops at the preset candidate stop locations) for the autonomous vehicle10to travel the candidate routes determined in operation S120according to the preset speed profile. In various embodiments, the computing device100may apply the preset speed profile to the autonomous vehicle10. In other words, the computing device100may calculate a score for an operation (e.g., traveling or stopping) of the autonomous vehicle10according to the preset speed profile by applying the preset speed profile to each of the candidate stop locations according to whether the operation (e.g., traveling or stopping) of the autonomous vehicle10satisfies a plurality of preset conditions. Here, the plurality of preset conditions may include, but not limited to, whether a collision is surely prevented, whether sudden deceleration or acceleration is prevented, whether a specific line is not crossed in consideration of traffic lights, yielding to another vehicle, etc., whether a sufficient margin to a definitely expected collision is ensured, whether a potentially anticipated risk is prevented, whether high speed is prevented for comfortable riding and safety in curve driving, whether the autonomous vehicle10complies with the speed limit, and whether the autonomous vehicle10maintains the speed limit of a road on which the autonomous vehicle10is traveling. 
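Before turning to how scores are assigned to these conditions, the three kinds of candidate stop locations introduced above can be summarized in a short sketch in which positions are expressed as distances (in metres) along a single candidate route and margin_m stands in for the "certain distance"; the names and the default value are assumptions of the sketch.

def candidate_stop_locations(stop_line_s, object_stop_positions, user_requested_s,
                             margin_m=3.0):
    candidates = []
    if stop_line_s is not None:
        candidates.append(stop_line_s - margin_m)      # short of the stop line, regardless of the signal
    for s in object_stop_positions:                    # where a preceding object stopped or is predicted to stop
        candidates.append(s - margin_m)
    if user_requested_s is not None:
        candidates.append(user_requested_s)            # location requested by the driver or a passenger
    return candidates

For the situation of FIG. 4B, such a sketch would yield one candidate behind the stop line, one behind the preceding vehicle 22, and any location requested by an occupant.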
Also, a certain score may be given to each of the plurality of preset conditions, and the given score may be determined according to the priority of each condition. For example, the plurality of preset conditions may have decreasing priorities in the order listed above, and the condition with a high priority may have a higher score than the condition with a low priority (e.g., 20 points may be given to the condition of whether a collision is surely prevented, and 10 points may be given to the condition of whether the autonomous vehicle10complies with the speed limit). Further, different scores may be given to the conditions according to how much the conditions are satisfied. For example, when 20 points are given to the condition of whether a collision is surely prevented, the degree of satisfaction is subdivided into five levels according to how reliably the collision is prevented (e.g., a first level indicating that a collision will occur, a second level indicating a high probability of collision, a third level indicating a medium probability of collision, a fourth level indicating a low probability of collision, and a fifth level of no probability of collision), and zero, five, ten, fifteen, and twenty points may be given to the subdivided levels. In various embodiments, the computing device100may determine whether the candidate stop locations preset for the autonomous vehicle10correspond to preset no-stopping zones (e.g., locations set as a no-parking or stopping zone such as on a crosswalk, at an intersection, and close to a fire hydrant) and correct the scores calculated for the determined candidate stop locations according to the results of determining whether the candidate stop locations correspond to the preset no-stopping zones. For example, when a first candidate stop location corresponds to the no-stopping zones and a second candidate stop location does not correspond to the no-stopping zones, the computing device100may set scores such that the second candidate stop location is given a higher score than the first candidate stop location. Here, the computing device100may consider a condition with a higher priority (e.g., the condition for collision prevention) rather than consider whether the candidate stop locations preset for the autonomous vehicle10correspond to the preset no-stopping zones and may not consider whether the candidate stop locations preset for the autonomous vehicle10correspond to the preset no-stopping zones or may set a relatively low score for whether the candidate stop locations preset for the autonomous vehicle10correspond to the preset no-stopping zones. However, a method of giving a score to a candidate stop location is not limited thereto. Here, the preset no-stopping zone may be stored in map data (e.g., in the form of polygons) which is generated in advance for autonomous driving of the autonomous vehicle10, and the computing device100may compare coordinate values of a candidate stop location of the autonomous vehicle10with coordinate values of the no-stopping zone in the map data and determine whether the autonomous vehicle10has stopped in the no-stopping zone when the autonomous vehicle10stops at the candidate stop location. The above-described method of giving scores to a plurality of conditions and calculating a score by giving the scores when the conditions are satisfied is merely an example for describing a method of calculating a score. 
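In the same illustrative spirit, one way such a weighted, priority-ordered scoring could look is sketched below; the condition names, the weights, and the penalty value are assumptions of the sketch. Each condition contributes its weight scaled by a degree of satisfaction between 0 and 1, and a stop location inside a no-stopping polygon is corrected downwards.

def score_candidate_plan(satisfaction, weights, in_no_stopping_zone=False, penalty=5.0):
    # 'satisfaction' maps a condition name to a degree in [0, 1];
    # 'weights' gives the maximum points per condition (higher priority, higher weight).
    score = sum(weights[name] * satisfaction.get(name, 0.0) for name in weights)
    if in_no_stopping_zone:          # candidate stop location falls inside a no-stopping polygon
        score -= penalty             # corrected so that other stop locations rank higher
    return score

# illustrative weights, decreasing with priority
weights = {
    "collision_surely_prevented": 20,
    "no_sudden_deceleration_or_acceleration": 18,
    "lines_signals_and_yielding_respected": 16,
    "sufficient_margin_to_expected_collision": 14,
    "potential_risk_prevented": 12,
    "comfortable_and_safe_curve_speed": 11,
    "speed_limit_complied_with": 10,
}
example_score = score_candidate_plan(
    {"collision_surely_prevented": 1.0, "speed_limit_complied_with": 0.5}, weights)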
The method of calculating a score is not limited thereto, and various methods may be used to calculate a score for an operation of the autonomous vehicle10when the preset speed profile is applied. In various embodiments, the computing device100may determine priorities for the candidate routes of the autonomous vehicle10according to a driving tendency or a driving style of the driver or passenger of the autonomous vehicle10or a request input in advance by the driver or passenger. In general, all drivers have different driving styles, and thus drivers may prefer different methods even in the process of driving and stopping the vehicle. For example, some people prefer to minimize lane changes even when it takes more time to go to a preset destination, whereas other people may prefer to reach the destination in the shortest time regardless of route. Even with regard to a method of stopping a vehicle at a specific location, some drivers prefer to stop a vehicle at a specific location by reducing a speed with a constant acceleration, whereas other drivers may prefer to rapidly reduce a speed of a vehicle to a certain speed, drive the vehicle to a specific location at a low speed, and then stop the vehicle. To this end, the computing device100may receive such tendency information from the driver or passenger of the autonomous vehicle10in advance, set priorities for the candidate routes according to the received tendency information, and give weights to the candidate routes according to the priorities so that stop of the autonomous vehicle10may be possibly controlled in a way preferred by the driver or passenger. Also, the computing device100may receive such tendency information from the driver or passenger of the autonomous vehicle10in advance, set priorities for speed profiles according to the received tendency information, and give weights to the speed profiles according to the priorities so that stop of the autonomous vehicle10may be possibly controlled in a way preferred by the driver or passenger. However, a method of controlling stop of the autonomous vehicle10is not limited thereto. In various embodiments, the computing device100may calculate scores for candidate driving plans for driving along the candidate routes determined for the autonomous vehicle10using the processor110included therein. When there are a plurality of candidate driving plans whose scores will be calculated because there are a plurality of determined candidate routes or a plurality of candidate stop locations are determined on a determined candidate route, a plurality of different processors may be used to calculate scores for the plurality of candidate driving plans. For example, when candidate routes determined for the autonomous vehicle10are a first candidate route, a second candidate route, and a third candidate route, three processors, a first processor, a second processor, and a third processor, may be used to calculate scores for candidate driving plans for driving along the candidate routes. 
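As one possible realization of the multi-processor arrangement described here, the score calculations could be distributed over worker processes. The sketch below uses Python's concurrent.futures purely for illustration; the plan structure and the placeholder scoring expression are hypothetical and stand in for a full scoring function such as the one sketched earlier.

from concurrent.futures import ProcessPoolExecutor

def score_plan(plan):
    # Placeholder: evaluate one candidate driving plan (route, stop location, speed profile)
    # against the preset conditions and return its score.
    return plan["route_quality"] - plan["risk"]

def score_all_plans(candidate_plans, max_workers=3):
    # Each worker process scores a share of the candidate driving plans so that the
    # overall scoring time can be reduced when many candidates must be evaluated.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(score_plan, candidate_plans))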
Also, when a first candidate stop location, a second candidate stop location, and a third candidate stop location are determined on a first candidate route, the computing device 100 may calculate a score for each of a first candidate driving plan for traveling the first candidate route and then stopping at the first candidate stop location, a second candidate driving plan for traveling the first candidate route and then stopping at the second candidate stop location, and a third candidate driving plan for traveling the first candidate route and then stopping at the third candidate stop location using the three processors, the first processor, the second processor, and the third processor. In various embodiments, the computing device 100 may set a plurality of processors to calculate scores for a preset number of candidate driving plans regardless of candidate routes and candidate stop locations. In various embodiments, the computing device 100 may calculate scores for candidate driving plans (e.g., a candidate driving plan for traveling a plurality of candidate routes without stopping or a candidate driving plan for traveling to a candidate stop location determined on one of the plurality of candidate routes and then stopping) using one processor. When it is determined that the amount of processing exceeds a reference, or that the time required for completing a score calculation operation exceeds a reference due to a large number of targets (candidate driving plans) for which scores must be calculated, two or more processors may be operated to calculate scores for the plurality of candidate driving plans so that the time required for completing the process can be reduced. The preset speed profiles which may be applied to various embodiments will be described below with reference to FIGS. 5 to 13. FIG. 5 is a set of graphs showing a sectional acceleration profile which may be applied to various embodiments. Referring to FIG. 5, in various embodiments, the computing device 100 may set a sectional acceleration profile according to a sectional linear acceleration method which may be applied to a plurality of speed profiles (e.g., FIGS. 6 to 13). Here, the sectional acceleration profile may be set to increase or reduce a speed of the autonomous vehicle 10 from a current speed v_0 to a target speed v_target when a current acceleration is a_0, such that the acceleration is changed from a_0 to zero by the time the speed becomes the target speed v_target (e.g., the acceleration is set in a linear form and becomes zero at the destination, as shown in FIG. 5). Here, a maximum acceleration a_max, a minimum acceleration a_min, a maximum positive (+) jerk j_max (jerk being a vector designating the rate of change of acceleration over time), and a maximum negative (−) jerk j_min may be separately set in advance. However, the maximum acceleration a_max, the minimum acceleration a_min, the maximum positive (+) jerk j_max, and the maximum negative (−) jerk j_min are not limited thereto. In various embodiments, the acceleration of the autonomous vehicle 10 set according to the sectional acceleration profile may be set in consideration of all of the maximum acceleration a_max, the minimum acceleration a_min, the maximum positive (+) jerk j_max, and the maximum negative (−) jerk j_min. As shown in FIG. 5, the acceleration of the autonomous vehicle 10 is increased to and maintained at the maximum acceleration a_max for a certain time and then reduced to zero, but the change in acceleration is not limited thereto.
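A minimal sketch of such a jerk-limited ("sectional linear") acceleration profile follows; it simply simulates the speed-up case forward in time, ramping the acceleration toward a_max with jerk j_max and ramping it back to zero with jerk j_min so that the acceleration reaches zero as the speed reaches v_target. The code is illustrative only, assumes a non-negative initial acceleration, and ignores the slow-down case and other refinements of the profile shown in FIG. 5.

def sectional_accel_profile(v0, a0, v_target, a_max, j_max, j_min, dt=0.01):
    """Return a list of (time, speed, acceleration) samples for the speed-up case."""
    t, v, a = 0.0, v0, max(a0, 0.0)
    samples = [(t, v, a)]
    while v < v_target:
        # speed still to be gained if the acceleration were ramped down to zero right now
        dv_if_ramp_down = (a * a) / (2.0 * abs(j_min)) if a > 0.0 else 0.0
        if v + dv_if_ramp_down >= v_target:
            a = max(0.0, a - abs(j_min) * dt)   # ramp the acceleration back toward zero
        else:
            a = min(a_max, a + j_max * dt)      # ramp the acceleration up toward a_max
        v = min(v_target, v + a * dt)
        t += dt
        samples.append((t, v, a))
    return samples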
In some cases, the acceleration of the autonomous vehicle 10 may be increased to the maximum acceleration a_max and then immediately reduced to zero without being maintained for the certain time (e.g., in a triangle shape having a maximum value of a_max) or may be increased to a value smaller than a_max and then immediately reduced to zero without being maintained for the certain time (e.g., in a triangle shape having a maximum value smaller than a_max). Also, the acceleration of the autonomous vehicle 10 may be reduced to the minimum acceleration a_min and then immediately increased to zero without being maintained for the certain time (e.g., in an inverted triangle shape having a minimum value of a_min) or may be reduced to a value that does not reach a_min and then immediately increased to zero without being maintained for the certain time (e.g., in an inverted triangle shape whose minimum value does not reach a_min). However, the acceleration profiles are not limited thereto. In various embodiments, the maximum acceleration a_max and the minimum acceleration a_min may be determined according to the maximum positive (+) jerk j_max (jerk being a vector designating the rate of change of acceleration over time) and the maximum negative (−) jerk j_min, which are set in advance. However, the determinants of the maximum acceleration a_max and the minimum acceleration a_min are not limited thereto. The range of acceleration determined by the maximum acceleration a_max and the minimum acceleration a_min may be set and changed without limitation. Preferably, the range of acceleration is set within a certain range (e.g., 1.5 m/s² to 2 m/s²) for comfortable riding of the driver or passenger of the autonomous vehicle 10. In various embodiments, the computing device 100 may separately set the values of a_max, a_min, j_max, and j_min for each of the plurality of speed profiles (e.g., FIGS. 6 to 13) independently of the other speed profiles. For example, to separately apply a sectional acceleration profile to the plurality of speed profiles, the computing device 100 may set the values of a_max, a_min, j_max, and j_min in the sectional acceleration profile applied to each of the plurality of speed profiles independently of the other speed profiles. In this way, sectional acceleration profiles having the same values of a_max, a_min, j_max, and j_min may be applied to the plurality of speed profiles, or sectional acceleration profiles having different values of a_max, a_min, j_max, and j_min may be applied to the plurality of speed profiles. FIG. 6 is a graph showing a first speed profile (target location trapezoidal stop) which may be applied to various embodiments. Referring to FIG. 6, in various embodiments, the computing device 100 may calculate a score for a candidate driving plan for the autonomous vehicle 10 to travel a candidate route and then stop at a determined candidate stop location according to the first speed profile by applying the first speed profile to the autonomous vehicle 10.
When a current speed of the autonomous vehicle 10 is v_0, a current acceleration is a_0, and a distance to the determined candidate stop location (target stop location) is s_target (i.e., when the values given for the autonomous vehicle 10 are v_0, a_0, and s_target), the first speed profile may increase or reduce the speed of the autonomous vehicle 10 from v_0 to a preset target speed v_target (an independent variable) using the current acceleration a_0 and a preset sectional acceleration profile (e.g., FIG. 5), maintain the speed of the autonomous vehicle 10 at v_target for a certain period (Buffer) from the time point at which the speed of the autonomous vehicle 10 becomes v_target, and, after the certain period, reduce the speed of the autonomous vehicle 10 from v_target to zero and stop the autonomous vehicle 10 at the determined candidate stop location using the preset sectional acceleration profile. The certain period (Buffer) in the first speed profile may be automatically set so that the distance traveled by the autonomous vehicle 10 according to the first speed profile becomes s_target, which is the distance to the determined candidate stop location, but the certain period is not limited thereto. In various embodiments, when a_max, a_min, j_max, and j_min are not set to sufficiently large values according to the preset sectional acceleration profile, it may not be possible to set the certain period such that the distance traveled by the autonomous vehicle 10 according to the first speed profile becomes s_target, which is the distance to the determined candidate stop location. In this case, the computing device 100 may exclude the case in which the autonomous vehicle 10 travels and then stops at the determined candidate stop location according to the first speed profile from the targets whose scores will be calculated. In various embodiments, the computing device 100 may set the preset target speed v_target to a plurality of different values and calculate scores for the different cases in which a plurality of different target speeds v_target_1, v_target_2, and v_target_3 (e.g., 30 km/h, 40 km/h, and 50 km/h) are set. FIG. 7 is a graph showing a second speed profile (target location trapezoidal stop with tail) which may be applied to various embodiments. Referring to FIG. 7, in various embodiments, the computing device 100 may calculate a score for a candidate driving plan for the autonomous vehicle 10 to travel a candidate route and then stop at a determined candidate stop location according to the second speed profile by applying the second speed profile to the autonomous vehicle 10.
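Under the simplifying assumption of constant-acceleration ramps (rather than the full jerk-limited sectional profile of FIG. 5), the cruise period (Buffer) of the first speed profile described above can be derived from the requirement that the total distance equal s_target. The rough sketch below is illustrative only; the default acceleration values are assumptions, and a negative cruise distance indicates that the profile cannot cover s_target, in which case the candidate would be excluded from scoring.

def trapezoidal_stop_buffer(v0, v_target, s_target, a_accel=1.5, a_decel=1.5):
    """Return the cruise duration at v_target in seconds, or None if the profile is infeasible."""
    # distance covered while changing speed from v0 to v_target at constant acceleration
    d_ramp_in = abs(v_target ** 2 - v0 ** 2) / (2.0 * a_accel)
    # distance covered while decelerating from v_target to a full stop
    d_ramp_out = v_target ** 2 / (2.0 * a_decel)
    d_cruise = s_target - d_ramp_in - d_ramp_out
    if d_cruise < 0.0 or v_target <= 0.0:
        return None              # cannot stop exactly at s_target; exclude from scoring
    return d_cruise / v_target   # Buffer duration spent cruising at v_target

For example, with v_0 = 8.3 m/s, v_target = 11.1 m/s (about 40 km/h), and s_target = 120 m, the returned Buffer is simply the time needed to cruise the distance left over after the two ramps.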
When a current speed of the autonomous vehicle 10 is v_0, a current acceleration is a_0, and a distance to the determined candidate stop location (target stop location) is s_target (i.e., when the values given for the autonomous vehicle 10 are v_0, a_0, and s_target), the second speed profile may increase or reduce the speed of the autonomous vehicle 10 from v_0 to a preset target speed v_target (an independent variable) using the current acceleration a_0 and a preset sectional acceleration profile (e.g., FIG. 5), maintain the speed of the autonomous vehicle 10 at v_target for a first period (Buffer) from the time point at which the speed of the autonomous vehicle 10 becomes v_target, reduce the speed of the autonomous vehicle 10, after the first period, from v_target to a last target low speed immediately before stopping, v_tail (an independent variable), using the preset sectional acceleration profile, maintain the speed of the autonomous vehicle 10 at v_tail for a second period from the time point at which the speed of the autonomous vehicle 10 becomes v_tail, and, after the second period, reduce the speed of the autonomous vehicle 10 from v_tail to zero and stop the autonomous vehicle 10 at the determined candidate stop location using the preset sectional acceleration profile. The first period in the second speed profile may be set such that the distance traveled by the autonomous vehicle 10 according to the second speed profile until its speed becomes v_tail equals the difference between s_target and s_tail (an independent variable), and the second period may be set such that the distance traveled by the autonomous vehicle 10 from the time point at which its speed becomes v_tail equals s_tail. However, the first period and the second period are not limited thereto. In various embodiments, when a_max, a_min, j_max, and j_min are not set to sufficiently large values according to the preset sectional acceleration profile, it may not be possible to set the first period such that the distance traveled by the autonomous vehicle 10 according to the second speed profile becomes the difference between s_target and s_tail, or to set the second period such that the distance traveled by the autonomous vehicle 10 according to the second speed profile becomes s_tail. In this case, the computing device 100 may exclude the case in which the autonomous vehicle 10 travels and then stops at the determined candidate stop location according to the second speed profile from the targets whose scores will be calculated. In various embodiments, the computing device 100 may set the preset target speed v_target to a plurality of different values and calculate scores for the different cases in which a plurality of target speeds v_target_1, v_target_2, and v_target_3 (e.g., 30 km/h, 40 km/h, and 50 km/h) are set. Also, the computing device 100 may set the last target low speed immediately before stopping, v_tail, to a plurality of different values and calculate scores for the different cases in which a plurality of target low speeds v_tail_1, v_tail_2, and v_tail_3 (e.g., 3 km/h, 5 km/h, and 10 km/h) are set. Further, the computing device 100 may set the distance s_tail traveled by the autonomous vehicle 10 during the second period according to the second speed profile to a plurality of different values and calculate scores for the different cases in which a plurality of distances traveled during the second period, s_tail_1, s_tail_2, and s_tail_3 (e.g., 3 m, 5 m, and 10 m), are set. FIG. 8 is a graph showing a third speed profile (target location direct stop) which may be applied to various embodiments.
Referring to FIG. 8, in various embodiments, the computing device 100 may calculate a score for a candidate driving plan for the autonomous vehicle 10 to travel a candidate route and then stop at a determined candidate stop location according to the third speed profile by applying the third speed profile to the autonomous vehicle 10. When a current speed of the autonomous vehicle 10 is v_0, a current acceleration is a_0, and a distance to the determined candidate stop location (target stop location) is s_target (i.e., when the values given for the autonomous vehicle 10 are v_0, a_0, and s_target), the third speed profile may reduce the speed of the autonomous vehicle 10 from v_0 to zero using the current acceleration a_0, a target acceleration a_decel of the autonomous vehicle 10, and a preset sectional acceleration profile (e.g., FIG. 5) and stop the autonomous vehicle 10 at the determined candidate stop location. a_decel in the third speed profile may be a negative value such that the speed of the autonomous vehicle 10 is reduced, and may be variably set such that the distance traveled by the autonomous vehicle 10 becomes s_target. However, a_decel is not limited thereto. In various embodiments, when a_max, a_min, j_max, and j_min are not set to sufficiently large values according to the preset sectional acceleration profile, it may not be possible to variably set a_decel such that the distance traveled by the autonomous vehicle 10 according to the third speed profile becomes s_target. In this case, the computing device 100 may exclude the case in which the autonomous vehicle 10 travels and then stops at the determined candidate stop location according to the third speed profile from the targets whose scores will be calculated. In various embodiments, the computing device 100 may determine a_decel so that the distance traveled by the autonomous vehicle 10 according to the third speed profile becomes s_target, and in some cases (e.g., to satisfy a condition with a high priority such as whether a collision is prevented), a_decel may be determined without considering the minimum acceleration a_min. However, a method of determining a_decel is not limited thereto. FIG. 9 is a graph showing a fourth speed profile (target location direct stop with tail) which may be applied to various embodiments. Referring to FIG. 9, in various embodiments, the computing device 100 may calculate a score for a candidate driving plan for the autonomous vehicle 10 to travel a candidate route and then stop at a determined candidate stop location according to the fourth speed profile by applying the fourth speed profile to the autonomous vehicle 10.
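For the third speed profile described above, a constant-deceleration simplification (ignoring the jerk-limited ramps of FIG. 5) gives a_decel = −v_0² / (2 · s_target). The sketch below is illustrative only; the limit value and the flag for overriding it are assumptions, the latter corresponding to the case in which a_decel is determined without considering a_min to satisfy a high-priority condition such as collision prevention.

def direct_stop_decel(v0, s_target, a_min=-2.0, allow_exceeding_limit=False):
    """Return the deceleration (negative) that stops the vehicle exactly after s_target meters."""
    if s_target <= 0.0:
        return None
    a_decel = -(v0 ** 2) / (2.0 * s_target)
    if a_decel < a_min and not allow_exceeding_limit:
        return None   # required braking is harsher than a_min allows; exclude this candidate
    return a_decel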
When a current speed of the autonomous vehicle 10 is v_0, a current acceleration is a_0, and a distance to the determined candidate stop location (target stop location) is s_target (i.e., when the values given for the autonomous vehicle 10 are v_0, a_0, and s_target), the fourth speed profile may reduce the speed of the autonomous vehicle 10 from v_0 to a last target low speed immediately before stopping, v_tail (an independent variable), using the current acceleration a_0, a target acceleration a_decel (an independent variable) of the autonomous vehicle 10, and a preset sectional acceleration profile (e.g., FIG. 5), maintain the speed of the autonomous vehicle 10 at v_tail for a certain period (Buffer) from the time point at which the speed of the autonomous vehicle 10 becomes v_tail, and, after the certain period, reduce the speed of the autonomous vehicle 10 from v_tail to zero and stop the autonomous vehicle 10 at the determined candidate stop location using the preset sectional acceleration profile. The certain period may be set such that the distance traveled by the autonomous vehicle 10 from the time point at which its speed becomes v_tail equals s_tail, which is the difference between s_target and the distance s_travel,ramp traveled by the autonomous vehicle 10 until its speed reaches v_tail. However, the certain period is not limited thereto. In various embodiments, when a_max, a_min, j_max, and j_min are not set to sufficiently large values according to the preset sectional acceleration profile, it may not be possible to set a_decel such that the distance traveled by the autonomous vehicle 10 until its speed reaches v_tail becomes s_travel,ramp, or to set the certain period such that the distance traveled by the autonomous vehicle 10 from the time point at which its speed becomes v_tail becomes s_tail. In this case, the computing device 100 may exclude the case in which the autonomous vehicle 10 travels and then stops at the determined candidate stop location according to the fourth speed profile from the targets whose scores will be calculated. In various embodiments, the computing device 100 may set the last target low speed immediately before stopping, v_tail, to a plurality of different values and calculate scores for the different cases in which a plurality of target low speeds v_tail_1, v_tail_2, and v_tail_3 (e.g., 3 km/h, 5 km/h, and 10 km/h) are set. Also, the computing device 100 may set a_decel to a plurality of different values such that the distance traveled by the autonomous vehicle 10 until its speed reaches v_tail becomes s_travel,ramp and may calculate scores for the different cases in which a_decel is set to the plurality of acceleration values. FIG. 10 is a graph showing a fifth speed profile (target speed achievement) which may be applied to various embodiments. Referring to FIG. 10, in various embodiments, the computing device 100 may calculate a score for a candidate driving plan for the autonomous vehicle 10 to travel a candidate route according to the fifth speed profile by applying the fifth speed profile to the autonomous vehicle 10.
When a current speed of the autonomous vehicle 10 is v_0 and a current acceleration is a_0 (i.e., when the values given for the autonomous vehicle 10 are v_0 and a_0), the fifth speed profile may increase or reduce the speed of the autonomous vehicle 10 from v_0 to a target speed v_target (an independent variable) using the current acceleration a_0 and a preset sectional acceleration profile (e.g., FIG. 5) and cause the autonomous vehicle 10 to travel while maintaining its speed at v_target. However, the fifth speed profile is not limited thereto. In various embodiments, the computing device 100 may set the preset target speed v_target to a plurality of different values and calculate scores for the different cases in which a plurality of different target speeds v_target_1, v_target_2, and v_target_3 (e.g., 30 km/h, 40 km/h, and 50 km/h) are set. FIG. 11 is a graph showing a sixth speed profile (target location target speed achievement) which may be applied to various embodiments. Referring to FIG. 11, in various embodiments, the computing device 100 may calculate a score for a candidate driving plan for the autonomous vehicle 10 to travel a candidate route according to the sixth speed profile by applying the sixth speed profile to the autonomous vehicle 10. When a current speed of the autonomous vehicle 10 is v_0, a current acceleration is a_0, and a distance to a location at which a target speed v_target (an independent variable) of the autonomous vehicle 10 is to be achieved is s_target (i.e., when the values given for the autonomous vehicle 10 are v_0, a_0, and s_target), the sixth speed profile may increase or reduce the speed of the autonomous vehicle 10 from v_0 to v_target using the current acceleration a_0, a target acceleration a_adjust of the autonomous vehicle 10, and a preset sectional acceleration profile (e.g., FIG. 5) and cause the autonomous vehicle 10 to travel while maintaining its speed at v_target. a_adjust in the sixth speed profile may be set such that the distance traveled by the autonomous vehicle 10 until its speed reaches v_target becomes s_target. However, a_adjust is not limited thereto. In various embodiments, when a_max, a_min, j_max, and j_min are not set to sufficiently large values according to the preset sectional acceleration profile, it may not be possible to set a_adjust such that the distance traveled by the autonomous vehicle 10 until its speed reaches v_target becomes s_target. In this case, the computing device 100 may exclude the case in which the autonomous vehicle 10 travels according to the sixth speed profile from the targets whose scores will be calculated. In various embodiments, the computing device 100 may set the preset target speed v_target to a plurality of different values and calculate scores for the different cases in which a plurality of different target speeds v_target_1, v_target_2, and v_target_3 (e.g., 30 km/h, 40 km/h, and 50 km/h) are set. FIG. 12 is a graph showing a seventh speed profile (smooth stop) which may be applied to various embodiments. Referring to FIG. 12, in various embodiments, the computing device 100 may calculate a score for a candidate driving plan for the autonomous vehicle 10 to travel a candidate route and stop according to the seventh speed profile by applying the seventh speed profile to the autonomous vehicle 10.
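For the sixth speed profile described above, a constant-acceleration simplification gives a_adjust = (v_target² − v_0²) / (2 · s_target). The sketch below is illustrative only; the default limit values are assumptions, and returning None corresponds to excluding the candidate from scoring when the target speed cannot be reached at the target location within the configured limits.

def target_speed_at_location_accel(v0, v_target, s_target, a_max=2.0, a_min=-2.0):
    """Return the constant acceleration that makes the speed reach v_target after s_target meters."""
    if s_target <= 0.0:
        return None
    a_adjust = (v_target ** 2 - v0 ** 2) / (2.0 * s_target)
    if not (a_min <= a_adjust <= a_max):
        return None   # not achievable within the configured limits; exclude from scoring
    return a_adjust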
When a current speed of the autonomous vehicle 10 is v_0 and a current acceleration is a_0 (i.e., when the values given for the autonomous vehicle 10 are v_0 and a_0), the seventh speed profile may reduce the speed of the autonomous vehicle 10 from v_0 to zero and stop the autonomous vehicle 10 using the current acceleration a_0, a target acceleration a_target (an independent variable) of the autonomous vehicle 10, and a preset sectional acceleration profile (e.g., FIG. 5). Unlike the third speed profile, the seventh speed profile may simply stop the autonomous vehicle 10 by reducing the speed using the target acceleration a_target having a negative (−) value, regardless of a determined candidate stop location. However, the seventh speed profile is not limited thereto. In various embodiments, the computing device 100 may set the target acceleration a_target to a plurality of different values and calculate scores for the different cases in which a plurality of different target accelerations a_target_1, a_target_2, and a_target_3 (e.g., −1 m/s², −2 m/s², and −3 m/s²) are set. FIG. 13 is a graph showing an eighth speed profile (emergency stop) which may be applied to various embodiments. Referring to FIG. 13, in various embodiments, the computing device 100 may calculate a score for a candidate driving plan for the autonomous vehicle 10 to travel a candidate route and stop according to the eighth speed profile by applying the eighth speed profile to the autonomous vehicle 10. When a current speed of the autonomous vehicle 10 is v_0 (i.e., when the value given for the autonomous vehicle 10 is v_0), the eighth speed profile may reduce the speed of the autonomous vehicle 10 from v_0 to zero and stop the autonomous vehicle 10 using a preset acceleration a_emergency. a_emergency in the eighth speed profile may be a value preset without considering the current acceleration a_0 of the autonomous vehicle 10 and a preset sectional acceleration profile (e.g., FIG. 5). It has been described above that the computing device 100 calculates a candidate stop location and a score for a driving plan to the candidate stop location by applying the first speed profile to the eighth speed profile (e.g., FIGS. 6 to 13). However, the speed profiles are not limited thereto, and various speed profiles may be used in addition to the above-described speed profiles in consideration of a driving tendency of the driver and the like. Referring back to FIG. 3, in operation S140, the computing device 100 may finalize a driving plan for the autonomous vehicle 10 on the basis of the scores calculated for the plurality of candidate driving plans. The scores calculated for the plurality of candidate driving plans may not only be scores for evaluating how appropriate it is for the autonomous vehicle 10 to travel each candidate route without stopping or how appropriate it is for the autonomous vehicle 10 to travel each candidate route and then stop at a predetermined candidate stop location, but also scores for evaluating whether the predetermined candidate stop location is appropriate for stopping. Therefore, the computing device 100 may finalize a stop location for the autonomous vehicle 10 on the basis of the calculated scores and finalize a driving plan including a driving method (e.g., a speed profile), which represents how to travel and stop at the finalized location, at the same time.
For example, when a candidate route and a speed profile with the calculated highest scores are the first candidate route and the second speed profile as shown inFIG.14, the computing device100may finalize a driving plan for the autonomous vehicle10to travel to a candidate stop location on the first candidate route and stop at the candidate stop location according to the second speed profile. In various embodiments, when scores for the plurality of candidate driving plans are calculated using a plurality of different processors, the computing device100may collect the scores for the plurality of candidate driving plans calculated by the plurality of different processors, select the candidate driving plan with the highest score by comparing the collected scores for the plurality of candidate driving plans, and finalize a driving plan representing a route and a stop location of the autonomous vehicle10and a driving method (e.g., a speed profile) to the stop location using the selected candidate driving plan. In various embodiments, when the candidate driving plan with the highest score is a driving plan for continuously traveling without stopping, the computing device100may not finalize a stop location or may determine that a stop location does “not exist.” Subsequently, the computing device100may determine a control command for controlling the autonomous vehicle10according to the finalized driving plan including the stop location of the autonomous vehicle10and the driving method to the stop location and provide the determined control command to the autonomous vehicle10(or the control module provided in the autonomous vehicle10). In various embodiments, the computing device100may provide information on the finalized driving plan including the stop location of the autonomous vehicle10and the driving method to the stop location to an external autonomous vehicle control server, receive a control command from the autonomous vehicle control server in response to the information on the finalized driving plan including the stop location of the autonomous vehicle10and the driving method to the stop location, and transmit the received control command to the autonomous vehicle10(or the control module provided in the autonomous vehicle10). In various embodiments, the computing device100may receive a target stop location of the autonomous vehicle10from the user (e.g., receives the target stop location from the driver or passenger through the user terminal200separately provided in the autonomous vehicle10), transmit information on the received target stop location to a server (e.g., an office or situation room server for remotely controlling the autonomous vehicle10), and obtain a control command from the server. After scores are calculated for the target stop location and the driving plan including the target stop location and the driving method to the target stop location on the basis of a preset speed profile, the control command may be determined according to the calculated scores. Subsequently, the computing device100may control the autonomous vehicle10according to the driving plan such that the autonomous vehicle10may stop at the target stop location according to the control command or may provide the control command to the autonomous vehicle10(or the control module provided in the autonomous vehicle10) such that the autonomous vehicle10may be controlled. In various embodiments, the computing device100may provide guide information of the driving plan finalized through the above procedure. 
For example, the computing device100may provide information on the driving plan including information on the finalized route, the finalized stop location, and the finalized driving method through a display provided in the autonomous vehicle10such that the driver or passenger of the autonomous vehicle10may be aware of the finalized stop location and how to drive to the finalized stop location. Also, the computing device100may provide information on the driving plan including the finalized route, the stop location, and the driving method to the stop location to another vehicle adjacent to the autonomous vehicle10through vehicle-to-vehicle communication and thereby prevent a driver or passenger of the other vehicle adjacent to the autonomous vehicle10from suffering from inconvenience or prevent the occurrence of a dangerous situation. In addition, the computing device100may display the finalized stop location on a road on which the autonomous vehicle10is traveling through a location display module (e.g., a laser pointer) provided in the autonomous vehicle10and thereby providing a guide such that the driver or passenger of the other vehicle adjacent to the autonomous vehicle10may recognize where the autonomous vehicle10will stop. However, a method of providing the guide to the stop location is not limited thereto. In various embodiments, the computing device100may control the autonomous vehicle10according to a driving plan including a finalized route, stop location, and driving method to the stop location. In other words, the computing device100may repeatedly perform a process (operation S110to operation S140) of setting a driving plan including a route, a stop location, and a driving method to the stop location for the autonomous vehicle10every time the location of the autonomous vehicle10is changed by a certain distance or every certain time (e.g., 50 ms) and continuously update a driving plan including a finalized route, stop location, and driving method to the stop location on the basis of a result of the process. In this way, even in a road environment in which surroundings rapidly change, it is possible to set an optimized route, a stop location, and a driving method to the stop location. According to various embodiments of the present invention, a stop location of an autonomous vehicle is determined in consideration of surrounding information of the autonomous vehicle. Here, scores for a plurality of candidate routes and candidate stop locations on the plurality of candidate routes are calculated using a plurality of preset speed profiles, and a stop location of the autonomous vehicle is determined on the basis of the calculated scores. Accordingly, it is possible to determine an optimal stop location at which the autonomous vehicle will stop. Effects of the present invention are not limited to that described above, and other effects which have not been described above will be clearly understood by those of ordinary skill in the art from the detailed description. The method of controlling stop of an autonomous vehicle using a speed profile has been described above with reference to the flowchart shown in the drawing. For brief description, the method of controlling stop of an autonomous vehicle using a speed profile has been illustrated in a series of blocks. However, the present invention is not limited to the sequence of the blocks, and some blocks may be performed at the same time or in a different order than that described herein. 
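One way to tie these pieces together is a periodic replanning loop of the kind described above, in which the candidate driving plans are rescored every cycle (e.g., every 50 ms) and the highest-scoring plan is handed to the control module. The outline below is schematic only; the function names passed in are placeholders rather than an actual vehicle API, and "best plan" may simply mean continuing to travel with no stop location.

import time

REPLAN_PERIOD_S = 0.05   # e.g., replan roughly every 50 ms

def planning_loop(generate_candidate_plans, score_plan, execute_plan, should_stop):
    """Schematic loop: rebuild candidates, score them, and execute the best plan each cycle."""
    while not should_stop():
        cycle_start = time.monotonic()
        candidates = generate_candidate_plans()      # routes, stop locations, speed profiles
        if candidates:
            scored = [(score_plan(plan), plan) for plan in candidates]
            best_score, best_plan = max(scored, key=lambda pair: pair[0])
            execute_plan(best_plan)                  # may mean "keep driving, no stop location"
        # sleep out the remainder of the cycle so replanning repeats roughly every 50 ms
        elapsed = time.monotonic() - cycle_start
        time.sleep(max(0.0, REPLAN_PERIOD_S - elapsed))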
Also, a new block which has not been described herein or illustrated in the drawing may be added, or some blocks may be omitted or changed. Although the exemplary embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the technical field to which the present invention pertains will appreciate that the present invention may be embodied in other specific forms without changing the technical spirit or essential characteristics thereof. Therefore, the above-described embodiments are to be construed as illustrative and not restrictive in all aspects.
11858532 | DETAILED DESCRIPTION Various aspects will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and embodiments are for illustrative purposes and are not intended to limit the scope of the various aspects or the claims. Various embodiments provide methods executed by a processor of a semi-autonomous vehicle for recognizing vehicle-control gestures by a passenger intended for steering and speed control of an autonomous vehicle. The processor may select a passenger profile from a plurality of passenger profiles to normalize vehicle-control gestures received from the passenger. The processor may determine a vehicle action, by applying the passenger profile or using generalized parameters that may apply to anyone, to a detected vehicle-control gesture performed by the passenger, and confirm that the detected vehicle-control gesture is safe for the vehicle to perform. The processor may execute a safe vehicle command in a safe manner regardless of the emphasis used by the passenger when making the vehicle-control gesture. Various embodiments include a vehicle system configured to recognize gestures, made by a passenger, for controlling the semi-autonomous vehicle. The vehicle system may include sensors, including one or more cameras, and sensor processing capabilities (e.g., motion and/or image processing) to recognize certain passenger motions (i.e., gestures) as vehicle-control gestures made by a passenger. For example, a vehicle occupant swiping her hand in mid-air from left-to-right, pretending to turn a wheel to the right, pointing toward the right side of the vehicle, and/or giving a predefined other hand signal/movement may be designated and recognized as a vehicle-control gesture for turning right. Once the sensor processing capabilities recognize a designated vehicle-control gesture, a translation of that recognized vehicle-control gesture, representing a vehicle action, may be processed by a vehicle control unit for controlling the operation of the vehicle. The process of recognizing vehicle-control gestures may consider historical records of passenger inputs. The historical records may serve as the basis for one or more passenger profiles, which reflect characteristics of how a passenger actually executes a given vehicle-control gesture, including consistent deviations from a model gesture. Each passenger profile may be specific to a particular passenger, associated with one or more particular seats within the vehicle (e.g., the traditional driver's seat), or may be generalized for any passenger and/or any seating position. Passenger profiles may be used to confirm that a detected gesture is consistent with the passenger's typical or prior inputs. Also, the passenger profiles may be used to better recognize valid inputs and reduce the chance of an erroneous gesture detection. As used herein, the term “passenger” refers to an occupant of a vehicle, particularly a semi-autonomous vehicle, which may include one or more drivers and one or more other occupants of the vehicle. As used herein, the term “passenger profile” refers to a set of data reflecting significant features of the way one or more passengers makes vehicle-control gestures or a particular vehicle-control gesture. 
As used herein, the term “gesture” refers to a movement or pose using a part of a body, especially a hand and/or arm, to express an idea or meaning More particularly, the term “vehicle-control gesture” refers to a movement or pose that is configured to control one or more actions of a semi-autonomous or autonomous vehicle. As used herein, the terms “safe” or “safety” are used synonymously and refer to conditions/circumstances that are protected from or not significantly exposed to danger. In various embodiments, conditions/circumstances may be considered safe when damage or injury to one or more passengers, the vehicle, and/or parts of the vehicle has a probability of occurring that is less than a threshold. While operating vehicles may always involve some level or risk, a predetermined low level of risk (i.e., the safety threshold) may be considered safe (i.e., acceptable). In addition, a processor may calculate separate probabilities of occurrence of damage or injury to more than one thing (e.g., passengers, the vehicle, and/or parts of the vehicle), as well as more than one type of damage or injury to those things. Also, the same or different safety thresholds may be applied to the calculation of risk for each of those things. As used herein, the term “computing device” refers to an electronic device equipped with at least a processor, communication systems, and memory configured with a contact database. For example, computing devices may include any one or all of cellular telephones, smartphones, portable computing devices, personal or mobile multi-media players, laptop computers, tablet computers, 2-in-1 laptop/table computers, smartbooks, ultrabooks, palmtop computers, wireless electronic mail receivers, multimedia Internet-enabled cellular telephones, wearable devices including smart watches, entertainment devices (e.g., wireless gaming controllers, music and video players, satellite radios, etc.), and similar electronic devices that include a memory, wireless communication components and a programmable processor. In various embodiments, computing devices may be configured with memory and/or storage. Additionally, computing devices referred to in various example embodiments may be coupled to or include wired or wireless communication capabilities implementing various embodiments, such as network transceiver(s) and antenna(s) configured to communicate with wireless communication networks. The term “system-on-chip” (SOC) is used herein to refer to a set of interconnected electronic circuits typically, but not exclusively, including one or more processors, a memory, and a communication interface. The SOC may include a variety of different types of processors and processor cores, such as a general purpose processor, a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), an accelerated processing unit (APU), a sub-system processor, an auxiliary processor, a single-core processor, and a multicore processor. The SOC may further embody other hardware and hardware combinations, such as a field programmable gate array (FPGA), a configuration and status register (CSR), an application-specific integrated circuit (ASIC), other programmable logic device, discrete gate logic, transistor logic, registers, performance monitoring hardware, watchdog hardware, counters, and time references. 
SOCs may be integrated circuits (ICs) configured such that the components of the ICs reside on the same substrate, such as a single piece of semiconductor material (e.g., silicon, etc.). Autonomous and semi-autonomous vehicles, such as cars and, trucks, buses, etc., are becoming a reality on roads. Also, autonomous and semi-autonomous vehicles typically include a plurality of sensors, including cameras, radar, and lidar, that collect information about the environment surrounding the vehicle. For example, such collected information may enable the vehicle to recognize the roadway, identify objects to avoid, and track the movement and future position of other vehicles to enable partial or full autonomous navigation. In accordance with various embodiments, the use of such sensors may be extended to the inside of the vehicle to detect vehicle-control gestures for passengers to control actions of the vehicle without the use of a traditional steering wheel, joystick, or similar direct mechanical control. Various embodiments may be implemented within a variety of semi-autonomous vehicles that include a gesture detection system, of which an example vehicle gesture detection system50is illustrated inFIGS.1A and1B. With reference toFIGS.1A and1B, a vehicle gesture detection system50may include one or more sensor(s)101, a profile database142, and a gesture recognition engine144, which are coupled to and/or work with a control unit140of a vehicle100. The vehicle gesture detection system50may be configured to detect predefined gestures61,62performed by passengers11,12within the vehicle100. By using gestures, one or more passengers may provide a control input to a processor of the vehicle100without having to physically touch a steering wheel or other mechanical controller. The sensor(s)101may include motion sensors, proximity sensors, cameras, microphones, impact sensors, radar, lidar, satellite geo-positing system receivers, tire pressure sensors, and the like, which may be configured to capture images or other characteristics of motions and/or poses performed by passengers. The motions and/or poses may include all motions and/or poses, including both predefined gestures associated with one or more vehicle-control gestures, as well as other movements or poses not associated with vehicle controls. In this way, the sensor(s)101need not discriminate between types of movements or poses, but may convey data regarding the captured images or other characteristics to the gesture recognition engine144for analysis therein. The profile database142may maintain a plurality of passenger profiles customized to normalize vehicle-control gestures for specific passengers, groups of passengers, and/or passengers in specific seats within the vehicle100. The profile database142may be configured to convey data regarding one or more of the plurality of passenger profiles to the gesture recognition engine144for analysis therein. A passenger profile may include data relating to how a specific person makes gestures and/or particular gestures that person has elected to use. Thus, the profile database142may include different passenger profiles designated for different people. Each specific person may be identified based on an input from the passenger or recognition of the identity of the passenger through the sensor(s)101. In this way, the input from the passenger may provide an identity and/or a personalized passenger profile, which may be added or updated in the profile database142. 
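A profile lookup of the kind maintained by the profile database 142 could be as simple as the sketch below, which falls back from an identified passenger to a seat-designated profile and finally to a generalized default profile. The class name, keys, and profile contents are hypothetical and shown only to illustrate the selection order described here.

DEFAULT_PROFILE = {"gain": 1.0, "preferred_gestures": "standard"}

class ProfileDatabase:
    def __init__(self):
        self.by_passenger = {}   # passenger_id -> customized profile
        self.by_seat = {}        # seat name (e.g., "driver") -> seat-designated profile

    def add_or_update(self, passenger_id, profile):
        self.by_passenger[passenger_id] = profile

    def select(self, passenger_id=None, seat=None):
        # Prefer a profile customized to the identified passenger, then a profile
        # designated for the seat, and finally a generalized default profile.
        if passenger_id in self.by_passenger:
            return self.by_passenger[passenger_id]
        if seat in self.by_seat:
            return self.by_seat[seat]
        return DEFAULT_PROFILE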
The input from the passenger may come from a user interface associated with the vehicle100, a personal computing device of the passenger (e.g., smartphone), and/or another input device. Thus, the passenger profile may be selected based on the identity of the passenger and/or may be specific to a unique individual. Also, the passenger profile may be customized to the identified passenger from training inputs previously receiving from the passenger. Further, more than one passenger profile may be associated with a single unique individual. Thus, a person may select a particular passenger profile to suit their mood (e.g., tired, agitated, excited, etc.), the time of day, and/or other criteria. Additionally or alternatively, the plurality of passenger profiles may be customized to normalize vehicle-control gestures for passengers in a particular seat or seats within the vehicle. For example, one passenger profile may be designated for any individual sitting in a designated “driver's seat” and one or more other passenger profiles designated for any individual sitting in other seats. As a further alternative, one passenger profile may be designated for all passengers, regardless of who they are or where they sit. Similarly, the profile database142may maintain one or more default passenger profiles for use if the passenger is not identified or if no customized passenger profile has been set up for that passenger. In some implementations, the gesture recognition engine144may use neural network processing and/or artificial intelligence methods to determine whether the movements and/or poses captured by the sensor(s)101match predefined gestures associated with vehicle control. In addition, the gesture recognition engine144may use the data from the passenger profile received from the profile database142to determine which vehicle action was intended by the motion(s) and/or pose(s) captured by the sensor(s)101. Thus, considering the movements or poses captured by the sensor(s)101and the passenger profile conveyed from the profile database142, the gesture recognition engine144may convey, to the control unit140, an instruction or data for the control unit140to operate the vehicle100in a way that implements a designated vehicle action. In various implementations, before initiating any vehicle action(s), the control unit140may ensure the designated vehicle action is safe for the vehicle100and occupants (e.g., passengers11,12). Some vehicle actions can vary in amount or magnitude of change, such as acceleration, deceleration, and lane changes. In some implementations, in addition to matching the captured movements and/or poses to predefined gestures associated with vehicle controls, the gesture recognition engine144may also assess measurable parameters (e.g., angle, distance, etc.) of the captured movements to interpret the intended vehicle control input. For ease of description, rotational angle or degree, distance or sweep of the gesture, speed of movement of the gesture, acceleration during the gesture and other measurable parameters of an observed gesture are referred to herein as the “extent” of the captured movements. The gesture recognition engine144may interpret the extent of a captured movement to determine a magnitude of the vehicle movement intended by the person making the gesture. For example, a passenger could swipe her hand through the air in a 15-degree arch to convey a command to change lanes, but only one lane, in the direction of the swipe. 
A similar swipe spanning 45 degrees or more may signify a command to change two lanes in the direction of the swipe. In this way, the extent of the captured movements and/or poses may correspond to the amount or magnitude of the resulting vehicle action. In some embodiments, the extent of the captured movements and/or poses that is considered by the gesture recognition engine 144 may vary by type of action. For example, the extent (e.g., degree or amount of movement) in a gesture for a vehicle turn may be different from the degree or amount of movement for a gesture that makes the vehicle stop or slow down. In some embodiments, the interpretation of passenger movements as vehicle-control gestures, and particularly the interpretation of the extent of detected gestures for determining intended vehicle actions or commands, may vary from passenger to passenger, and may be saved in passenger profiles or historical data, reflected in training data, etc. For example, the extent of a gesture may vary with the size of the individual. Also, the extent of a gesture may vary with personality, as some people may be more expressive or exaggerate gestures, while others may make small gestures. In some embodiments, the amount of vehicle redirection or speed change corresponding to a passenger gesture may depend, linearly or non-linearly, on the extent of the captured movements. For example, a 5-degree movement of a finger, hand, or arm may cause the vehicle to make (i.e., correspond to) a single-lane change, a 15-degree movement may correspond to a two-lane change (i.e., since this is a more drastic maneuver), and a 60-degree movement may correspond to a three-lane change (i.e., even more drastic). In some embodiments, the determined extent of the captured movements may be capped (i.e., not interpreted to extend beyond a maximum degree, distance, or speed) for purposes of determining a corresponding commanded vehicle redirection or speed change. In some embodiments, such a cap or maximum may be imposed on the interpretation of gestures at all times, only under certain circumstances, or only for certain types of gestures. For example, while 5-degree, 10-degree, and 15-degree movements may correspond to a one-, two-, and three-lane change, respectively, the gesture recognition engine 144 may not interpret a 20-degree turn as indicating a four-lane change if the lane-change gesture is capped at 15-degree movements. In some embodiments in which the extent of captured movements and/or poses is taken into account, measured movements may be rounded (down/up) into particular extent increments appropriate for a particular gesture. For example, if a 5-degree hand movement is associated with a single-lane change and a 10-degree hand movement is associated with a two-lane change, then a 7-degree hand movement may be associated with only the single-lane change. In some embodiments, the safer choice of the single-lane change or the two-lane change will be selected. Alternatively or additionally, the passenger may provide some added indication that a partial lane change is intended (i.e., so the vehicle ends up straddling two lanes), such as to indicate that the 7-degree hand movement should be interpreted as a one-and-a-half-lane change. In some embodiments, passenger inputs other than movements may be interpreted as vehicle-control gestures by the gesture recognition engine 144, either as alternatives or in addition to movement gestures.
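A minimal sketch of such an extent-to-magnitude mapping is shown below, using the illustrative 5-, 15-, and 60-degree thresholds mentioned earlier; the threshold table and cap value are assumptions, not fixed parameters of the embodiments. Rounding down falls out of the threshold comparison, and the optional cap keeps very large swipes from being read as ever larger maneuvers.

LANE_CHANGE_THRESHOLDS = [(60.0, 3), (15.0, 2), (5.0, 1)]   # degrees -> lanes (illustrative)

def lanes_from_swipe(angle_deg, cap_deg=None):
    """Map the extent of a swipe gesture to a number of lanes to change (0 if too small)."""
    if cap_deg is not None:
        angle_deg = min(angle_deg, cap_deg)   # extents beyond the cap are not interpreted further
    for threshold, lanes in LANE_CHANGE_THRESHOLDS:
        if angle_deg >= threshold:
            return lanes        # rounding down: e.g., a 7-degree swipe still maps to a single lane
    return 0

With these illustrative thresholds, lanes_from_swipe(7.0) returns 1, and lanes_from_swipe(20.0, cap_deg=15.0) returns 2 rather than being read as a larger maneuver.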
For example, the gesture recognition engine144may recognize (but is not limited to) voice commands, touch controls on any touch-sensitive surface (e.g., on a steering wheel, touch-screen, arm rest, seat belt, etc.), and/or remote controller inputs, such as inputs on a mobile device app, gamepad controller, a detachable steering wheel, or other remote controller that can be passed to whichever passenger wants to input a command. In some embodiments, the gestures or other commands may be received from a party outside of the vehicle (e.g., via radio module172). Thus, the gesture recognition engine144may recognize such remote commands for controlling the vehicle. Alternatively, a device (e.g., a wireless communication device190) may be configured with a gesture recognition engine144for recognizing gestures or commands for controlling the vehicle and then sending corresponding commands to the vehicle for execution. For example, a vehicle owner (who is remote from the vehicle) can provide a gesture or command to cause the vehicle to change lanes, change speed, etc. As another example, a passenger may exit the vehicle with a detachable steering wheel (or other input device) to control the vehicle from outside the vehicle (e.g., to provide a different view while parking in a tight space). In some embodiments, a passenger may perform a sequence of vehicle-control gestures for the vehicle to execute a series of corresponding maneuvers. In some embodiments, the gesture recognition engine144may indicate (e.g., with a tone, light, vibration, etc.) that a gesture has been recognized, and the passenger may wait for one gesture to be acknowledged or executed before performing the next gesture. In some embodiments, the gesture recognition engine144or the control unit140may cause the vehicle to perform the series of maneuvers in the order that they were presented. Alternatively, the gesture recognition engine144or the control unit140may cause the vehicle to perform each maneuver when each maneuver becomes safe to do or after receiving further passenger input (e.g., following a prompt). Additionally, or as a further alternative, the gesture recognition engine144may prompt the user before performing a vehicle action, such as one of a series of actions, if before executing the vehicle action, the vehicle action becomes unsafe to execute. For example, if a passenger performs a series of vehicle-control gestures designed to slow down and then make a turn, the vehicle control unit140may not perform the turn if after slowing down the vehicle is blocked from making the turn. With reference toFIGS.1A and1B, the first passenger11is illustrated as performing a first vehicle-control gesture61and the second passenger12is illustrated as performing a second vehicle-control gesture62, respectively. The first vehicle-control gesture61involves the first passenger11moving or swiping an open-palm hand in the air from left to right, following an arched angle of movement. In this instance, the sensor(s)101detect a thirty-degree (30°) swipe from left to right (i.e., “30° Swipe Right”). The second vehicle-control gesture62involves the second passenger12moving or swiping a finger-pointing hand in the air from left to right, following an arched angle of movement. In contrast to the first vehicle-control gesture61, which uses an open hand and only extends across a30-degree angle, the second vehicle-control gesture62uses a partially closed hand with a pointing finger that extends across a 90-degree angle. 
However, the gesture recognition engine144, after normalizing each of the first and second vehicle-control gestures61,62using the first and second passenger profiles A, B, may determine that both passengers11,12want the vehicle to change lanes to the right. Thus, in both instances, the gesture recognition engine144may output the same vehicle action to the control unit140to change lanes; to the right lane. The gesture recognition engine144may output many different types and degrees of vehicle action to the control unit140for operating the vehicle100to implement a vehicle action associated with the recognized gesture61,62. Various embodiments may be implemented within a variety of vehicles, an example vehicle100of which is illustrated inFIGS.2A and2B. With reference toFIGS.2A and2B, the vehicle100may include a control unit140and a plurality of sensors. InFIGS.2A and2B, the plurality of sensors described generally with regard toFIG.1are described separately. In particular, the plurality of sensors may include occupancy sensors102,104,106,108,110(e.g., motion and/or proximity sensors), cameras112,114, microphones116,118, impact sensors120, radar122, lidar124, satellite geo-positioning system receivers126, tire pressure sensors128. The plurality of sensors102-128, disposed in or on the vehicle, may be used for various purposes, such as vehicle gesture detection and/or autonomous and semi-autonomous navigation and control, crash avoidance, position determination, vehicle-control gesture detection, etc., as well providing sensor data regarding objects and people in or near the vehicle100. The sensors102-128may include one or more of a wide variety of sensors capable of detecting a variety of information useful for receiving input about environments inside and outside the vehicle100, as well as navigation control and collision avoidance. Each of the sensors102-128may be in wired or wireless communication with the control unit140, as well as with each other. In particular, the sensors102-128may include one or more cameras112,114, which include or consist of other optical, photo optic, and/or motion sensors. The sensors102-128may further include other types of object detection and ranging sensors, such as radar122, lidar124, IR sensors, and ultrasonic sensors. The sensors102-128may further include tire pressure sensors128, humidity sensors, temperature sensors, satellite geo-positioning system receivers126, accelerometers, vibration sensors, gyroscopes, gravimeters, impact sensors120, force meters, stress meters, strain sensors, microphones116,118, occupancy sensors102,104,106,108,110, which may include motion and/or proximity sensors, and a variety of environmental sensors. The vehicle control unit140may be configured to operate the vehicle100based on vehicle actions interpreted from vehicle-control gestures by one or more passengers11,12,13and a passenger profile in accordance with various embodiments. Additionally, the control unit140may have a default setting for one or more passenger profiles. For example, based on the currently loaded passenger profile, the default setting may cause the vehicle to operate more or less smoothly, efficiently, quickly, slowly, etc. The default setting may be followed, for example, when the control unit140does not recognize the passenger (i.e., no passenger profile match). Alternatively, when the control unit140does not recognize the passenger, a new profile may be created for that unknown person, for example, based on that person's actions (e.g., gestures/poses). 
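Continuing the example of FIGS. 1A and 1B, normalization by a passenger profile might amount to little more than scaling the observed extent by a per-passenger reference before mapping it to a vehicle action, so that the 30-degree open-palm swipe of the first passenger 11 and the 90-degree pointing swipe of the second passenger 12 both resolve to the same one-lane change to the right. The profile contents and threshold below are invented purely for illustration.

PROFILE_A = {"full_swipe_deg": 30.0}    # passenger 11 tends to gesture compactly
PROFILE_B = {"full_swipe_deg": 90.0}    # passenger 12 tends to gesture expansively

def normalize_gesture(angle_deg, direction, profile):
    """Scale a swipe by the passenger's typical full swipe and map it to a vehicle action."""
    normalized = angle_deg / profile["full_swipe_deg"]   # 1.0 == this passenger's full swipe
    lanes = 1 if normalized >= 0.5 else 0
    return {"action": "lane_change", "lanes": lanes, "direction": direction} if lanes else None

# Both detected gestures resolve to the same vehicle action: a single lane change to the right.
assert normalize_gesture(30.0, "right", PROFILE_A) == normalize_gesture(90.0, "right", PROFILE_B)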
The vehicle control unit140may be configured with processor-executable instructions to perform various embodiments using information received from various sensors, particularly the cameras112,114, microphones116,118, occupancy sensors102,104,106,108,110. In some embodiments, the control unit140may supplement the processing of camera images using distance and relative position (e.g., relative bearing angle) that may be obtained from radar122, lidar124, and/or other sensors. The control unit140may further be configured to control the direction, braking, speed, acceleration/deceleration, and/or the like of the vehicle100when operating in an autonomous or semi-autonomous mode using information regarding other vehicles and/or vehicle-control gestures from passengers determined using various embodiments. FIG.3is a component block diagram illustrating a system300of components and support systems suitable for implementing various embodiments. With reference toFIGS.1A-3, the vehicle100may include the control unit140, which may include various circuits and devices used to control the operation of the vehicle100. In the example illustrated inFIG.3, the control unit140includes a processor164, memory166, an input module168, an output module170and a radio module172. The control unit140may be coupled to and configured to control drive control components154, navigation components156, and one or more sensors101of the vehicle100. As used herein, the terms “component,” “system,” “unit,” “module,” and the like include a computer-related entity, such as, but not limited to, hardware, firmware, a combination of hardware and software, software, or software in execution, which are configured to perform particular operations or functions. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a communication device and the communication device may be referred to as a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one processor or core and/or distributed between two or more processors or cores. In addition, these components may execute from various non-transitory computer readable media having various instructions and/or data structures stored thereon. Components may communicate by way of local and/or remote processes, function or procedure calls, electronic signals, data packets, memory read/writes, and other known computer, processor, and/or process related communication methodologies. The control unit140may include a processor164that may be configured with processor-executable instructions to determine vehicle-control gestures and/or alternatives thereto, and control maneuvering, navigation, and/or other operations of the vehicle100, including operations of various embodiments. The processor164may be coupled to the memory166. The control unit162may include the input module168, the output module170, and the radio module172. The radio module172may be configured for wireless communication. The radio module172may exchange signals182(e.g., command signals for controlling maneuvering, signals from navigation facilities, etc.) with a communication network180. The radio module172may provide the signals182to the processor164and/or the navigation components156. 
In some embodiments, the radio module172may enable the vehicle100to communicate with a wireless communication device190through a wireless communication link187. The wireless communication link187may be a bidirectional or unidirectional communication link. The wireless communication link187may use one or more communication protocols. In some embodiments, the radio module172may enable the vehicle100to communicate with another vehicle through a wireless communication link192. The wireless communication link192may be a bidirectional or unidirectional communication link and the wireless communication link192may use one or more communication protocols. The input module168may receive sensor data from one or more vehicle sensors101as well as electronic signals from other components, including the drive control components154and the navigation components156. The output module170may be used to communicate with or activate various components of the vehicle100, including the drive control components154, the navigation components156, and the sensor(s)101. The control unit140may be coupled to the drive control components154to control physical elements of the vehicle100related to maneuvering and navigation of the vehicle, such as (but not limited to) the engine, motors, throttles, directing elements, flight control elements, braking or deceleration elements, and the like. The drive control components154may also or alternatively include components that control other devices of the vehicle, including (but not limited to) environmental controls (e.g., air conditioning and heating), external and/or interior lighting, interior and/or exterior informational displays (which may include a display screen or other devices to display information), safety devices (e.g., haptic devices, audible alarms, etc.), and other similar devices. The control unit140may be coupled to the navigation components156and may receive data from the navigation components156and be configured to use such data to determine the present position and orientation of the vehicle100. In various embodiments, the navigation components156may include or be coupled to a global navigation satellite system (GNSS) receiver system (e.g., one or more Global Positioning System (GPS) receivers) enabling the vehicle100to determine its current position using GNSS signals. Alternatively, or in addition, the navigation components156may include radio navigation receivers for receiving navigation beacons or other signals from radio nodes, such as Wi-Fi access points, cellular network sites, radio station, remote computing devices, other vehicles, etc. Through control of the drive control components154, the processor164may control the vehicle100to navigate and maneuver. The processor164and/or the navigation components156may be configured to communicate with a server184on a network186(e.g., the Internet) using a wireless connection signal182with a cellular data communication network180to receive commands to control maneuvering, receive data useful in navigation, provide real-time position reports, and assess other data. The control unit140may be coupled to the one or more sensors101, which may be configured to provide a variety of data to the processor164. 
While the control unit140is described as including separate components, in some embodiments some or all of the components (e.g., the processor164, the memory166, the input module168, the output module170, and the radio module172) may be integrated in a single device or module, such as a system-on-chip (SOC) processing device. Such an SOC processing device may be configured for use in vehicles and be configured, such as with processor-executable instructions executing in the processor164, to perform operations of various embodiments when installed into a vehicle. FIG.4Aillustrates an example of subsystems, computational elements, computing devices or units within a vehicle management system400, which may be utilized within a vehicle (e.g.,100). With reference toFIGS.1A-4A, in some embodiments, the various computational elements, computing devices or units within vehicle management system400may be implemented within a system of interconnected computing devices (i.e., subsystems), that communicate data and commands to each other (e.g., indicated by the arrows inFIG.4A). In other embodiments, the various computational elements, computing devices or units within vehicle management system400may be implemented within a single computing device, such as separate threads, processes, algorithms or computational elements. Therefore, each subsystem/computational element illustrated inFIG.4Ais also generally referred to herein as “layer” within a computational “stack” that constitutes the vehicle management system400. However, the use of the terms layer and stack in describing various embodiments are not intended to imply or require that the corresponding functionality is implemented within a single autonomous (or semi-autonomous) vehicle management system computing device, although that is a potential embodiment. Rather the use of the term “layer” is intended to encompass subsystems with independent processors, computational elements (e.g., threads, algorithms, subroutines, etc.) running in one or more computing devices, and combinations of subsystems and computational elements. In various embodiments, the vehicle management system400may include a radar perception layer402, a camera perception layer404, a positioning engine layer406, a map fusion and arbitration layer408, a route planning layer410, sensor fusion and road world model (RWM) management layer412, motion planning and control layer414, and behavioral planning and prediction layer416. The layers402-416are merely examples of some layers in one example configuration of the vehicle management system400. In other configurations consistent with various embodiments, one or more other layers may be included, such as (but not limited to) additional layers for other perception sensors (e.g., camera perception layer, etc.), additional layers for planning and/or control, additional layers for modeling, etc., and/or certain of the layers402-416may be excluded from the vehicle management system400. Each of the layers402-416may exchange data, computational results, and commands as illustrated by the arrows inFIG.4A. Further, the vehicle management system400may receive and process data from sensors (e.g., radar, lidar, cameras, inertial measurement units (IMU) etc.), navigation systems (e.g., GPS receivers, IMUs, etc.), vehicle networks (e.g., Controller Area Network (CAN) bus), and databases in memory (e.g., digital map data). 
The vehicle management system400may output vehicle control commands or signals to the drive-by-wire (DBW) system/control unit420, which is a system, subsystem, or computing device that interfaces directly with the vehicle direction, throttle, and brake controls. The configuration of the vehicle management system400and DBW system/control unit420illustrated inFIG.4Ais merely an example configuration and other configurations of a vehicle management system and other vehicle components may be used in the various embodiments. As an example, the configuration of the vehicle management system400and DBW system/control unit420illustrated inFIG.4Amay be used in a vehicle configured for autonomous or semi-autonomous operation while a different configuration may be used in a non-autonomous vehicle. The radar perception layer402may receive data from one or more detection and ranging sensors, such as radar (e.g.,122) and/or lidar (e.g.,124). The radar perception layer402may process the data to recognize and determine locations of other vehicles and objects within a vicinity of the vehicle100. In some implementations, the radar perception layer402may include the use of neural network processing and artificial intelligence methods to recognize objects and vehicles. The radar perception layer402may pass such information on to the sensor fusion and RWM management layer412. The camera perception layer404may receive data from one or more cameras, such as cameras (e.g.,112,114), and process the data to detect vehicle-control gestures, as well as recognize and determine locations of other vehicles and objects within a vicinity of the vehicle100. In some implementations, the camera perception layer404may include use of neural network processing and artificial intelligence methods to recognize objects and vehicles, and pass such information on to the sensor fusion and RWM management layer412. The positioning engine layer406may receive data from various sensors and process the data to determine a position of the vehicle100. The various sensors may include, but is not limited to, GPS sensor, an IMU, and/or other sensors connected via a CAN bus. The positioning engine layer406may also utilize inputs from one or more cameras, such as cameras (e.g.,112,114) and/or any other available sensor, such as radar (e.g.,122), lidar (e.g.,124), etc. The map fusion and arbitration layer408may access data within a high definition (HD) map database and receive output received from the positioning engine layer406and process the data to further determine the position of the vehicle within the map, such as location within a lane of traffic, position within a street map, etc. The HD map database may be stored in a memory (e.g., memory166). For example, the map fusion and arbitration layer408may convert latitude and longitude information from GPS into locations within a surface map of roads contained in the HD map database. GPS position fixes include errors, so the map fusion and arbitration layer408may function to determine a best-guess location of the vehicle within a roadway based upon an arbitration between the GPS coordinates and the HD map data. For example, while GPS coordinates may place the vehicle near the middle of a two-lane road in the HD map, the map fusion and arbitration layer408may determine from the direction of travel that the vehicle is most likely aligned with the travel lane consistent with the direction of travel. 
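The lane-arbitration behavior described for the map fusion and arbitration layer408can be pictured with the following sketch. The lane geometry, field names, and the simple heading test are illustrative assumptions rather than the arbitration logic actually used by the layer.

```python
from dataclasses import dataclass

@dataclass
class Lane:
    lane_id: str
    center_offset_m: float   # lateral offset of the lane center from the road centerline
    heading_deg: float       # nominal direction of travel for this lane

def arbitrate_lane(gps_lateral_offset_m: float, vehicle_heading_deg: float,
                   lanes: list[Lane]) -> Lane:
    """Pick the lane that best matches both the GPS fix and the direction of travel.

    A GPS fix near the road centerline is ambiguous, so lanes whose nominal
    heading disagrees with the vehicle's heading are penalized heavily.
    """
    def cost(lane: Lane) -> float:
        lateral_error = abs(gps_lateral_offset_m - lane.center_offset_m)
        heading_error = abs((vehicle_heading_deg - lane.heading_deg + 180) % 360 - 180)
        return lateral_error + (10.0 if heading_error > 90 else 0.0)
    return min(lanes, key=cost)

# Two-lane road: GPS places the vehicle near the middle; heading resolves the lane.
lanes = [Lane("northbound", center_offset_m=-1.8, heading_deg=0.0),
         Lane("southbound", center_offset_m=+1.8, heading_deg=180.0)]
print(arbitrate_lane(gps_lateral_offset_m=0.2, vehicle_heading_deg=3.0, lanes=lanes).lane_id)
# -> "northbound"
```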
The map fusion and arbitration layer408may pass map-based location information to the sensor fusion and RWM management layer412. The route planning layer410may utilize the HD map, as well as inputs from an operator or dispatcher to plan a route to be followed by the vehicle to a particular destination. The route planning layer410may pass map-based location information to the sensor fusion and RWM management layer412. However, the use of a prior map by other layers, such as the sensor fusion and RWM management layer412, etc., is not required. For example, other stacks may operate and/or control the vehicle based on perceptual data alone without a provided map, constructing lanes, boundaries, and the notion of a local map as perceptual data is received. The sensor fusion and RWM management layer412may receive data and outputs produced by the radar perception layer402, camera perception layer404, map fusion and arbitration layer408, and route planning layer410, and use some or all of such inputs to estimate or refine the location and state of the vehicle in relation to the road, other vehicles on the road, and other objects within a vicinity of the vehicle. For example, the sensor fusion and RWM management layer412may combine imagery data from the camera perception layer404with arbitrated map location information from the map fusion and arbitration layer408to refine the determined position of the vehicle within a lane of traffic. As another example, the sensor fusion and RWM management layer412may combine object recognition and imagery data from the camera perception layer404with object detection and ranging data from the radar perception layer402to determine and refine the relative position of other vehicles and objects in the vicinity of the vehicle. As another example, the sensor fusion and RWM management layer412may receive information from vehicle-to-vehicle (V2V) communications (such as via the CAN bus) regarding other vehicle positions and directions of travel, and combine that information with information from the radar perception layer402and the camera perception layer404to refine the locations and motions of other vehicles. The sensor fusion and RWM management layer412may output refined location and state information of the vehicle, as well as refined location and state information of other vehicles and objects in the vicinity of the vehicle, to the motion planning and control layer414and/or the behavior planning and prediction layer416. As a further example, the sensor fusion and RWM management layer412may use dynamic traffic control instructions directing the vehicle to change speed, lane, direction of travel, or other navigational element(s), and combine that information with other received information to determine refined location and state information. The sensor fusion and RWM management layer412may output the refined location and state information of the vehicle, as well as refined location and state information of other vehicles and objects in the vicinity of the vehicle, to the motion planning and control layer414, the behavior planning and prediction layer416and/or devices remote from the vehicle, such as a data server, other vehicles, etc., via wireless communications, such as through C-V2X connections, other wireless connections, etc. 
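One generic way to picture the refinement performed by the sensor fusion and RWM management layer412is an inverse-variance weighted combination of independent measurements; the weighting below is a textbook illustration under that assumption, not the layer's actual fusion algorithm.

```python
def fuse_estimates(estimates: list[tuple[float, float]]) -> tuple[float, float]:
    """Combine (value, variance) pairs by inverse-variance weighting.

    Lower-variance (more trusted) measurements pull the fused value harder.
    Returns the fused value and its variance.
    """
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    return fused, 1.0 / sum(weights)

# Longitudinal range to a lead vehicle: the camera estimate is noisier than radar here.
camera_range = (24.0, 4.0)   # metres, variance
radar_range = (22.5, 0.5)
fused_range, fused_var = fuse_estimates([camera_range, radar_range])
print(round(fused_range, 2), round(fused_var, 2))   # radar-dominated estimate
```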
As a still further example, the sensor fusion and RWM management layer412may monitor perception data from various sensors, such as perception data from a radar perception layer402, camera perception layer404, other perception layer, etc., and/or data from one or more sensors themselves to analyze conditions in the vehicle sensor data. The sensor fusion and RWM management layer412may be configured to detect conditions in the sensor data, such as sensor measurements being at, above, or below a threshold, certain types of sensor measurements occurring, etc., and may output the sensor data as part of the refined location and state information of the vehicle provided to the behavior planning and prediction layer416and/or devices remote from the vehicle, such as a data server, other vehicles, etc., via wireless communications, such as through C-V2X connections, other wireless connections, etc. The refined location and state information may include vehicle descriptors associated with the vehicle and the vehicle owner and/or operator, such as: vehicle specifications (e.g., size, weight, color, on board sensor types, etc.); vehicle position, speed, acceleration, direction of travel, attitude, orientation, destination, fuel/power level(s), and other state information; vehicle emergency status (e.g., is the vehicle an emergency vehicle or private individual in an emergency); vehicle restrictions (e.g., heavy/wide load, turning restrictions, high occupancy vehicle (HOV) authorization, etc.); capabilities (e.g., all-wheel drive, four-wheel drive, snow tires, chains, connection types supported, on board sensor operating statuses, on board sensor resolution levels, etc.) of the vehicle; equipment problems (e.g., low tire pressure, weak brakes, sensor outages, etc.); owner/operator travel preferences (e.g., preferred lane, roads, routes, and/or destinations, preference to avoid tolls or highways, preference for the fastest route, etc.); permissions to provide sensor data to a data agency server (e.g.,184); and/or owner/operator identification information. The behavioral planning and prediction layer416of the autonomous vehicle management system400may use the refined location and state information of the vehicle and location and state information of other vehicles and objects output from the sensor fusion and RWM management layer412to predict future behaviors of other vehicles and/or objects. For example, the behavioral planning and prediction layer416may use such information to predict future relative positions of other vehicles in the vicinity of the vehicle based on own vehicle position and velocity and other vehicle positions and velocity. Such predictions may take into account information from the HD map and route planning to anticipate changes in relative vehicle positions as host and other vehicles follow the roadway. The behavioral planning and prediction layer416may output other vehicle and object behavior and location predictions to the motion planning and control layer414. Additionally, the behavior planning and prediction layer416may use object behavior in combination with location predictions to plan and generate control signals for controlling the motion of the vehicle. 
For example, based on route planning information, refined location in the roadway information, and relative locations and motions of other vehicles, the behavior planning and prediction layer416may determine that the vehicle needs to change lanes and accelerate, such as to maintain or achieve minimum spacing from other vehicles, and/or prepare for a turn or exit. As a result, the behavior planning and prediction layer416may calculate or otherwise determine a steering angle for the wheels and a change to the throttle setting to be commanded to the motion planning and control layer414and DBW system/control unit420along with such various parameters necessary to effectuate such a lane change and acceleration. One such parameter may be a computed steering wheel command angle. The motion planning and control layer414may receive data and information outputs from the sensor fusion and RWM management layer412and other vehicle and object behavior as well as location predictions from the behavior planning and prediction layer416. The motion planning and control layer414may use (at least some of) this information to plan and generate control signals for controlling the motion of the vehicle and to verify that such control signals meet safety requirements for the vehicle. For example, based on route planning information, refined location in the roadway information, and relative locations and motions of other vehicles, the motion planning and control layer414may verify and pass various control commands or instructions to the DBW system/control unit420. The DBW system/control unit420may receive the commands or instructions from the motion planning and control layer414and translate such information into mechanical control signals for controlling wheel angle, brake and throttle of the vehicle. For example, DBW system/control unit420may respond to the computed steering wheel command angle by sending corresponding control signals to the steering wheel controller. In various embodiments, the vehicle management system400may include functionality that performs safety checks or oversight of various commands, planning or other decisions of various layers that could impact vehicle and occupant safety. Such safety checks or oversight functionality may be implemented within a dedicated layer or distributed among various layers and included as part of the functionality. In some embodiments, a variety of safety parameters may be stored in memory and the safety checks or oversight functionality may compare a determined value (e.g., relative spacing to a nearby vehicle, distance from the roadway centerline, etc.) to corresponding safety parameter(s), and issue a warning or command if the safety parameter is or will be violated. For example, a safety or oversight function in the behavior planning and prediction layer416(or in a separate layer) may determine the current or future separate distance between another vehicle (as defined by the sensor fusion and RWM management layer412) and the vehicle (e.g., based on the world model refined by the sensor fusion and RWM management layer412), compare that separation distance to a safe separation distance parameter stored in memory, and issue instructions to the motion planning and control layer414to speed up, slow down or turn if the current or predicted separation distance violates the safe separation distance parameter. 
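The separation-distance oversight described above can be sketched as a simple predicted-gap check. The constant-velocity prediction, the time horizon, and the command strings are assumptions made for illustration only.

```python
def check_separation(own_speed_mps: float, lead_speed_mps: float,
                     current_gap_m: float, safe_gap_m: float,
                     horizon_s: float = 3.0) -> str:
    """Compare the current and predicted gap to a stored safety parameter.

    Uses a constant-velocity prediction over a short horizon and returns an
    instruction intended for the motion planning and control layer.
    """
    predicted_gap = current_gap_m + (lead_speed_mps - own_speed_mps) * horizon_s
    if current_gap_m < safe_gap_m or predicted_gap < safe_gap_m:
        return "SLOW_DOWN"
    return "OK"

print(check_separation(own_speed_mps=30.0, lead_speed_mps=27.0,
                       current_gap_m=40.0, safe_gap_m=35.0))  # -> "SLOW_DOWN"
```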
As another example, safety or oversight functionality in the motion planning and control layer414(or a separate layer) may compare a determined or commanded steering wheel command angle to a safe wheel angle limit or parameter, and issue an override command and/or alarm in response to the commanded angle exceeding the safe wheel angle limit. According to various implementations, some safety parameters stored in memory may be static (i.e., unchanging over time), such as maximum vehicle speed. For example, operating a vehicle over 120 mph (or some other value that may be set by the manufacturer, vehicle owner, etc.) may be considered dangerous and therefore the vehicle (e.g., gesture recognition engine144and/or control unit140) would impose maximum safety parameters related to speed. With autonomous vehicles, operating at speeds in excess of 120 mph may still be safe, thus a higher value could be used. Alternatively, some safety parameters (e.g., max speed) may be dynamic and change depending on location, driving conditions (e.g., traffic volume), and/or external inputs (e.g., signals from an intelligent transportation system). For example, a maximum speed may have one value for driving in densely populated areas or more confined roadways (i.e., city driving) and another value for driving in less populated areas or roadways with many lanes (i.e., highway driving). Other safety parameters, such as proximity to objects (e.g., other vehicles), people, creatures, or other elements, may be used. In addition, other safety parameters stored in memory may be dynamic in that the parameters are determined or updated continuously or periodically based on vehicle state information and/or environmental conditions. Non-limiting examples of safety parameters may include maximum speed, minimum speed, maximum deceleration (e.g., braking speed), maximum acceleration, and maximum wheel angle limit Any or all of these may be a function of occupant(s), roadway, and/or weather conditions. For example, if an occupant is sleeping, inebriated, or distracted (e.g., reading, not facing/looking forward, etc.) that occupant may be less likely to give an appropriate command and/or not be able to properly navigate the vehicle as compared to other times when the occupant is alert and not distracted for instance. FIG.4Billustrates an example of subsystems, computational elements, computing devices or units within a vehicle management system450, which may be utilized within a vehicle (e.g.,100). With reference toFIGS.1A-4B, in some embodiments, the layers402,404,406,408,410,412, and416of the vehicle management system400may be similar to those described with reference toFIG.4A. The vehicle management system450may operate similar to the vehicle management system400, except that the vehicle management system450may pass various data or instructions to a vehicle safety and crash avoidance system452rather than the DBW system/control unit420. For example, the configuration of the vehicle management system450and the vehicle safety and crash avoidance system452illustrated inFIG.4Bmay be used in a non-autonomous, semi-autonomous, or fully autonomous vehicle. In addition, the functions of the vehicle management system450and/or the vehicle safety and crash avoidance system452may be reduced or disabled (e.g., turned off). In various embodiments, the behavioral planning and prediction layer416and/or sensor fusion and RWM management layer412may output data to the vehicle safety and crash avoidance system452. 
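A dynamic safety parameter of the kind described above might be selected as in the following sketch; the area categories, scaling for an inattentive occupant, and specific speed values are illustrative assumptions, not values prescribed by this description.

```python
def max_speed_limit_mph(area: str, occupant_alert: bool,
                        static_cap_mph: float = 120.0) -> float:
    """Pick a dynamic maximum-speed safety parameter, never exceeding the static cap.

    The "city"/"highway" categories and the numeric values are assumptions for
    this example.
    """
    base = 35.0 if area == "city" else 75.0
    if not occupant_alert:           # e.g., sleeping, inebriated, or distracted occupant
        base *= 0.8                  # tighten the limit
    return min(base, static_cap_mph)

print(max_speed_limit_mph("highway", occupant_alert=True))    # 75.0
print(max_speed_limit_mph("highway", occupant_alert=False))   # 60.0
```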
For example, the sensor fusion and RWM management layer412may output sensor data as part of refined location and state information of the vehicle100provided to the vehicle safety and crash avoidance system452. The vehicle safety and crash avoidance system452may use the refined location and state information of the vehicle100to make safety determinations relative to the vehicle100and/or occupants of the vehicle100. As another example, the behavioral planning and prediction layer416may output behavior models and/or predictions related to the motion of other vehicles to the vehicle safety and crash avoidance system452. The vehicle safety and crash avoidance system452may use the behavior models and/or predictions related to the motion of other vehicles to make safety determinations relative to the vehicle100and/or occupants (e.g.,11,12,13) of the vehicle100. In various embodiments, the vehicle safety and crash avoidance system452may include functionality that performs safety checks or oversight of various commands, planning, or other decisions of various layers, as well as human driver actions and/or vehicle-control gestures, that could impact vehicle and occupant safety. In some embodiments, a variety of safety parameters may be stored in memory and the vehicle safety and crash avoidance system452may compare a determined value (e.g., relative spacing to a nearby vehicle, distance from the roadway centerline, etc.) to corresponding safety parameter(s), and issue a warning, command, or a safe alternative vehicle action, if the safety parameter is or will be violated. For example, a vehicle safety and crash avoidance system452may determine the current or future separation distance between another vehicle (as defined by the sensor fusion and RWM management layer412) and the vehicle (e.g., based on the road world model refined by the sensor fusion and RWM management layer412), compare that separation distance to a safe separation distance parameter stored in memory, and issue instructions and/or propose a safe alternative vehicle action to a driver for speeding up, slowing down, or turning if the current or predicted separation distance violates the safe separation distance parameter. As another example, a vehicle safety and crash avoidance system452may compare a determined first vehicle action (e.g., by applying a first passenger profile to a detected first vehicle-control gesture performed by a first passenger) to a safe vehicle action limit or parameter, and issue an override command and/or alarm in response to the proposed vehicle action exceeding the safe vehicle action limit or parameter. FIG.5illustrates an example system-on-chip (SOC) architecture of a processing device SOC500suitable for implementing various embodiments in vehicles. With reference toFIGS.1A-5, the processing device SOC500may include a number of heterogeneous processors, such as a digital signal processor (DSP)503, a modem processor504, an image and object recognition processor506, a mobile display processor507, an applications processor508, and a resource and power management (RPM) processor517. The processing device SOC500may also include one or more coprocessors510(e.g., vector co-processor) connected to one or more of the heterogeneous processors503,504,506,507,508,517. Each of the processors may include one or more cores, and an independent/internal clock. Each processor/core may perform operations independent of the other processors/cores. 
For example, the processing device SOC500may include a processor that executes a first type of operating system (e.g., FreeBSD, LINUX, OS X, etc.) and a processor that executes a second type of operating system (e.g., Microsoft Windows). In some embodiments, the applications processor508may be the SOC's500main processor, central processing unit (CPU), microprocessor unit (MPU), arithmetic logic unit (ALU), graphics processing unit (GPU), etc. The processing device SOC500may include analog circuitry and custom circuitry514for managing sensor data, analog-to-digital conversions, wireless data transmissions, and for performing other specialized operations, such as processing encoded audio and video signals for rendering in a web browser. The processing device SOC500may further include system components and resources516, such as voltage regulators, oscillators, phase-locked loops, peripheral bridges, data controllers, memory controllers, system controllers, access ports, timers, and other similar components used to support the processors and software clients (e.g., a web browser) running on a computing device. The processing device SOC500also include specialized circuitry for camera actuation and management (CAM)505that includes, provides, controls and/or manages the operations of one or more cameras (e.g.,101,112,114; a primary camera, webcam, 3D camera, etc.), the video display data from camera firmware, image processing, video preprocessing, video front-end (VFE), in-line JPEG, high definition video codec, etc. The CAM505may be an independent processing unit and/or include an independent or internal clock. In some embodiments, the image and object recognition processor506may be configured with processor-executable instructions and/or specialized hardware configured to perform image processing and object recognition analyses involved in various embodiments. For example, the image and object recognition processor506may be configured to perform the operations of processing images received from cameras (e.g.,136) via the CAM505to recognize and/or identify vehicle-control gestures, other vehicles, and otherwise perform functions of the camera perception layer404as described. In some embodiments, the processor506may be configured to process radar or lidar data and perform functions of the radar perception layer402as described. The system components and resources516, analog and custom circuitry514, and/or CAM505may include circuitry to interface with peripheral devices, such as cameras136, radar122, lidar124, electronic displays, wireless communication devices, external memory chips, etc. The processors503,504,506,507,508may be interconnected to one or more memory elements512, system components and resources516, analog and custom circuitry514, CAM505, and RPM processor517via an interconnection/bus module524, which may include an array of reconfigurable logic gates and/or implement a bus architecture (e.g., CoreConnect, AMBA, etc.). Communications may be provided by advanced interconnects, such as high-performance networks-on chip (NoCs). The processing device SOC500may further include an input/output module (not illustrated) for communicating with resources external to the SOC, such as a clock518and a voltage regulator520. Resources external to the SOC (e.g., clock518, voltage regulator520) may be shared by two or more of the internal SOC processors/cores (e.g., the DSP503, the modem processor504, the image and object recognition processor506, the MDP, the applications processor508, etc.). 
In some embodiments, the processing device SOC500may be included in a control unit (e.g.,140) for use in a vehicle (e.g.,100). The control unit may include communication links for communication with a communication network (e.g.,180), the Internet, and/or a network server (e.g.,184) as described. The processing device SOC500may also include additional hardware and/or software components that are suitable for collecting sensor data from sensors, including motion sensors (e.g., accelerometers and gyroscopes of an IMU), user interface elements (e.g., input buttons, touch screen display, etc.), microphone arrays, sensors for monitoring physical conditions (e.g., location, direction, motion, orientation, vibration, pressure, etc.), cameras, compasses, GPS receivers, communications circuitry (e.g., Bluetooth®, WLAN, WiFi, etc.), and other well-known components of modern electronic devices. FIG.6shows a component block diagram illustrating a system600configured for collaboratively operating a vehicle (e.g.,100) based on vehicle-control gestures by a passenger in accordance with various embodiments. In some embodiments, the system600may include one or more vehicle computing systems602and one or more other vehicle computing systems604communicating via a wireless network. With reference toFIGS.1A-6, the vehicle computing system(s)602may include a processor (e.g.,164), a processing device (e.g.,500), and/or a control unit (e.g.,140) (variously referred to as a “processor”) of a vehicle (e.g.,100). The other vehicle computing system(s)604may include a processor (e.g.,164), a processing device (e.g.,500), and/or a control unit (e.g.,140) (variously referred to as a “processor”) of a vehicle (e.g.,100). The vehicle computing system(s)602may be configured by machine-executable instructions606. Machine-executable instructions606may include one or more instruction modules. The instruction modules may include computer program modules. The instruction modules may include (but are not limited to) one or more of a passenger identification module607, passenger profile determination module608, vehicle-control gesture determination module610, vehicle action determination module612, passenger profile reception module616, vehicle action safety determination module618, alternative vehicle action determination module620, delay period assessment module622, unusual operation determination module624, added indication determination module626, unusual operation safety assessment module628, vehicle operation module629, and/or other instruction modules. The passenger identification module607may be configured to identify a passenger (i.e., one or more occupants of the vehicle). In some embodiments, the passenger identification module607may be configured to identify a passenger based on (but not limited to) at least one of a position of the passenger in the vehicle, an input by the passenger, or recognition of the passenger. For instance, the input by the passenger and/or the recognition of the passenger may be determined from occupants through their portable computing device (e.g., a smartphone) or from identification using sensors (e.g.,101), such as (but not limited to) biosensor(s) and/or facial recognition systems.
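The passenger identification logic just described could be sketched as a preference ordering over the available identification signals. The preference order, the seat identifiers, and the function name are assumptions for this example only.

```python
def identify_passenger(seat_id: str, declared_id: str | None,
                       face_match_id: str | None,
                       default_by_seat: dict[str, str]) -> str:
    """Resolve a passenger identity from the strongest available signal.

    Assumed preference order for this sketch: explicit input from the
    passenger's device, then facial/biosensor recognition, then a default
    identity associated with the seat position.
    """
    if declared_id:
        return declared_id
    if face_match_id:
        return face_match_id
    return default_by_seat.get(seat_id, "UNKNOWN")

defaults = {"front_left": "owner_profile"}
print(identify_passenger("front_left", None, None, defaults))   # -> "owner_profile"
print(identify_passenger("rear_right", None, None, defaults))   # -> "UNKNOWN"
```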
By way of non-limiting example, a processor (e.g.,164) of a processing device (e.g.,500) may use the electronic storage635, other vehicle computing system(s)604, external resources630, one or more sensor(s) (e.g.,101), and a profile database (e.g.,142) to identify passengers or the locations in which passengers are seated. The passenger profile determination module608may be configured to determine one or more profiles that should be applied to recognized gestures performed by a passenger. The determination of the appropriate passenger profile will more effectively normalize passenger gestures for translation to an executable vehicle action. By way of non-limiting example, a processor (e.g.,164) of a processing device (e.g.,500) may use the electronic storage635, a profile database (e.g.,142), and passenger identification information to determine which passenger profile to use based on current conditions. The profile database and/or the passenger profile determination module608may maintain a plurality of passenger profiles that include a passenger profile for designated individuals. Some or all of the plurality of passenger profiles may normalize vehicle-control gestures differently, as individuals making a given control gesture may move their fingers, hands and/or arms with different speeds, through different angles, and with differences in motions. Thus, a second passenger profile may normalize vehicle-control gestures differently than a first passenger profile. Some implementations may not employ the passenger profile determination module608and more directly apply a predetermined passenger profile. Alternatively, the use of the passenger profile determination module608may be selectively turned on or off as needed or desired, which may be decided manually or automatically based on circumstances/conditions. According to various embodiments, in circumstances in which more than one passenger occupies the vehicle, the passenger profile determination module608may also be configured to determine one or more passengers considered to be designated as the driver(s), thus accepting vehicle-control gestures only from the designated driver(s). In this way, the passenger profile determination module608may ignore gestures from one or more passengers not designated to be the driver(s). In further embodiments, if more than one passenger is designated as a driver (e.g., in a student driver situation or a driver education application), the passenger profile determination module608may have a way of determining a hierarchy between them, in case conflicting vehicle-control gestures are detected. Using sensors (e.g.,101), the passenger profile determination module608may determine that the vehicle has multiple occupants and also determine who or which occupant(s) is/are in charge (i.e., the designated driver(s)). In some embodiments, a passenger occupying what is traditionally the driver's seat (or other pre-set location of the vehicle) may be selected as the designated driver by default, unless an override is received. Alternatively, both front seat occupants may be designated drivers (since they tend to have a good view of the roadway). In particular embodiments, in such cases, vehicle-control gestures from the driver's seat passenger may override vehicle-control gestures from the other front seat passenger. In some embodiments, the designated driver may be chosen after an input from the occupants (e.g., from an associated mobile device or a direct input into the vehicle).
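The default designated-driver rule described above (driver's-seat occupant unless overridden) could be sketched as follows; the seat labels, occupant names, and the single-driver default are illustrative assumptions.

```python
def select_designated_drivers(occupants: dict[str, str],
                              override: str | None = None) -> list[str]:
    """Choose which occupant(s) may issue vehicle-control gestures.

    Assumed default rule: the driver's-seat occupant is the designated driver
    unless an explicit override names someone else; gestures from other
    occupants would then be ignored for navigation purposes.
    """
    if override and override in occupants.values():
        return [override]
    driver_seat_occupant = occupants.get("front_left")
    return [driver_seat_occupant] if driver_seat_occupant else []

occupants = {"front_left": "alice", "front_right": "bob", "rear_left": "carol"}
print(select_designated_drivers(occupants))                  # ['alice']
print(select_designated_drivers(occupants, override="bob"))  # ['bob']
```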
In some embodiments, the input about the designated driver may also be received automatically from occupants, such as through the passenger identification module607. In some embodiments, when determining the designated driver through identification, the passenger profile determination module608may automatically apply a hierarchy. For example, the owner, most common, or most recent designated driver may have top priority as the designated driver. Similarly, a hierarchical list may be programmed or user-defined (e.g., dad>mom>oldest child, or dad and mom>oldest child). In other embodiments, there may be no hierarchy, with vehicle-control gestures accepted from all or some occupants. In some implementations in which there is a designated driver or hierarchy for receiving commands among occupants, the non-designated drivers or lower hierarchy occupants may be allowed to input select vehicle-control gestures, such as but not limited to non-navigational commands and/or commands that do not risk the safety of the vehicle or occupants (e.g., controlling cabin temperature, entertainment system volume). Alternatively, vehicle-control gestures from the non-designated drivers or lower hierarchy occupants may be accepted, but with higher safety parameters employed (e.g., greater distance required between vehicles, lower maximum speed, etc.) or lower magnitudes in the vehicle's inputs (e.g., limiting to only single-lane changes, increasing/decreasing speed to only 5-mph increments, etc.). For example, the vehicle-control gesture determination module610may recognize gestures from a “back seat driver” limited to reducing the speed of the vehicle or increasing vehicle separation distances, as such commands would be safe and may make such occupants feel safer, but all other vehicle maneuver controls may not be recognized. Thus, the non-designated drivers or lower hierarchy occupants may be allowed a lesser extent of control as compared to the designated driver or higher priority designated driver. In some embodiments, limits on the recognized gestures of non-designated drivers or lower hierarchy occupants may be overridden in some circumstances. In some embodiments, the vehicle-control gesture determination module610may recognize an override gesture or circumstances in which gestures by a non-designated driver should be recognized and implemented, such as to accommodate circumstances in which the designated driver becomes incapacitated. For example, if the vehicle-control gesture determination module610detects that the designated driver has fallen asleep or is slumped over (apparently passed out) and another passenger is making recognized vehicle control gestures, the vehicle may implement such gestures. In some embodiments, a change in priority among designated drivers or a selection of a new designated driver may occur in response to a trigger event, such as automatically. One non-limiting example of such a trigger event may be detection of a change in behavior of the current designated driver (or designated driver with highest priority) by one or more sensors and/or by the vehicle control system, such as due to fatigue, distraction, inebriation, and/or the like. For example, a camera system tracking the designated driver's eyes may detect eyelid droop from fatigue, that the driver is not watching the road due to distraction, or the designated driver's eyes are moving slowly due to inebriation.
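One way to picture the hierarchy-based gating and scaling of gesture commands described above is the following sketch. The rank scheme, the allowed command set for lower-ranked occupants, and the magnitude scaling factor are assumptions for this illustration only.

```python
def filter_gesture_command(requester_rank: int, designated_rank: int,
                           command: str, magnitude: float) -> tuple[str, float] | None:
    """Gate and scale a gesture command based on an assumed occupant hierarchy.

    Assumed rules for this sketch: the designated driver's commands pass
    through unchanged; lower-ranked occupants may only request speed
    reductions or larger following gaps, and any allowed magnitude is halved.
    """
    if requester_rank == designated_rank:
        return command, magnitude
    if command in ("REDUCE_SPEED", "INCREASE_GAP"):
        return command, magnitude * 0.5
    return None   # ignore navigation commands from non-designated occupants

print(filter_gesture_command(0, 0, "CHANGE_LANE_RIGHT", 1.0))  # passes through unchanged
print(filter_gesture_command(2, 0, "REDUCE_SPEED", 10.0))      # ('REDUCE_SPEED', 5.0)
print(filter_gesture_command(2, 0, "CHANGE_LANE_RIGHT", 1.0))  # None (ignored)
```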
Another non-limiting example of such a trigger event may be detection of a command (e.g., verbal command or command gesture) by the current designated driver (or designated driver with highest priority), other occupant, or a remote party (e.g., owner of the vehicle) received by a wireless communication link. Thus, the passenger profile determination module608may be configured to assess and determine (e.g., automatically) whether the current designated driver or other passenger providing vehicle-control gestures is exhibiting a change in behavior that may impair that individual's ability to give proper or timely vehicle-control gestures. The passenger profile determination module608may receive inputs from sensors (e.g.,101), such as cameras, alcohol sensors, motion detectors, or the like, and apply passenger motion pattern recognition to determine a level of impairment of a passenger. In response to the passenger profile determination module608detecting that a designated driver (or designated driver with highest priority) is impaired, the passenger profile determination module608may be configured to select a new designated driver or change a priority of designated drivers. In some embodiments, in response to the passenger profile determination module608detecting that a designated driver (or designated driver with highest priority) is impaired, that impaired designated driver may still be given some degree of control. For example, the impaired designated driver may be restricted to only providing non-navigational commands and/or commands that do not risk the safety of the vehicle or occupants (e.g., controlling cabin temperature, entertainment system volume). Alternatively, the impaired designated driver may be limited to fewer vehicular controls or a lesser degree of vehicle control (e.g., limiting to speed, changes in speed, number of lanes that may be changed in one continuous maneuver, etc.). Additionally or alternatively, the impaired designated driver may be allowed to provide navigational commands and/or controls, but only if they satisfy a safety threshold, which may be higher than the safety threshold used for drivers not considered impaired. For example, an impaired designated driver may be allowed to make a two-lane lane change on an empty highway, but not allowed to direct the same maneuver on a crowded highway. The vehicle-control gesture determination module610may be configured to determine when a passenger (e.g.,11,12,13) performs a vehicle-control gesture in a way that is recognizable to a vehicle gesture detection system (e.g.,50). By way of non-limiting example, a processor (e.g.,164) of a processing device (e.g.,500) may use the electronic storage635, one or more sensor(s) (e.g.,101), and a gesture recognition engine (e.g.,144) to determine whether one or more passengers has/have performed a vehicle-control gesture for operating the vehicle (e.g.,100). Once the vehicle-control gesture determination module610detects that a passenger has performed one or more vehicle-control gestures, information about the one or more vehicle-control gestures may be passed along to a control unit (e.g.,140). The vehicle action determination module612may be configured to determine which vehicle action or actions is/are associated with detected vehicle-control gestures.
The vehicle action determination module612may also be configured to determine alternative vehicle actions, when detected vehicle-control gestures are associated with actions that are not safe to the vehicle and/or passengers, or unusual in some way. By way of non-limiting example, the vehicle action determination module612may use a processor (e.g.,164) of a processing device (e.g.,500), the electronic storage635, and a vehicle management system (e.g.,400,450) to determine vehicle actions. The passenger profile reception module616may be configured to receive and store passenger profiles. The passenger profile reception module616may receive passenger profiles that are customized to a passenger from training inputs. Additionally, or alternatively, the passenger profile reception module616may receive passenger profiles as input data through a vehicle user interface or from another computing device providing such data. For example, a passenger may follow a training protocol, where the passenger practices and/or performs gestures and passenger movements are recorded and analyzed to generate a passenger profile for that passenger. As a further example, a remote computing device may provide one or more passenger profiles to the passenger profile reception module616for application to vehicle-control gestures. By way of non-limiting example, a processor (e.g.,164) of a processing device (e.g.,500) may use the electronic storage635, one or more sensor(s) (e.g.,101), and input devices for receiving passenger profiles. The vehicle action safety determination module618may be configured to determine whether the first vehicle action associated with the detected first vehicle-control gesture is safe for the vehicle to execute. The determination of safety may ensure no damage or injury to the vehicle or the passengers. Typically, what is safe for the vehicle is also safe for the passengers and vice-versa (i.e., a level of risk to vehicle safety is equal to or approximate to a level of risk to passenger safety), but perhaps not always. For example, an extremely rapid deceleration may cause whiplash to a passenger, while the vehicle may sustain no damage. Thus, what is safe will generally prioritize the safety of the passenger(s). By way of non-limiting example, a processor (e.g.,164) of a processing device (e.g.,500) may use the electronic storage635, and a vehicle management system (e.g.,400,450) to determine or assess the safety of various vehicle actions. In some implementations, the alternative vehicle action determination module620may be configured to determine one or more alternative vehicle actions (e.g., changing speed(s), changing to a different lane(s), etc.) that may be a safer alternative to a first vehicle action associated with a received vehicle-control gesture. The alternative vehicle action determination module620may be used in response to determining that a first vehicle action is not safe for the vehicle to execute. In another embodiment, the determined alternative vehicle actions may have a determined lower probability of resulting in damage to the vehicle and/or passengers, by at least some threshold amount, as compared to the first vehicle action. For example, a first vehicle action may involve changing lanes to a lane such that the vehicle will be traveling behind another vehicle. 
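The preference for a safer alternative only when it is meaningfully safer can be sketched with a simple threshold comparison. The risk values, the threshold, and the action names are assumptions for illustration; in the description such risk assessments would come from the vehicle management system, not from this sketch.

```python
def choose_action(first_action: str, first_risk: float,
                  alternatives: dict[str, float],
                  risk_threshold: float = 0.05) -> str:
    """Prefer an alternative only if it is safer by at least a threshold margin.

    `first_risk` and the values in `alternatives` are illustrative
    probabilities of damage or injury associated with each maneuver.
    """
    best_alt, best_risk = min(alternatives.items(), key=lambda kv: kv[1])
    if first_risk - best_risk >= risk_threshold:
        return best_alt
    return first_action

alts = {"OVERTAKE_THEN_CHANGE_LANE": 0.01}
print(choose_action("CHANGE_LANE_BEHIND_LEAD", 0.08, alts))  # safer alternative chosen
```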
That first vehicle action may be relatively safe, but there may be a statistical chance (i.e., relatively small) that the vehicle in the lead could slam on its brakes soon after the lane change (i.e., a first level of risk to safety). Meanwhile, an alternative vehicle action may include first overtaking that vehicle in the lead before changing lanes, which may be associated with a lower statistical chance of the vehicle becoming involved in an accident because of the open road ahead of that lead vehicle (i.e., a second level of risk to safety, which is lower than the first level of risk to safety). By way of non-limiting example, a processor (e.g.,164) of a processing device (e.g.,500) may use the electronic storage635, and a vehicle management system (e.g.,400,450) to determine alternative vehicle actions. The delay period assessment module622may be configured to determine whether a reasonable delay period may make an otherwise unsafe vehicle action safe enough for a vehicle to execute. For example, a first vehicle-control gesture may direct the vehicle to make a lane change that is unsafe due to another vehicle traveling in close proximity within that lane. The delay period assessment module622may determine that, because the other vehicle is traveling at a different speed, a delay of up to five (5) seconds before initiating the lane change may change that otherwise unsafe maneuver into a safe one. In further embodiments, at the end of the delay period if the move is still unsafe, the delay period assessment module622may re-determine whether an additional delay period may make the otherwise unsafe vehicle action safe enough for the vehicle to execute. Alternatively, the vehicle may notify the user (driver) of the determination not to perform the move and/or to ask for further input from the user. In various embodiments, a maximum delay threshold, such as (but not limited to) 5-10 seconds, may be used to limit the duration of delay periods that may be considered by the delay period assessment module622. The maximum delay threshold may be set and/or changed by a passenger, vehicle owner, and/or manufacturer. In addition, the maximum delay threshold may be different for each passenger (i.e., associated with a passenger profile). Alternatively, the maximum delay threshold may be universal for all passengers. As a further alternative, while individual passengers may have different maximum delay thresholds, the vehicle may also have an ultimate maximum delay threshold that the individual maximum delay thresholds may not exceed. By way of non-limiting example, a processor (e.g.,164) of a processing device (e.g.,500) may use the electronic storage635, and a vehicle management system (e.g.,400,450) to determine or assess delay periods for executing various vehicle actions. The unusual operation determination module624may be configured to determine whether a vehicle action associated with a vehicle-control gesture includes an unusual vehicle operation. Unusual operations may include vehicle operations that are not habitually or commonly performed by the vehicle (e.g., compared to the same vehicle's operations in the past, other vehicles or similar vehicles, vehicles under similar circumstances (e.g., location, time of day/year, weather conditions, etc.), such as a sudden action or actions that are significantly more extreme than actions typically performed by the vehicle. 
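The delay-period assessment described above amounts to searching for the shortest delay, up to a maximum threshold, at which the maneuver becomes safe. The step size, maximum delay, and the caller-supplied safety predicate in this sketch are assumptions standing in for the vehicle management system's safety evaluation.

```python
def find_safe_delay(is_safe_after, max_delay_s: float = 5.0,
                    step_s: float = 0.5) -> float | None:
    """Search for the shortest delay (up to a maximum) that makes a maneuver safe.

    `is_safe_after(t)` is a caller-supplied predicate; here it stands in for
    evaluating the maneuver's safety at a future time t.
    """
    t = 0.0
    while t <= max_delay_s:
        if is_safe_after(t):
            return t
        t += step_s
    return None   # no acceptable delay found; notify the passenger instead

# Example: a vehicle in the target lane clears the gap after roughly 3 seconds.
print(find_safe_delay(lambda t: t >= 3.0))   # -> 3.0
print(find_safe_delay(lambda t: False))      # -> None
```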
Whether a vehicle action is considered unusual may depend on whether that vehicle action has been performed before, as well as the speed, acceleration/deceleration, degree, and extent of the vehicle action. In the case of vehicle actions that have been performed before, but just not at the same speed, degree, and/or extent, the processor may use a threshold that, once exceeded, makes that vehicle action unusual. In addition, whether a vehicle action is considered unusual may depend on the current circumstances. For example, a single-lane change to the right on the highway may not be unusual from a center lane, but from the right-hand lane (i.e., directing the vehicle onto a shoulder of the highway) may be evaluated by the unusual operation determination module624as unusual. Similarly, other circumstances, such as (but not limited to) location, weather conditions, time of day/year, lighting, etc., may be considered by the unusual operation determination module624. The determination, by the unusual operation determination module624, may be based on historical records from a passenger profile and/or a knowledge base of usual/unusual vehicle actions under specified conditions (i.e., expected norms). In some cases, the determination, by the unusual operation determination module624, that a vehicle action associated with a vehicle-control gesture is an unusual operation may be indicative of a false input. False inputs may result from, for example, a passenger gesture that was not performed correctly or was not intended to be a vehicle-control gesture (e.g., a gesticulation or sneeze). Similarly, a passenger that is under the influence of a controlled substance or distracted may perform vehicle-control gestures incorrectly, inappropriately, and/or inadvertently. A determination that a vehicle-control gesture is unusual (i.e., a false input or otherwise) may be conveyed to the vehicle action determination module612, when assessing whether to follow that vehicle-control gesture or whether to determine an alternative vehicle action. By way of non-limiting example, a processor (e.g.,164) of a processing device (e.g.,500) may use the electronic storage635, and a vehicle management system (e.g.,400,450) to determine or assess whether vehicle actions are considered unusual. In various embodiments, the added indication determination module626may be configured to detect whether a passenger has performed an added indication in conjunction with an unusual operation. In some embodiments, the added indication may have been performed by the passenger together with the vehicle-control gesture associated with the unusual operation (e.g., before, during or after the unusual operation). The added indication may be performed within some window of the unusual operation (e.g., within 2-3 seconds before/after the unusual operation). In some embodiments, the added indication may be performed in response to a prompt associated with the unusual operation (e.g., the vehicle notifies the driver of the unusual operation requesting confirmation or clarification). The added indication may suggest that a passenger, who has performed an unusual vehicle-control gesture, truly intended to execute the unusual operation. For example, the added indication may include at least one of an exaggerated gesture, a repeated gesture, a gesture performed more quickly than usual, or a non-visual input (e.g., audio) received in conjunction (e.g., contemporaneously) with the detected vehicle-control gesture.
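The unusual-operation test described above can be pictured as a comparison against historical norms combined with a circumstance check. The history format, the 2x factor, and the circumstance flag are assumptions made for this sketch rather than the module's actual criteria.

```python
def is_unusual(action: str, magnitude: float,
               history: dict[str, list[float]],
               circumstance_ok: bool, factor: float = 2.0) -> bool:
    """Flag a vehicle action as unusual.

    Assumed rule for this sketch: an action is unusual if it was never
    performed before, if its magnitude exceeds `factor` times the historical
    maximum, or if the current circumstances make it inappropriate (e.g., a
    right lane change requested from the rightmost lane).
    """
    if not circumstance_ok:
        return True
    past = history.get(action)
    if not past:
        return True
    return magnitude > factor * max(past)

history = {"CHANGE_LANES": [1.0, 1.0, 2.0]}   # number of lanes changed in past maneuvers
print(is_unusual("CHANGE_LANES", 1.0, history, circumstance_ok=True))    # False
print(is_unusual("CHANGE_LANES", 5.0, history, circumstance_ok=True))    # True (well above the norm)
print(is_unusual("CHANGE_LANES", 1.0, history, circumstance_ok=False))   # True (context makes it unusual)
```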
In some embodiments, in response to the unusual operation determination module624determining that a received vehicle-control gesture is unusual, the system may prompt the passenger for confirmation or clarification, and the response (e.g., gesture of audio command) by the user, which is received as an added indication, may confirm or clarify the vehicle-control gesture. The prompt to the passenger may be simple (e.g., a chime or short series of tones) or may include more detailed feedback (e.g., verbal feedback) that informs the passenger that an added indication is needed. Verbal feedback or distinct tonal sequences may thus inform the passenger that the received vehicle-control gesture will result in an unusual vehicle operation and thus confirmation is needed. By way of non-limiting example, a processor (e.g.,164) of a processing device (e.g.,500) may use the electronic storage635, one or more sensor(s) (e.g.,101), and a gesture recognition engine (e.g.,144) to determine whether a passenger has performed an added indication associated with a vehicle-control gesture. In some embodiments, even when the user confirms the unusual operation is intended, the vehicle operation module629may operate the vehicle to implement a less extreme version of the unusual operation (e.g., if implementing the unusual operation exceeds a safety threshold). The less extreme version of the unusual operation may involve a lesser degree of a maneuver (e.g., a single-lane change instead of a double or triple-lane change; increasing speed by 5 mph instead of 25 mph, etc.). In some embodiments, if the unusual operation is too extreme (i.e., exceeds a safety threshold), the unusual operation may be ignored and the passenger informed that the unusual operation is unsafe. For example, the vehicle control system may verbally explain that the unusual operation is being ignored. The unusual operation safety assessment module628may be configured to determine whether any unusual vehicle operation detected by the unusual operation determination module624is safe for the vehicle or occupants. By way of non-limiting example, a processor (e.g.,164) of a processing device (e.g.,500) may use the electronic storage635, and a vehicle management system (e.g.,400,450) to determine or assess whether vehicle actions are considered unusual. The vehicle operation module629may be configured to operate the vehicle to implement vehicle actions, such as in response to determining that a particular vehicle action is safe for the vehicle and occupants. The vehicle operation module629may also operate the vehicle to implement alternative and/or other vehicle actions as needed. For example, the vehicle operation module629may operate the vehicle to implement a second vehicle action in response to determining that the second vehicle action is available. Similarly, the vehicle operation module629may operate the vehicle to implement the first vehicle action after a determined delay period in response to determining that the first vehicle action associated with a detected first vehicle-control gesture is safe for the vehicle to execute after the determined delay period. Further, the vehicle operation module629may operate the vehicle to implement a second vehicle action that is a safer alternative to the first vehicle action in response to determining that the unusual vehicle operation is not safe (i.e., unsafe) for the vehicle or occupants. 
In this way, the vehicle operation module629may implement the alternative vehicle action determined by the alternative vehicle action determination module620. By way of non-limiting example, the vehicle operation module629may use a processor (e.g.,164) of a processing device (e.g.,500), the electronic storage635, and a vehicle management system (e.g.,400,450) to operate the vehicle (e.g., execute vehicle actions). In accordance with various embodiment, the vehicle operation module629may implement the vehicle actions autonomously. Thus, although the implemented vehicle actions result from passenger input (i.e., from a received vehicle-control gesture), the vehicle operation module629may perform the necessary functions to operation the vehicle safely without the vehicle coming out of an autonomous mode. Under some circumstances, the vehicle operation module629may implement one or more alternative vehicle action(s) determined to be safer than the vehicle action(s) more strictly associated with one or more received vehicle-control gesture(s). In some embodiments, vehicle computing system(s)602, other vehicle computing system(s)604may communicate with one another via a wireless network (e.g.,180), such as V2V wireless communication links. Additionally, the vehicle computing system(s)602and other vehicle computing system(s)604may be connected to wireless communication networks that provide access to external resources630. For example, such electronic communication links may be established, at least in part, via a network such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes embodiments in which vehicle computing system(s)602, other vehicle computing system(s)604, and/or external resources630may be operatively linked via some other communication media. The other vehicle computing system604may also include one or more processors configured to execute computer program modules configured by machine-executable instructions606. Machine-executable instructions606may include one or more instruction modules that may include one or more of the passenger profile determination module608, vehicle-control gesture determination module610, vehicle action determination module612, passenger identification module607, passenger profile reception module616, vehicle action safety determination module618, alternative vehicle action determination module620, delay period assessment module622, unusual operation determination module624, added indication determination module626, unusual operation safety assessment module628, vehicle operation module629, and/or other instruction modules similar to the vehicle computing system602of a first vehicle as described. External resources630may include sources of information outside of system600, external entities participating with the system600, and/or other resources. For example, external resources630may include map data resources, highway information (e.g., traffic, construction, etc.) systems, weather forecast services, etc. In some embodiments, some or all of the functionality attributed herein to external resources630may be provided by resources included in system600. Vehicle computing system(s)602may include electronic storage635, one or more processors164, and/or other components. Vehicle computing system(s)602may include communication lines, or ports to enable the exchange of information with a network and/or other vehicle computing system. 
Illustration of vehicle computing system(s)602inFIG.6is not intended to be limiting. Vehicle computing system(s)602may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to vehicle computing system(s)602. For example, vehicle computing system(s)602may be implemented by a cloud of vehicle computing systems operating together as vehicle computing system(s)602. Electronic storage635may comprise non-transitory storage media that electronically stores information. The electronic storage media of electronic storage635may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with vehicle computing system(s)602and/or removable storage that is removably connectable to vehicle computing system(s)602via, for example, a port (e.g., a universal serial bus (USB) port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage635may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Electronic storage635may store software algorithms, information determined by processor(s)164, information received from vehicle computing system(s)602, information received from other vehicle computing system(s)604, and/or other information that enables vehicle computing system(s)602to function as described herein. Processor(s)164may be configured to provide information processing capabilities in vehicle computing system(s)602. As such, processor(s)164may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor(s)164is shown inFIG.6as a single entity, this is for illustrative purposes only. In some embodiments, processor(s)164may include a plurality of processing units. These processing units may be physically located within the same device, or processor(s)164may represent processing functionality of a plurality of devices operating in coordination. Processor(s)164may be configured to execute modules608,607,610,612,616,618,620,622,624,626,628, and/or629, and/or other modules. Processor(s)164may be configured to execute modules608,607,610,612,616,618,620,622,624,626,628, and/or629, and/or other modules by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s)164. As used herein, the term “module” may refer to any component or set of components that perform the functionality attributed to the module. This may include one or more physical processors during execution of processor readable instructions, the processor readable instructions, circuitry, hardware, storage media, or any other components. 
It should be appreciated that although modules608,607,610,612,616,618,620,622,624,626,628, and/or629are illustrated inFIG.6as being implemented within a single processing unit, in embodiments in which processor(s)164includes multiple processing units, one or more of modules608,607,610,612,616,618,620,622,624,626,628, and/or629may be implemented remotely from the other modules. The description of the functionality provided by the different modules608,607,610,612,616,618,620,622,624,626,628, and/or629described below is for illustrative purposes, and is not intended to be limiting, as any of modules608,607,610,612,616,618,620,622,624,626,628, and/or629may provide more or less functionality than is described. For example, one or more of modules608,607,610,612,616,618,620,622,624,626,628, and/or629may be eliminated, and some or all of its functionality may be provided by other ones of modules608,607,610,612,616,618,620,622,624,626,628, and/or629. As another example, processor(s)164may be configured to execute one or more additional modules that may perform some or all of the functionality attributed below to one of modules608,607,610,612,616,618,620,622,624,626,628, and/or629. FIGS.7A,7B, and/or7C illustrate operations of methods700,703, and705, respectively, for operating a vehicle based on vehicle-control gestures by a passenger in accordance with various embodiments. With reference toFIGS.1A-7C, the methods700,703, and705may be implemented in a processor (e.g.,164), a processing device (e.g.,500), and/or a control unit (e.g.,140) (variously referred to as a “processor”) of a vehicle (e.g.,100). In some embodiments, the methods700,703, and705may be performed by one or more layers within a vehicle management system stack, such as a vehicle management system (e.g.,400,450). In some embodiments, the methods700,703, and705may be performed by a processor independently from, but in conjunction with, a vehicle control system stack, such as the vehicle management system. For example, the methods700,703, and705may be implemented as a stand-alone software module or within dedicated hardware that monitors data and commands from/within the vehicle management system and is configured to take actions and store data as described. FIG.7Aillustrates a method700for operating the vehicle based on vehicle-control gestures by a passenger in accordance with various embodiments. In block702, a vehicle processor may determine a first vehicle action by applying a first passenger profile to a detected first vehicle-control gesture performed by a first passenger. The first passenger profile may be selected from a plurality of passenger profiles to normalize vehicle-control gestures received from the first passenger. For example, the processor may detect a vehicle-control gesture in the form of a passenger holding an open palm forward (i.e., an indication to stop). After applying the passenger profile assigned to the current passenger, the processor may determine this gesture means the passenger wants the vehicle to come to a stop. In some embodiments, means for performing the operations of block702may include a processor (e.g.,164) coupled to one or more sensors (e.g.,101) and electronic storage (e.g.,635). To make the determination in block702, the processor may use the vehicle action determination module (e.g.,612). In block704, the vehicle processor may operate the vehicle to implement the first vehicle action in response to determining that the first vehicle action is safe for the vehicle and occupants. 
For example, the processor may cause the vehicle to come to a stop. In some embodiments, means for performing the operations of block704may include a processor (e.g.,164) coupled to electronic storage (e.g.,635) and a vehicle management system (e.g.,400,450). To make the determination in block704, the processor may use the vehicle operation module (e.g.,629). In some embodiments, the processor may repeat the operations in blocks702and704to periodically or continuously operate the vehicle based on vehicle-control gestures by a passenger. FIG.7Billustrates a method703for operating the vehicle based on vehicle-control gestures by a passenger in accordance with various embodiments. In block706, the processor of the first vehicle may identify the first passenger based on at least one of a position of the first passenger in the vehicle, an input by the first passenger, or recognition of the first passenger. For example, the processor may recognize the passenger as being seated in the front left seat, may receive a passenger profile or at least identification information from the passenger, or facial recognition software my recognize the passenger using onboard imaging. In some embodiments, means for performing the operations of block706may include a processor (e.g.,164) coupled to electronic storage (e.g.,635) and a passenger identification module (e.g.,607). In block708, the processor may select the first passenger profile based on the identity of the first passenger. For example, the processor may select a passenger profile unique to the current occupant, based on an identification card presented by the passenger when the vehicle was started. In some embodiments, means for performing the operations of block708may include a processor (e.g.,164) coupled to electronic storage (e.g.,635) and a passenger profile determination module (e.g.,608). Following the operations in block708in the method703, the processor may execute the operations of block702of the method700as described. In some embodiments, the processor may repeat any or all of the operations in blocks706and708to repeatedly or continuously select passenger profiles as needed. FIG.7Cillustrates method705for operating the vehicle based on vehicle-control gestures by a passenger in accordance with various embodiments. In block710, the processor may perform operations including receiving, from a remote computing device, the first passenger profile for application to vehicle-control gestures. For example, the processor may receive the first passenger profile as data received through a radio module (e.g.,172) or input module (e.g.,168). In some embodiments, means for performing the operations of block710may include a processor (e.g.,164) coupled to electronic storage (e.g.,635) and a passenger profile reception module (e.g.,616). Following the operations in block710in the method705, the processor may execute the operations of block702of the method700as described. In some embodiments, the processor may repeat the operation in block710to repeatedly or continuously receive passenger profiles or updates to passenger profiles. 
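As a rough, hypothetical sketch of blocks702and704described above, the following code maps a detected gesture to a vehicle action using a selected passenger profile (with a generic fallback) and implements the action only if it is judged safe. The gesture labels, profile fields, and is_safe callback are assumptions made for illustration.

```python
# Minimal sketch of block 702/704 style processing; gesture labels, the
# profile structure, and `is_safe` are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class PassengerProfile:
    passenger_id: str
    # Per-passenger mapping from a recognized gesture label to a vehicle action.
    gesture_to_action: Dict[str, str] = field(default_factory=dict)

DEFAULT_PROFILE = PassengerProfile(
    passenger_id="generic",
    gesture_to_action={"open_palm_forward": "stop", "finger_circle_up": "accelerate"},
)

def determine_vehicle_action(gesture: str,
                             profile: Optional[PassengerProfile]) -> Optional[str]:
    """Apply the selected passenger profile (or a generic fallback) to
    normalize a detected vehicle-control gesture into a vehicle action."""
    profile = profile or DEFAULT_PROFILE
    return profile.gesture_to_action.get(
        gesture, DEFAULT_PROFILE.gesture_to_action.get(gesture))

def operate_if_safe(action: Optional[str], is_safe) -> str:
    """Block 704 analogue: only implement the action when it is judged safe."""
    if action is None:
        return "ignored: unrecognized gesture"
    return f"executing {action}" if is_safe(action) else f"withheld {action}"

if __name__ == "__main__":
    rear_seat = PassengerProfile("rear_seat_guest",
                                 {"open_palm_forward": "gradual_stop"})
    action = determine_vehicle_action("open_palm_forward", rear_seat)
    print(operate_if_safe(action, is_safe=lambda a: True))  # executing gradual_stop
```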
Further embodiments include methods and devices that include determining whether a first vehicle action associated with a detected and recognized first vehicle-control gesture is safe for the vehicle to execute, determining whether a second vehicle action that is a safer alternative to the first vehicle action is available for the vehicle to execute in response to determining the first vehicle action is not safe for the vehicle to execute, and operating the vehicle to implement the second vehicle action in response to determining that the second vehicle action is available. In some embodiments, a processor of the vehicle may determine whether the first vehicle action associated with the detected first vehicle-control gesture is safe for the vehicle to execute after a determined delay period. Also, the processor may operate the vehicle to implement the first vehicle action or an alternative second vehicle action after the determined delay period in response to determining that the first vehicle action or the alternative second vehicle action is safe for the vehicle to execute after the determined delay period. In some embodiments, a processor of the vehicle may determine whether the first vehicle action includes an unusual vehicle operation. In some embodiments, the processor may determine whether the detected first vehicle-control gesture includes an added indication that the unusual vehicle operation is intended by the first passenger in response to determining that the first vehicle action includes the unusual vehicle operation, in which operating the vehicle to implement the first vehicle action is further in response to determining that the detected first vehicle-control gesture includes the added indication that the unusual vehicle operation is intended by the first passenger. In some embodiments, the added indication may include at least one of an exaggerated gesture, a repeated gesture, a gesture performed more quickly than usual, or a non-visual input (e.g., audio) received in conjunction with (e.g., contemporaneously with) the detected first vehicle-control gesture. In some embodiments, the processor may operate the vehicle to implement the first vehicle action in response to determining that the first vehicle action is not safe for the vehicle or occupants, and determining that the detected first vehicle-control gesture includes the added indication that the drastic vehicle operation is intended by the first passenger. FIGS.8A and8Billustrate example situations800,801in which a processor of a vehicle100, which is approaching two other vehicles804,806, overrides a passenger's vehicle-control gesture in accordance with various embodiments. With reference toFIGS.1A-8B, the three vehicles804,806,100are all traveling in the same direction on a roadway802. The roadway802happens to be a three-lane road. The third vehicle100may be a semi-autonomous vehicle in accordance with various embodiments. The methods and systems of various embodiments may be applied to any pathway, whether or not it is a paved and clearly marked road. With reference toFIG.8A, the two lead vehicles804,806have collided, blocking the middle and right lanes (i.e., the middle and furthest right lane in the orientation shown inFIG.8), as the third vehicle100approaches the collision in the middle lane. In accordance with various embodiments, a processor of the third vehicle100has detected a vehicle-control gesture808performed by a passenger13(e.g., the rear-seat passenger13inFIG.2B). 
Although the passenger13is in a rear seat, in some embodiments the processor may recognize and accept vehicle-control gestures from passengers in any seat within the vehicle100. For example, a generic rear-seated passenger profile may be (but not necessarily) applied by the processor for determining a first vehicle action810that corresponds to the vehicle-control gesture808performed by the passenger13. For example, since a rear-seat passenger13may have limited or even partially obstructed visibility, the generic rear-seated passenger profile may more easily over-ride commands from that type of passenger. In this instance, the first vehicle action810would steer the third vehicle100into the right-hand lane, which is blocked by the collision. In the illustrated situation inFIG.8A, the processor of the third vehicle100may determine whether the first vehicle action810associated with the detected first vehicle-control gesture808is safe for the vehicle100to execute. The first vehicle action810would either lead to the third vehicle100also being involved in the collision or cause the third vehicle100to have to come to a stop very suddenly to avoid the impact. In this situation, the processor may conclude that the first vehicle action810indicated by the passenger13is unsafe for the vehicle100and/or passengers. In response to determining that the first vehicle action810is not safe for the vehicle100to execute, the processor may determine a second vehicle action820that is a safer alternative to the first vehicle action810. In the illustrated example, the second vehicle action820involves the third vehicle100steering into the left-hand lane, which avoids the accident. In accordance with some embodiments, the processor of the third vehicle100may determine whether the first vehicle action810includes an unusual vehicle operation. In the illustrated example, although the first vehicle action810would require the vehicle to decelerate to a stop very suddenly (i.e., an unusual maneuver), if the processor can perform the vehicle action safely, it may do so. For example, in addition to changing to the furthest right lane, the vehicle may need to move slightly or entirely to the right shoulder, but can still come to a stop behind the second lead vehicle806. Alternatively, the vehicle may automatically perform the second vehicle action820and then come to a stop in the right lane ahead of the two lead vehicles804,806. Thus, regardless of whether the passenger13provided added indications that the passenger13meant to perform the first vehicle-control gesture808, which would result in a sudden and dangerous stop, the processor may operate the third vehicle100to implement the second vehicle action820that is a safer alternative to the first vehicle action810in response to determining that the unusual vehicle operation is not safe for the vehicle100or passenger13. In addition, the processor may determine that such sudden braking-type maneuvers require additional vehicle actions, such as turning on the hazard lights, locking the seatbelts, changing headlight settings, reducing entertainment system volume, etc., which the processor may automatically execute. FIG.8Billustrates a situation in which two lead vehicles804,806are both traveling in the furthest right lane with the second lead vehicle806tailgating the first lead vehicle804(i.e., driving at a dangerously close proximity P, considering the speed of the two lead vehicles804,806). 
Meanwhile, the third vehicle100is approaching the two lead vehicles in the middle lane, and a processor of the third vehicle100has detected a vehicle-control gesture808performed by the passenger13(e.g., the rear-seat passenger13inFIG.2B). In some embodiments, the processor executing the vehicle-control gesture determination module610may not use a passenger profile (e.g., if no passenger profile is available for the designated driver and/or other passengers), but rather uses generic settings for normalizing vehicle-control gestures for determining a third vehicle action830that corresponds to the vehicle-control gesture808performed by the passenger13. Alternatively, the processor executing the vehicle-control gesture determination module610may use a default passenger profile or a profile preselected for the designated driver or other passenger providing vehicle-control gestures. In this instance, the third vehicle action830would steer the third vehicle100into the right-hand lane, closely behind the second lead vehicle806. In the situation illustrated inFIG.8B, the processor of the third vehicle100may determine whether the third vehicle action830associated with the detected first vehicle-control gesture808is safe for the third vehicle100to execute. The third vehicle action830would lead to the third vehicle100following closely behind the second lead vehicle806, such that if the first or second lead vehicle804,806stopped suddenly, the third vehicle100may not be able to avoid a collision. In this situation, the processor may conclude that the third vehicle action830indicated by the passenger13is unsafe for the third vehicle100and/or passengers. In response to determining that the third vehicle action830is not safe for the third vehicle100to execute, the processor may determine a fourth vehicle action840that is a safer alternative to the third vehicle action830. In the illustrated example, the fourth vehicle action840involves the third vehicle100delaying the lane-change maneuver associated with the third vehicle action830(e.g., a delay of10seconds), until after the vehicle100has safely overtaken the two lead vehicles804,806. In this way, the processor of the third vehicle100may determine a first level of safety associated with the third vehicle action830, which may be below a safety threshold in this regard. Accordingly, the processor of the third vehicle100may determine whether an added delay before executing the lane-change maneuver would be not only safer than the third vehicle action, but also be associated with a second level of safety that is above the safety threshold. In the illustrated example, the processor may determine a fourth vehicle action840, which is similar to the third vehicle action830, but includes a delay so the third vehicle100overtakes the two lead vehicles804,806. In some implementations, after the processor determined that the third vehicle action830inFIG.8Bmay be unsafe, the processor may prompt the passenger13for input (e.g., a verbal inquiry to the passenger saying, “It would be safer to pass the vehicles ahead in the right lane before changing lanes, would you prefer to pass the vehicles ahead before changing lanes?”). In response, the passenger13may provide an input (e.g., a verbal response or another vehicle-control gesture), which may be interpreted by the processor (e.g., using the added indication determination module626). 
Thus, the passenger13may provide an input that agrees with the proposed fourth vehicle action840, disagrees and provides an added indication that the third vehicle action830is desired (e.g., emphatically repeating the vehicle-control gesture808), or provides another input (e.g., a new vehicle-control gesture or simply canceling the third vehicle action830). In some implementations, in addition to determining that the third vehicle action830is unsafe inFIG.8B, the vehicle may execute a different maneuver (than simply adding a delay) that achieves what the original vehicle-control gesture808intended. For example, the processor may detect that one or both of the lead vehicles804,806is/are driving erratically (e.g., swerving in the lane) or some other condition making continued travel in the center land questionably safe. Thus, rather than just suggesting a delay action, such as the fourth vehicle action840, the processor may prompt the passenger13for input regarding or automatically execute a fifth vehicle action850(i.e., an alternative vehicle action). For instance, the fifth vehicle action850may involve changing lanes to the far left lane (and in some cases a temporary increase in speed), before overtaking the two lead vehicles, and then changing lanes to the far right lane (and in some cases reverting to the original speed or other speed before passing the other vehicles), thus still executing a maneuver that achieves what the original vehicle-control gesture808intended, but in a safer way. Once again, the passenger13may provide inputs (e.g., a verbal response or another vehicle-control gesture), which may be interpreted by the processor (e.g., using the added indication determination module626), confirming the fifth vehicle action850, canceling the original control gesture808, providing another input, or the like. As a further alternative, the processor need not wait for passenger input (e.g., an added indication) after determining the initial vehicle-control gesture is unsafe, and may execute the safest vehicle action available that comes close to complying with the original control gesture808and operating the vehicle safely. FIGS.9A,9B,9C, and9Dillustrate operations of methods900,903,905, and907respectively, for operating a vehicle based on vehicle-control gestures by a passenger in accordance with various embodiments. With reference toFIGS.1A-9D, the methods900,903,905, and907may be implemented in a processor (e.g.,164), a processing device (e.g.,500), and/or a control unit (e.g.,140) (variously referred to as a “processor”) of a vehicle (e.g.,100). In some embodiments, the methods900,903,905, and907may be performed by one or more layers within a vehicle management system stack, such as a vehicle management system (e.g.,400,450). In some embodiments, the methods900,903,905, and907may be performed by a processor independently from, but in conjunction with, a vehicle control system stack, such as the vehicle management system. For example, the methods900,903,905, and907may be implemented as a stand-alone software module or within dedicated hardware that monitors data and commands from/within the vehicle management system and is configured to take actions and store data as described. FIG.9Aillustrates a method900for operating the vehicle based on vehicle-control gestures by a passenger in accordance with various embodiments. 
In some implementations of the method900, the determinations in determination block902may follow the operations in block702(as described with regard to the method700). However, in an alternative implementation of the method900, the determinations in determination block902may follow the operations in alternative block901. In alternative block901, a vehicle processor may determine a first vehicle action from a detected first vehicle-control gesture performed by a first passenger. For example, the processor may detect a vehicle-control gesture in the form of a passenger circling their finger/hand up past their face and back down again repeatedly (i.e., an indication to speed up) or other gesture or input. Using a knowledge-base to recognize such movement, the processor may determine that this gesture means the passenger wants the vehicle to accelerate. In some embodiments, means for performing the operations of alternative block901may include a processor (e.g.,164) coupled to one or more sensors (e.g.,101) and electronic storage (e.g.,635). To make the determination in alternative block901, the processor may use the vehicle action determination module (e.g.,612). In determination block902, following the operations in block702(as described with regard to the method700), a vehicle processor may determine whether the first vehicle action associated with the detected first vehicle-control gesture is safe for the vehicle to execute. For example, as described above with regard toFIG.8, the processor may detect a vehicle-control gesture that is dangerous, unsafe, and/or very unusual. After applying a passenger profile assigned to the rear seat in which the passenger is seated, the processor may determine this gesture means the passenger wants the vehicle to come to a sudden stop. In some embodiments, means for performing the operations of determination block902may include a processor (e.g.,164) coupled to one or more sensors (e.g.,101), electronic storage (e.g.,635), and a vehicle management system (e.g.,400,450). To make the determination in determination block902, the processor may use the vehicle action safety determination module (e.g.,618). In response to the processor determining that the first vehicle action associated with the detected first vehicle-control gesture is safe for the vehicle to execute (i.e., determination block902=“Yes”), the processor may follow the operations in block704as described above with regard to the method700. In response to the processor determining that the first vehicle action associated with the detected first vehicle-control gesture is not safe for the vehicle to execute (i.e., determination block902=“No”), the processor may determine a second vehicle action that is a safer alternative to the first vehicle action for the vehicle to execute in block908. In some embodiments, means for performing the operations of determination block902may include a processor (e.g.,164) to one or more sensors (e.g.,101), electronic storage (e.g.,635), a vehicle management system (e.g.,400,450), and the vehicle action safety determination module (e.g.,618). In block908, the vehicle processor may determine a second vehicle action that is a safer alternative to the first vehicle action for the vehicle to execute. For example, the processor may determine an alternative vehicle action. In some embodiments, means for performing the operations of block908may include a processor (e.g.,164) coupled to electronic storage (e.g.,635) and a vehicle management system (e.g.,400,450). 
To make the determination in block908, the processor may use the alternative vehicle action determination module (e.g.,620). In block910, the vehicle processor may operate the vehicle to implement the second vehicle action in response to determining that the second vehicle action is a safer alternative to the first vehicle action. For example, the processor may determine the first vehicle action is dangerous and/or likely to cause a collision. In some embodiments, means for performing the operations of block910may include a processor (e.g.,164) coupled to electronic storage (e.g.,635) and a vehicle management system (e.g.,400,450). To make the determination in block910, the processor may use the vehicle operation module (e.g.,629). In some embodiments, the processor may repeat the operations in determination block902and blocks908and910to periodically or continuously operate the vehicle based on vehicle-control gestures by a passenger. FIG.9Billustrates a method903for operating the vehicle based on vehicle-control gestures by a passenger in accordance with various embodiments. In response to the processor determining that the first vehicle action associated with the detected first vehicle-control gesture is not safe for the vehicle to execute (i.e., determination block902=“No”), a vehicle processor may determine whether the first vehicle action associated with the detected first vehicle-control gesture is safe for the vehicle to execute after a determined delay period in determination block904. For example, as described above with regard toFIG.8, if the vehicle were to immediately steer into the right-hand lane it would be dangerous, but if the two lead vehicles (804,806) shift into the two left lanes or the left-most lane, then steering into the right-hand lane after a brief delay (e.g., one or two seconds of pumping the brakes) may be the safest route. Thus, the processor may determine whether a delay period might change the determination made in determination block902of the method900. In some embodiments, means for performing the operations of determination block904may include a processor (e.g.,164) coupled to one or more sensors (e.g.,101), electronic storage (e.g.,635), and a vehicle management system (e.g.,400,450). To make the determination in determination block904, the processor may use the delay period assessment module (e.g.,622). In response to the processor determining that the first vehicle action associated with the detected first vehicle-control gesture (e.g.,63) is safe for the vehicle to execute after the determined delay period (i.e., determination block904=“Yes”), the processor may operate the vehicle to implement the first vehicle action after the determined delay period in block906. In response to the processor determining that the first vehicle action associated with the detected first vehicle-control gesture is not safe for the vehicle to execute after the determined delay period (i.e., determination block904=“No”), the processor may follow the operations in block908of the method900as described. In some embodiments, means for performing the operations of determination block904may include a processor (e.g.,164) coupled to one or more sensors (e.g.,101), electronic storage (e.g.,635), a vehicle management system (e.g.,400,450), the vehicle action safety determination module (e.g.,618), and the delay period assessment module622. 
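A compact, hypothetical sketch of the decision flow of FIGS.9A and9B described above follows. The callback names (is_safe_now, safe_after_delay_s, safer_alternative) are placeholders for the vehicle action safety determination, delay period assessment, and alternative vehicle action determination modules, and are not taken from the embodiments.

```python
# Sketch of the block 902/904/906/908/910 decision flow; callback names and
# the example values are assumptions made for illustration.
from typing import Callable, Optional, Tuple

def handle_gesture_action(
    first_action: str,
    is_safe_now: Callable[[str], bool],
    safe_after_delay_s: Callable[[str], Optional[float]],
    safer_alternative: Callable[[str], Optional[str]],
) -> Tuple[str, Optional[str], float]:
    """Return (decision, action, delay_s) following the method 900/903 flow."""
    if is_safe_now(first_action):                       # determination block 902
        return ("execute", first_action, 0.0)           # block 704
    delay = safe_after_delay_s(first_action)            # determination block 904
    if delay is not None:
        return ("execute_after_delay", first_action, delay)   # block 906
    alternative = safer_alternative(first_action)       # block 908
    if alternative is not None:
        return ("execute_alternative", alternative, 0.0)       # block 910
    return ("reject", None, 0.0)

if __name__ == "__main__":
    decision = handle_gesture_action(
        "lane_change_right",
        is_safe_now=lambda a: False,                     # target lane currently occupied
        safe_after_delay_s=lambda a: 5.0,                # predicted to clear in ~5 s
        safer_alternative=lambda a: "overtake_then_lane_change",
    )
    print(decision)  # ('execute_after_delay', 'lane_change_right', 5.0)
```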
In some embodiments, the processor may repeat any or all of the operations in determination block904and block906to repeatedly or continuously determine how to operate the vehicle as needed. FIG.9Cillustrates a method905for operating the vehicle based on vehicle-control gestures by a passenger in accordance with various embodiments. Following the operations in block702of the method700or alternative block901of the method900as described, a vehicle processor may determine whether the first vehicle action includes an unusual vehicle operation in determination block912. For example, if the first vehicle action includes a maneuver that has never been performed before, is erratic, and/or dangerous, the processor may conclude the maneuver is unusual. Thus, the processor may assess and determine the first vehicle action includes an unusual vehicle operation in determination block912of the method905. In some embodiments, means for performing the operations of determination block912may include a processor (e.g.,164) coupled to one or more sensors (e.g.,101), electronic storage (e.g.,635), and a vehicle management system (e.g.,400,450). To make the determination in determination block912, the processor may use the unusual operation safety assessment module (e.g.,628). In response to the processor determining that the first vehicle action does not include an unusual vehicle operation (i.e., determination block912=“No”), the processor may operate the vehicle to implement the first vehicle action in block704of method700as described. In response to the processor determining that the first vehicle action includes an unusual vehicle operation (i.e., determination block912=“Yes”), the processor may determine whether the detected first vehicle-control gesture includes an added indication that the unusual vehicle operation is intended by the first passenger in determination block914. In some embodiments, means for performing the operations of determination block912may include a processor (e.g.,164) coupled to one or more sensors (e.g.,101), electronic storage (e.g.,635), a vehicle management system (e.g.,400,450), and the added indication determination module (e.g.,626). In determination block914, the vehicle processor may determine whether the detected first vehicle-control gesture includes an added indication that the unusual vehicle operation is intended by the first passenger. For example, the passenger may exaggerate a gesture, repeat the gesture, perform a gesture more quickly than usual, or provide a non-visual input (e.g., an audio input) received in conjunction with the detected first vehicle-control gesture. In response to the processor determining that the detected first vehicle-control gesture includes an added indication that the unusual vehicle operation is intended by the first passenger (i.e., determination block914=“Yes”), the processor may operate the vehicle to implement the first vehicle action in block704of the method700as described. In response to the processor determining that the detected first vehicle-control gesture does not include an added indication that the unusual vehicle operation is intended by the first passenger (i.e., determination block914=“No”), the processor may prompt passenger(s) (e.g., the first passenger) for an added indication in block916. In block916, one or more passengers may be prompted (e.g., tone, light, vibration, etc.) for an added indication that the detected first vehicle-control gesture was intended or is now intended. 
As described with regard to the added indication determination module626, the prompt to the passenger(s) may include details that inform the passenger(s) about or otherwise correspond to the type of added indication that is needed. In some implementations, the processor may only process responses from the passenger that made the detected first vehicle-control gesture. Alternatively, the processor may accept responses from any designated driver or from any passenger. To give the passenger(s) time to respond, the prompt for an added indication may give the passenger(s), or at least the passenger making the detected first vehicle-control gesture, an allotted time to respond before the processor moves on and determines an alternative action in block908. For example, the allotted time may be 3-5 seconds. In some embodiments, the length of the time may depend on current circumstances. For example, when the vehicle is traveling at high speeds the processor may wait a shorter amount of time before taking an action without an added indication than when the vehicle is traveling at low speeds. Similarly, other circumstances, such as location, weather conditions, time of day/year, lighting, etc., may be taken into account in determining how long the processor should wait for an added indication from one or more passengers. The means for prompting passengers for the operations of block916may include a processor (e.g.,164), a speaker (e.g., through a vehicle entertainment system), other vehicle computing system(s)604, external resources (e.g.,630), electronic storage (e.g.,635), a vehicle management system (e.g.,400,450), and the added indication determination module (e.g.,626). During the allotted time (i.e., before expiration of the allotted time), the processor may determine whether an added indication is received in determination block918. The means for performing the operations of determination block918may include a processor (e.g.,164), electronic storage (e.g.,635), a vehicle management system (e.g.,400,450), and the added indication determination module (e.g.,626). In response to receiving an added indication during the allotted time (i.e., determination block918=“Yes”), the processor may operate the vehicle to implement the first vehicle action in block704of the method700as described. In response to not receiving an added indication during the allotted time (i.e., determination block918=“No”), the processor may follow the operations in block908of the method900as described. In some embodiments, means for performing the operations of determination block914may include a processor (e.g.,164) coupled to one or more sensors (e.g.,101), electronic storage (e.g.,635), a vehicle management system (e.g.,400,450), and the unusual operation safety assessment module (e.g.,628). In some embodiments, the processor may repeat any or all of the operations in determination blocks912and914to repeatedly or continuously determine how to operate the vehicle as needed. FIG.9Dillustrates method907for operating the vehicle based on vehicle-control gestures by a passenger in accordance with various embodiments. In response to determining that the detected first vehicle-control gesture includes an added indication that the unusual vehicle operation is intended by the first passenger (i.e., determination block914=“Yes”), the vehicle processor may determine whether the unusual vehicle operation is safe for the vehicle or occupants in determination block920. 
For example, even though the passenger gave an indication that the unusual vehicle operation was intentional, if the unusual vehicle operation is unsafe, a processor may reject or prevent the vehicle from executing the operation. Thus, the processor may assess and determine how safe the unusual vehicle operations is for the vehicle and occupants in determination block920of the method907. In some embodiments, means for performing the operations of determination block920may include a processor (e.g.,164) coupled to one or more sensors (e.g.,101), electronic storage (e.g.,635), and a vehicle management system (e.g.,400,450). To make the determination in determination block920, the processor may use the unusual operation safety assessment module (e.g.,628). In response to the processor determining that the unusual vehicle operation is not safe (i.e., determination block920=“No”), the processor may determine a second vehicle action that is a safer alternative to the first vehicle action for the vehicle to execute in block908as described above with regard to the method700. In response to the processor determining that the unusual vehicle operation is safe (i.e., determination block920=“Yes”), the processor may operate the vehicle to implement the first vehicle action in block922. In some embodiments, means for performing the operations of determination block922may include a processor (e.g.,164) to one or more sensors (e.g.,101), electronic storage (e.g.,635), a vehicle management system (e.g.,400,450), and the unusual operation safety assessment module (e.g.,628). In some embodiments, the processor may repeat any or all of the operations in determination block920and block922to repeatedly or continuously determine how to operate the vehicle as needed. The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the blocks of various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art the order of blocks in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the blocks; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the” is not to be construed as limiting the element to the singular. The various illustrative logical blocks, modules, circuits, and algorithm blocks described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and blocks have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such embodiment decisions should not be interpreted as causing a departure from the scope of various embodiments. 
The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of communication devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some blocks or methods may be performed by circuitry that is specific to a given function. In various embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or non-transitory processor-readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product. The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the embodiments. Thus, various embodiments are not intended to be limited to the embodiments shown herein but are to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. | 133,576 |
11858533 | DETAILED DESCRIPTION According to certain aspects, implementations in the present disclosure relate to a system and a method for controlling a vehicle using light detection and ranging (lidar), and more particularly to a system and a method for controlling a vehicle by detecting an object using a depolarization ratio of a return signal reflected by the object. According to certain aspects, an autonomous vehicle control system may include one or more processors. The one or more processors may be configured to cause a transmitter to transmit a transmit signal from a laser source. The one or more processors may be configured to cause a receiver to receive a return signal reflected by an object. The one or more processors may be configured to cause one or more optics to generate a first polarized signal of the return signal with a first polarization, and generate a second polarized signal of the return signal with a second polarization that is orthogonal to the first polarization. The one or more processors may be configured to operate a vehicle based on a ratio of reflectivity between the first polarized signal and the second polarized signal. In a conventional lidar system, a laser signal (LS) is linearly polarized. In reflecting a signal transmitted from the lidar system, many objects in the world depolarize the return signal. For example, a return signal of each object may come back with the same polarization state as the polarized LS or with a different polarization state from the polarized LS. However, the lidar system may detect only the polarized part of the return signal consistent with the polarized LS. As a result, the other polarization of the return signal is not measured or utilized. To solve this problem, in some implementations, a polarization-sensitive lidar can be provided by splitting out the return light into two polarization states, so that they can be detected independently. The signals split into the two polarization states can then be compared to estimate how much the object (or target) has depolarized the return signal. A lidar system may include a polarization beam splitter (PBS). The PBS may be a polarization beam splitter/combiner (PBSC). The split return signals may indicate respective separate images of the object in different polarization states. The lidar system may calculate a ratio between the separate images of the object in different polarization states (referred to as “a depolarization ratio”). In some implementations, the lidar system may include two detectors configured to detect respective polarized signals from the return signal using a splitter. The splitter may be a polarization beam splitter (PBS). The lidar system may include a single detector configured to detect two polarized signals from the return signal using a splitter and a phase shifter by multiplexing the return signal into the single detector. In some implementations, in response to detecting two polarized signals, one or more detectors may generate two corresponding electrical signals in separate channels so that the electrical signals can be processed independently by a processing system. The two electrical signals may indicate respective separate images of the object in different polarization states. The lidar system may calculate a depolarization ratio between the separate images of the object in different polarization states. 
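As an illustrative sketch only (the detector hardware, digitizer chain, and signal conditioning are omitted), the per-point depolarization ratio between the two polarization channels might be computed as follows. The array shapes and the small epsilon guard are assumptions made for this example.

```python
# Hedged sketch: the per-channel reflectivity estimates would come from the
# receiver/digitizer chain; shapes and the epsilon guard are assumptions.
import numpy as np

def depolarization_ratio(co_pol: np.ndarray,
                         cross_pol: np.ndarray,
                         eps: float = 1e-12) -> np.ndarray:
    """Ratio of reflectivity between the two orthogonal polarization channels
    produced by the polarization beam splitter, computed point by point.

    co_pol    : reflectivity detected in the same polarization state as the
                transmitted (linearly polarized) signal.
    cross_pol : reflectivity detected in the orthogonal polarization state.
    """
    co_pol = np.asarray(co_pol, dtype=float)
    cross_pol = np.asarray(cross_pol, dtype=float)
    return cross_pol / (co_pol + eps)

if __name__ == "__main__":
    # Two example returns: a smooth metal surface barely depolarizes the beam,
    # while rough asphalt scatters more energy into the orthogonal channel.
    co = np.array([0.80, 0.30])
    cross = np.array([0.02, 0.25])
    print(depolarization_ratio(co, cross))  # ~[0.025, 0.83]
```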
In some implementations, a system may have two beams of polarization-sensitive lidar, each of which has two receive/digitizer channels for independent signal processing. The system may process, in two independent channels, independent streams from a point cloud. In some implementations, a processing system may perform simple post-processing on the electrical signals in each channel to produce imagery for the object in the respective polarization. The processing system may calculate an average reflectivity of an image of the object over a plurality of samples. The processing system may perform a spatial averaging per voxel. For example, the processing system may include a hash-based voxelizer to efficiently generate a plurality of voxels representing the object in a polarization state. With a hash-based voxelizer, the processing system can perform a quick search for a voxel. The processing system may calculate an average reflectivity within each voxel of the plurality of voxels over a plurality of samples. As a result, the calculated average reflectivity may be different from measured values of reflectivity. For example, each measurement value is not correlated while an average may be correlated. The number of samples for averaging is less than 100. For example, 5 or 12 samples may be used. The processing system may calculate a ratio of the average reflectivity within each voxel between two channels. In some implementations, an average reflectivity over some region of a space (e.g., over some voxels) may be meaningful to understand polarization of the object. In some implementations, the use of average reflectivity can effectively reduce variance in the reflectivity measurement caused by laser speckle, when a coherent lidar system is used, for example. The average reflectivity of a voxel over a plurality of samples may be affected by selection of parameters, for example, spatial resolution or precision (e.g., a voxel of 5 cm×10 cm dimension). In general, averaging over a smaller voxel may contribute less to the average over the whole image of the object, while it may contribute more to the contrast of the image (making the object more distinguishable). Selection of proper parameters can provide another dimension beyond pure reflectivity measurement, thereby increasing the value of the data. According to certain aspects, implementations in the present disclosure relate to a light detection and ranging (lidar) system that includes a transmitter configured to transmit a transmit signal from a laser source, a receiver configured to receive a return signal reflected by an object, one or more optics, and a processor. The one or more optics may be configured to generate a first polarized signal of the return signal with a first polarization, and generate a second polarized signal of the return signal with a second polarization that is orthogonal to the first polarization. The processor may be configured to calculate a ratio of reflectivity between the first polarized signal and the second polarized signal. According to certain aspects, implementations in the present disclosure relate to a method that includes transmitting, from a laser source, a transmit signal and receiving a return signal reflected by an object. The method may include generating, by one or more optics, a first polarized signal of the return signal with a first polarization. 
The method may include generating, by the one or more optics, a second polarized signal of the return signal with a second polarization that is orthogonal to the first polarization. The method may include operating, by one or more processors, a vehicle based on a ratio of reflectivity between the first polarized signal and the second polarized signal. Various implementations in the present disclosure have one or more of the following advantages and benefits. First, implementations in the present disclosure can provide useful techniques for improving disambiguation of different objects using another unique signal mapped onto the point cloud based on a depolarization ratio in addition to signals conventionally used, e.g., reflectivity signals only. In some implementations, object detection based on a depolarization ratio can disambiguate several key surfaces, for example, (1) asphalt (vs. grass, rough concrete, or gravel), (2) metal poles (vs. trees or telephone/utility poles), (3) retro-signs (vs. metal surfaces), (4) lane markings (vs. road surfaces), and (5) vehicle plates (vs. vehicle surfaces). These disambiguation techniques can help recognize certain road markings, signs, pedestrians, etc. The depolarization ratio can be used to detect or recognize sparse features (e.g., features having a relatively smaller region), so that such sparse features can be easily registered for disambiguation of different objects. Disambiguation of different objects or materials based on the depolarization ratio can help a perception system to more accurately detect, track, determine, and/or classify objects within the environment surrounding the vehicle (e.g., using artificial intelligence techniques). Second, implementations in the present disclosure can provide useful techniques for improving stability and accuracy of object detection by utilizing a differential measurement (e.g., a ratio of reflectivity between signals with different polarization states), which should lead to low variance across changing conditions (e.g., weather changes such as snow, ice, rain, etc.). Third, implementations in the present disclosure can provide useful techniques for making lidar systems more interchangeable relative to a specific data product. Metal or specular surfaces often generate stronger “glint” returns in radar data. This “glint” effect may exhibit variance at different common lidar wavelengths (e.g., 905 nm vs. 1550 nm). In other words, reflectivity-based features often change significantly at different wavelengths. Because the polarization ratio measurement is not so sensitive to the exact wavelength, it can make lidar systems more interchangeable relative to a specific data product. Fourth, the depolarization ratio can be related to the magnitude of the angle of incidence between the lidar beam and the object. This can be helpful for determining surface normals, which are commonly used in localization and mapping. In some implementations, an additional datatype or piece of information on the magnitude of the angle of incidence can be obtained from the depolarization ratio measurement to gain a benefit in localization and mapping, for example. Fifth, implementations in the present disclosure can provide useful techniques for improving other technical fields such as localization (e.g., spatial relationships between the vehicle and stationary objects), camera simulation, lidar/radar simulation, and radar measurements. For example, detection of contrast in common building materials can be utilized in localization.
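As one illustration of the disambiguation advantage described above, the sketch below attaches a per-point depolarization value to a point cloud and applies a simple threshold to separate polarization-maintaining returns (e.g., asphalt, metal poles) from depolarizing returns (e.g., grass, gravel, retro-signs). It assumes a normalized depolarization value in which values near 0 indicate polarization-maintaining surfaces and values near 1 indicate depolarizing surfaces, consistent with the color scheme described later with reference to FIG.4A; the 0.5 threshold and the structured-array layout are hypothetical choices, not part of the disclosure.

```python
# Illustrative sketch only: tagging point-cloud returns using a normalized
# per-point depolarization value (0 = polarization maintaining, 1 = depolarizing).
# The 0.5 threshold and the structured-array layout are hypothetical choices.
import numpy as np

# Each point carries position, conventional reflectivity, and a depolarization value.
point_dtype = np.dtype([("xyz", np.float32, 3),
                        ("reflectivity", np.float32),
                        ("depolarization", np.float32)])

def split_by_depolarization(points: np.ndarray, threshold: float = 0.5):
    """Separate polarization-maintaining returns (e.g., asphalt, metal poles)
    from depolarizing returns (e.g., grass, gravel, retro-signs)."""
    depolarizing = points[points["depolarization"] >= threshold]
    maintaining = points[points["depolarization"] < threshold]
    return maintaining, depolarizing
```

A perception system could use such a partition as an additional cue alongside conventional reflectivity when classifying surfaces.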
In some implementations, the depolarization ratio can reveal or indicate whether a surface has properties of diffuse reflection or specular reflection. Surface property information on diffuse or specular reflection may be obtained based on the depolarization ratio and represented in a high definition (HD) map, so that such surface property information can be extracted by an autonomous vehicle control system. Such surface property information can be used for a camera simulation in modeling different lighting conditions. Similarly, the depolarization ratio can be utilized in an integrated lidar/radar simulation. Such surface property information can be also utilized in analyzing radar data or camera data. 1. System Environment for Autonomous Vehicles FIG.1Ais a block diagram illustrating an example of a system environment for autonomous vehicles according to some implementations. Referring toFIG.1A, an example autonomous vehicle110A within which the various techniques disclosed herein may be implemented. The vehicle110A, for example, may include a powertrain192including a prime mover194powered by an energy source196and capable of providing power to a drivetrain198, as well as a control system180including a direction control182, a powertrain control184, and a brake control186. The vehicle110A may be implemented as any number of different types of vehicles, including vehicles capable of transporting people and/or cargo, and capable of traveling in various environments, and it will be appreciated that the aforementioned components180-198can vary widely based upon the type of vehicle within which these components are utilized. For simplicity, the implementations discussed hereinafter will focus on a wheeled land vehicle such as a car, van, truck, bus, etc. In such implementations, the prime mover194may include one or more electric motors and/or an internal combustion engine (among others). The energy source may include, for example, a fuel system (e.g., providing gasoline, diesel, hydrogen, etc.), a battery system, solar panels or other renewable energy source, and/or a fuel cell system. The drivetrain198can include wheels and/or tires along with a transmission and/or any other mechanical drive components to convert the output of the prime mover194into vehicular motion, as well as one or more brakes configured to controllably stop or slow the vehicle110A and direction or steering components suitable for controlling the trajectory of the vehicle110A (e.g., a rack and pinion steering linkage enabling one or more wheels of the vehicle110A to pivot about a generally vertical axis to vary an angle of the rotational planes of the wheels relative to the longitudinal axis of the vehicle). In some implementations, combinations of powertrains and energy sources may be used (e.g., in the case of electric/gas hybrid vehicles), and in some instances multiple electric motors (e.g., dedicated to individual wheels or axles) may be used as a prime mover. The direction control182may include one or more actuators and/or sensors for controlling and receiving feedback from the direction or steering components to enable the vehicle110A to follow a desired trajectory. The powertrain control184may be configured to control the output of the powertrain102, e.g., to control the output power of the prime mover194, to control a gear of a transmission in the drivetrain198, etc., thereby controlling a speed and/or direction of the vehicle110A. 
The brake control116may be configured to control one or more brakes that slow or stop vehicle110A, e.g., disk or drum brakes coupled to the wheels of the vehicle. Other vehicle types, including but not limited to off-road vehicles, all-terrain or tracked vehicles, construction equipment etc., will necessarily utilize different powertrains, drivetrains, energy sources, direction controls, powertrain controls and brake controls. Moreover, in some implementations, some of the components can be combined, e.g., where directional control of a vehicle is primarily handled by varying an output of one or more prime movers. Therefore, implementations disclosed herein are not limited to the particular application of the herein-described techniques in an autonomous wheeled land vehicle. Various levels of autonomous control over the vehicle110A can be implemented in a vehicle control system120, which may include one or more processors122and one or more memories124, with each processor122configured to execute program code instructions126stored in a memory124. The processors(s) can include, for example, graphics processing unit(s) (“GPU(s)”)) and/or central processing unit(s) (“CPU(s)”). Sensors130may include various sensors suitable for collecting information from a vehicle's surrounding environment for use in controlling the operation of the vehicle. For example, sensors130can include radar sensor134, lidar (Light Detection and Ranging) sensor136, a 3D positioning sensors138, e.g., any of an accelerometer, a gyroscope, a magnetometer, or a satellite navigation system such as GPS (Global Positioning System), GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema, or Global Navigation Satellite System), BeiDou Navigation Satellite System (BDS), Galileo, Compass, etc. The 3D positioning sensors138can be used to determine the location of the vehicle on the Earth using satellite signals. The sensors130can include a camera140and/or an IMU (inertial measurement unit)142. The camera140can be a monographic or stereographic camera and can record still and/or video images. The IMU142can include multiple gyroscopes and accelerometers capable of detecting linear and rotational motion of the vehicle in three directions. One or more encoders (not illustrated), such as wheel encoders may be used to monitor the rotation of one or more wheels of vehicle110A. Each sensor130can output sensor data at various data rates, which may be different than the data rates of other sensors130. The outputs of sensors130may be provided to a set of control subsystems150, including, a localization subsystem152, a planning subsystem156, a perception subsystem154, and a control subsystem158. The localization subsystem152can perform functions such as precisely determining the location and orientation (also sometimes referred to as “pose”) of the vehicle110A within its surrounding environment, and generally within some frame of reference. The location of an autonomous vehicle can be compared with the location of an additional vehicle in the same environment as part of generating labeled autonomous vehicle data. The perception subsystem154can perform functions such as detecting, tracking, determining, and/or identifying objects within the environment surrounding vehicle110A. A machine learning model can be utilized in tracking objects. The planning subsystem156can perform functions such as planning a trajectory for vehicle110A over some timeframe given a desired destination as well as the static and moving objects within the environment. 
A machine learning can be utilized in planning a vehicle trajectory. The control subsystem158can perform functions such as generating suitable control signals for controlling the various controls in the vehicle control system120in order to implement the planned trajectory of the vehicle110A. A machine learning model can be utilized to generate one or more signals to control an autonomous vehicle to implement the planned trajectory. It will be appreciated that the collection of components illustrated inFIG.1Afor the vehicle control system120is merely exemplary in nature. Individual sensors may be omitted in some implementations. Additionally or alternatively, in some implementations, multiple sensors of types illustrated inFIG.1Amay be used for redundancy and/or to cover different regions around a vehicle, and other types of sensors may be used. Likewise, different types and/or combinations of control subsystems may be used in other implementations. Further, while subsystems152-158are illustrated as being separate from processor122and memory124, it will be appreciated that in some implementations, some or all of the functionality of a subsystem152-158may be implemented with program code instructions126resident in one or more memories124and executed by one or more processors122, and that these subsystems152-158may in some instances be implemented using the same processor(s) and/or memory. Subsystems may be implemented at least in part using various dedicated circuit logic, various processors, various field programmable gate arrays (“FPGA”), various application-specific integrated circuits (“ASIC”), various real time controllers, and the like, as noted above, multiple subsystems may utilize circuitry, processors, sensors, and/or other components. Further, the various components in the vehicle control system120may be networked in various manners. In some implementations, the vehicle110A may also include a secondary vehicle control system (not illustrated), which may be used as a redundant or backup control system for the vehicle110A. The secondary vehicle control system may be capable of fully operating the autonomous vehicle110A in the event of an adverse event in the vehicle control system120, while in other implementations, the secondary vehicle control system may only have limited functionality, e.g., to perform a controlled stop of the vehicle110A in response to an adverse event detected in the primary vehicle control system120. In still other implementations, the secondary vehicle control system may be omitted. In general, an innumerable number of different architectures, including various combinations of software, hardware, circuit logic, sensors, networks, etc. may be used to implement the various components illustrated inFIG.1AEach processor may be implemented, for example, as a microprocessor and each memory may represent the random access memory (“RAM”) devices comprising a main storage, as well as any supplemental levels of memory, e.g., cache memories, non-volatile or backup memories (e.g., programmable or flash memories), read-only memories, etc. In addition, each memory may be considered to include memory storage physically located elsewhere in the vehicle110A, e.g., any cache memory in a processor, as well as any storage capacity used as a virtual memory, e.g., as stored on a mass storage device or another computer controller. 
One or more processors illustrated inFIG.1A, or entirely separate processors, may be used to implement additional functionality in the vehicle110A outside of the purposes of autonomous control, e.g., to control entertainment systems, to operate doors, lights, convenience features, etc. In addition, for additional storage, the vehicle110A may include one or more mass storage devices, e.g., a removable disk drive, a hard disk drive, a direct access storage device (“DASD”), an optical drive (e.g., a CD drive, a DVD drive, etc.), a solid state storage drive (“SSD”), network attached storage, a storage area network, and/or a tape drive, among others. Furthermore, the vehicle110A may include a user interface164to enable vehicle110A to receive a number of inputs from and generate outputs for a user or operator, e.g., one or more displays, touchscreens, voice and/or gesture interfaces, buttons and other tactile controls, etc. Otherwise, user input may be received via another computer or electronic device, e.g., via an app on a mobile device or via a web interface. Moreover, the vehicle110A may include one or more network interfaces, e.g., network interface162, suitable for communicating with one or more networks170(e.g., a Local Area Network (“LAN”), a wide area network (“WAN”), a wireless network, and/or the Internet, among others) to permit the communication of information with other computers and electronic device, including, for example, a central service, such as a cloud service, from which the vehicle110A receives environmental and other data for use in autonomous control thereof. Data collected by the one or more sensors130can be uploaded to a computing system172via the network170for additional processing. A time stamp can be added to each instance of vehicle data prior to uploading. Additional processing of autonomous vehicle data by computing system172in accordance with many implementations is described with respect toFIG.2. Each processor illustrated inFIG.1A, as well as various additional controllers and subsystems disclosed herein, generally operates under the control of an operating system and executes or otherwise relies upon various computer software applications, components, programs, objects, modules, data structures, etc., as will be described in greater detail below. Moreover, various applications, components, programs, objects, modules, etc. may also execute on one or more processors in another computer coupled to vehicle110A via network170, e.g., in a distributed, cloud-based, or client-server computing environment, whereby the processing required to implement the functions of a computer program may be allocated to multiple computers and/or services over a network. In general, the routines executed to implement the various implementations described herein, whether implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions, or even a subset thereof, will be referred to herein as “program code”. Program code can include one or more instructions that are resident at various times in various memory and storage devices, and that, when read and executed by one or more processors, perform the steps necessary to execute steps or elements embodying the various aspects of the present disclosure. 
Moreover, while implementations have and hereinafter will be described in the context of fully functioning computers and systems, it will be appreciated that the various implementations described herein are capable of being distributed as a program product in a variety of forms, and that implementations can be implemented regardless of the particular type of computer readable media used to actually carry out the distribution. Examples of computer readable media include tangible, non-transitory media such as volatile and non-volatile memory devices, floppy and other removable disks, solid state drives, hard disk drives, magnetic tape, and optical disks (e.g., CD-ROMs, DVDs, etc.) among others. In addition, various program code described hereinafter may be identified based upon the application within which it is implemented in a specific implementation. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the present disclosure should not be limited to use solely in any specific application identified and/or implied by such nomenclature. Furthermore, given the typically endless number of manners in which computer programs may be organized into routines, procedures, methods, modules, objects, and the like, as well as the various manners in which program functionality may be allocated among various software layers that are resident within a typical computer (e.g., operating systems, libraries, API's, applications, applets, etc.), it should be appreciated that the present disclosure is not limited to the specific organization and allocation of program functionality described herein. The environment illustrated inFIG.1Ais not intended to limit implementations disclosed herein. Indeed, other alternative hardware and/or software environments may be used without departing from the scope of implementations disclosed herein. 2. FM LIDAR for Automotive Applications A truck can include a lidar system (e.g., vehicle control system120inFIG.1A, lidar system300inFIG.3A, lidar system350inFIG.3B, etc.). In some implementations, the lidar system can use frequency modulation to encode an optical signal and scatter the encoded optical signal into free-space using optics. By detecting the frequency differences between the encoded optical signal and a returned signal reflected back from an object, the frequency modulated (FM) lidar system can determine the location of the object and/or precisely measure the velocity of the object using the Doppler effect. An FM lidar system may use a continuous wave (referred to as, “FMCW lidar” or “coherent FMCW lidar”) or a quasi-continuous wave (referred to as, “FMQW lidar”). The lidar system can use phase modulation (PM) to encode an optical signal and scatters the encoded optical signal into free-space using optics. An FM or phase-modulated (PM) lidar system may provide substantial advantages over conventional lidar systems with respect to automotive and/or commercial trucking applications. To begin, in some instances, an object (e.g., a pedestrian wearing dark clothing) may have a low reflectivity, in that it only reflects back to the sensors (e.g., sensors130inFIG.1A) of the FM or PM lidar system a low amount (e.g., 10% or less) of the light that hit the object. In other instances, an object (e.g., a shiny road sign) may have a high reflectivity (e.g., above 10%), in that it reflects back to the sensors of the FM lidar system a high amount of the light that hit the object. 
Regardless of the object's reflectivity, an FM lidar system may be able to detect (e.g., classify, recognize, discover, etc.) the object at greater distances (e.g., 2×) than a conventional lidar system. For example, an FM lidar system may detect a low reflectivity object beyond 300 meters, and a high reflectivity object beyond 400 meters. To achieve such improvements in detection capability, the FM lidar system may use sensors (e.g., sensors130inFIG.1A). In some implementations, these sensors can be single photon sensitive, meaning that they can detect the smallest amount of light possible. While an FM lidar system may, in some applications, use infrared wavelengths (e.g., 950 nm, 1550 nm, etc.), it is not limited to the infrared wavelength range (e.g., near infrared: 800 nm-1500 nm; middle infrared: 1500 nm-5600 nm; and far infrared: 5600 nm-1,000,000 nm). By operating the FM or PM lidar system in infrared wavelengths, the FM or PM lidar system can broadcast stronger light pulses or light beams while meeting eye safety standards. Conventional lidar systems are often not single photon sensitive and/or only operate in near infrared wavelengths, requiring them to limit their light output (and distance detection capability) for eye safety reasons. Thus, by detecting an object at greater distances, an FM lidar system may have more time to react to unexpected obstacles. Indeed, even a few milliseconds of extra time could improve safety and comfort, especially with heavy vehicles (e.g., commercial trucking vehicles) that are driving at highway speeds. Another advantage of an FM lidar system is that it provides accurate velocity for each data point instantaneously. In some implementations, a velocity measurement is accomplished using the Doppler effect which shifts frequency of the light received from the object based at least one of the velocity in the radial direction (e.g., the direction vector between the object detected and the sensor) or the frequency of the laser signal. For example, for velocities encountered in on-road situations where the velocity is less than 100 meters per second (m/s), this shift at a wavelength of 1550 nanometers (nm) amounts to the frequency shift that is less than 130 megahertz (MHz). This frequency shift is small such that it is difficult to detect directly in the optical domain. However, by using coherent detection in FMCW, PMCW, or FMQW lidar systems, the signal can be converted to the RF domain such that the frequency shift can be calculated using various signal processing techniques. This enables the autonomous vehicle control system to process incoming data faster. Instantaneous velocity calculation also makes it easier for the FM lidar system to determine distant or sparse data points as objects and/or track how those objects are moving over time. For example, an FM lidar sensor (e.g., sensors130inFIG.1A) may only receive a few returns (e.g., hits) on an object that is 300 m away, but if those return give a velocity value of interest (e.g., moving towards the vehicle at >70 mph), then the FM lidar system and/or the autonomous vehicle control system may determine respective weights to probabilities associated with the objects. Faster identification and/or tracking of the FM lidar system gives an autonomous vehicle control system more time to maneuver a vehicle. A better understanding of how fast objects are moving also allows the autonomous vehicle control system to plan a better reaction. 
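The frequency-shift figure quoted above can be checked with the standard two-way Doppler relation f_d = 2v/λ; the short snippet below is only a numerical sanity check under that assumption and is not part of the disclosed signal chain.

```python
# Back-of-envelope check of the Doppler shift figure quoted above, assuming
# the usual two-way Doppler relation f_d = 2 * v / wavelength for a coherent lidar.
wavelength_m = 1550e-9       # 1550 nm operating wavelength
radial_velocity_mps = 100.0  # on-road radial velocities are below 100 m/s
doppler_shift_hz = 2.0 * radial_velocity_mps / wavelength_m
print(f"{doppler_shift_hz / 1e6:.0f} MHz")  # prints "129 MHz", i.e. less than 130 MHz
```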
Another advantage of an FM lidar system is that it has less static compared to conventional lidar systems. That is, the conventional lidar systems that are designed to be more light-sensitive typically perform poorly in bright sunlight. These systems also tend to suffer from crosstalk (e.g., when sensors get confused by each other's light pulses or light beams) and from self-interference (e.g., when a sensor gets confused by its own previous light pulse or light beam). To overcome these disadvantages, vehicles using the conventional lidar systems often need extra hardware, complex software, and/or more computational power to manage this “noise.” In contrast, FM lidar systems do not suffer from these types of issues because each sensor is specially designed to respond only to its own light characteristics (e.g., light beams, light waves, light pulses). If the returning light does not match the timing, frequency, and/or wavelength of what was originally transmitted, then the FM sensor can filter (e.g., remove, ignore, etc.) out that data point. As such, FM lidar systems produce (e.g., generates, derives, etc.) more accurate data with less hardware or software requirements, enabling safer and smoother driving. Lastly, an FM lidar system is easier to scale than conventional lidar systems. As more self-driving vehicles (e.g., cars, commercial trucks, etc.) show up on the road, those powered by an FM lidar system likely will not have to contend with interference issues from sensor crosstalk. Furthermore, an FM lidar system uses less optical peak power than conventional lidar sensors. As such, some or all of the optical components for an FM lidar can be produced on a single chip, which produces its own benefits, as discussed herein. 3. Commercial Trucking FIG.1Bis a block diagram illustrating an example of a system environment for autonomous commercial trucking vehicles, according to some implementations. The environment100B includes a commercial truck102B for hauling cargo106B. In some implementations, the commercial truck102B may include vehicles configured to long-haul freight transport, regional freight transport, intermodal freight transport (i.e., in which a road-based vehicle is used as one of multiple modes of transportation to move freight), and/or any other road-based freight transport applications. The commercial truck102B may be a flatbed truck, a refrigerated truck (e.g., a reefer truck), a vented van (e.g., dry van), a moving truck, etc. The cargo106B may be goods and/or produce. The commercial truck102B may include a trailer to carry the cargo106B, such as a flatbed trailer, a lowboy trailer, a step deck trailer, an extendable flatbed trailer, a sidekit trailer, etc. The environment100B includes an object110B (shown inFIG.1Bas another vehicle) that is within a distance range that is equal to or less than 30 meters from the truck. The commercial truck102B may include a lidar system104B (e.g., an FM lidar system, vehicle control system120inFIG.1A, lidar system300inFIG.3A, lidar system350inFIG.3B, etc.) for determining a distance to the object110B and/or measuring the velocity of the object110B. AlthoughFIG.1Bshows that one lidar system104B is mounted on the front of the commercial truck102B, the number of lidar system and the mounting area of the lidar system on the commercial truck are not limited to a particular number or a particular area. 
The commercial truck102B may include any number of lidar systems104B (or components thereof, such as sensors, modulators, coherent signal generators, etc.) that are mounted onto any area (e.g., front, back, side, top, bottom, underneath, and/or bottom) of the commercial truck102B to facilitate the detection of an object in any free-space relative to the commercial truck102B. As shown, the lidar system104B in environment100B may be configured to detect an object (e.g., another vehicle, a bicycle, a tree, street signs, potholes, etc.) at short distances (e.g., 30 meters or less) from the commercial truck102B. FIG.1Cis a block diagram illustrating an example of a system environment for autonomous commercial trucking vehicles, according to some implementations. The environment100C includes the same components (e.g., commercial truck102B, cargo106B, lidar system104B, etc.) that are included in environment100B. The environment100C includes an object110C (shown inFIG.1Cas another vehicle) that is within a distance range that is (i) more than 30 meters and (ii) equal to or less than 150 meters from the commercial truck102B. As shown, the lidar system104B in environment100C may be configured to detect an object (e.g., another vehicle, a bicycle, a tree, street signs, potholes, etc.) at a distance (e.g., 100 meters) from the commercial truck102B. FIG.1Dis a block diagram illustrating an example of a system environment for autonomous commercial trucking vehicles, according to some implementations. The environment100D includes the same components (e.g., commercial truck102B, cargo106B, lidar system104B, etc.) that are included in environment100B. The environment100D includes an object110D (shown inFIG.1Das another vehicle) that is within a distance range that is more than 150 meters from the commercial truck102B. As shown, the lidar system104B in environment100D may be configured to detect an object (e.g., another vehicle, a bicycle, a tree, street signs, potholes, etc.) at a distance (e.g., 300 meters) from the commercial truck102B. In commercial trucking applications, it is important to effectively detect objects at all ranges due to the increased weight and, accordingly, longer stopping distance required for such vehicles. FM lidar systems (e.g., FMCW and/or FMQW systems) or PM lidar systems are well-suited for commercial trucking applications due to the advantages described above. As a result, commercial trucks equipped with such systems may have an enhanced ability to safely move both people and goods across short or long distances, improving the safety of not only the commercial truck but of the surrounding vehicles as well. In various implementations, such FM or PM lidar systems can be used in semi-autonomous applications, in which the commercial truck has a driver and some functions of the commercial truck are autonomously operated using the FM or PM lidar system, or fully autonomous applications, in which the commercial truck is operated entirely by the FM or lidar system, alone or in combination with other vehicle systems. 4. Continuous Wave Modulation and Quasi-Continuous Wave Modulation In a lidar system that uses CW modulation, the modulator modulates the laser light continuously. For example, if a modulation cycle is 10 seconds, an input signal is modulated throughout the whole 10 seconds. Instead, in a lidar system that uses quasi-CW modulation, the modulator modulates the laser light to have both an active portion and an inactive portion. 
For example, for a 10 second cycle, the modulator modulates the laser light only for 8 seconds (sometimes referred to as, “the active portion”), but does not modulate the laser light for 2 seconds (sometimes referred to as, “the inactive portion”). By doing this, the lidar system may be able to reduce power consumption for the 2 seconds because the modulator does not have to provide a continuous signal. In Frequency Modulated Continuous Wave (FMCW) lidar for automotive applications, it may be beneficial to operate the lidar system using quasi-CW modulation where FMCW measurement and signal processing methodologies are used, but the light signal is not in the on-state (e.g., enabled, powered, transmitting, etc.) all the time. In some implementations, Quasi-CW modulation can have a duty cycle that is equal to or greater than 1% and up to 50%. If the energy in the off-state (e.g., disabled, powered-down, etc.) can be expended during the actual measurement time then there may be a boost to signal-to-noise ratio (SNR) and/or a reduction in signal processing requirements to coherently integrate all the energy in the longer time scale. 5. A LIDAR System Using a Depolarization Ratio of a Return Signal FIG.2is a block diagram illustrating an example of a computing system according to some implementations. Referring toFIG.2, the illustrated example computing system172includes one or more processors210in communication, via a communication system240(e.g., bus), with memory260, at least one network interface controller230with network interface port for connection to a network (not shown), and other components, e.g., an input/output (“I/O”) components interface450connecting to a display (not illustrated) and an input device (not illustrated). Generally, the processor(s)210will execute instructions (or computer programs) received from memory. The processor(s)210illustrated incorporate, or are directly connected to, cache memory220. In some instances, instructions are read from memory260into the cache memory220and executed by the processor(s)210from the cache memory220. In more detail, the processor(s)210may be any logic circuitry that processes instructions, e.g., instructions fetched from the memory260or cache220. In some implementations, the processor(s)210are microprocessor units or special purpose processors. The computing device400may be based on any processor, or set of processors, capable of operating as described herein. The processor(s)210may be single core or multi-core processor(s). The processor(s)210may be multiple distinct processors. The memory260may be any device suitable for storing computer readable data. The memory260may be a device with fixed storage or a device for reading removable storage media. Examples include all forms of non-volatile memory, media and memory devices, semiconductor memory devices (e.g., EPROM, EEPROM, SDRAM, and flash memory devices), magnetic disks, magneto optical disks, and optical discs (e.g., CD ROM, DVD-ROM, or Blu-Ray® discs). A computing system172may have any number of memory devices as the memory260. The cache memory220is generally a form of computer memory placed in close proximity to the processor(s)210for fast read times. In some implementations, the cache memory220is part of, or on the same chip as, the processor(s)210. In some implementations, there are multiple levels of cache220, e.g., L2 and L3 cache layers. The network interface controller230manages data exchanges via the network interface (sometimes referred to as network interface ports). 
The network interface controller230handles the physical and data link layers of the OSI model for network communication. In some implementations, some of the network interface controller's tasks are handled by one or more of the processor(s)210. In some implementations, the network interface controller230is part of a processor210. In some implementations, a computing system172has multiple network interfaces controlled by a single controller230. In some implementations, a computing system172has multiple network interface controllers230. In some implementations, each network interface is a connection point for a physical network link (e.g., a cat-5 Ethernet link). In some implementations, the network interface controller230supports wireless network connections and an interface port is a wireless (e.g., radio) receiver/transmitter (e.g., for any of the IEEE 802.11 protocols, near field communication “NFC”, Bluetooth, ANT, or any other wireless protocol). In some implementations, the network interface controller230implements one or more network protocols such as Ethernet. Generally, a computing device172exchanges data with other computing devices via physical or wireless links through a network interface. The network interface may link directly to another device or to another device via an intermediary device, e.g., a network device such as a hub, a bridge, a switch, or a router, connecting the computing device172to a data network such as the Internet. The computing system172may include, or provide interfaces for, one or more input or output (“I/O”) devices. Input devices include, without limitation, keyboards, microphones, touch screens, foot pedals, sensors, MIDI devices, and pointing devices such as a mouse or trackball. Output devices include, without limitation, video displays, speakers, refreshable Braille terminal, lights, MIDI devices, and 2-D or 3-D printers. Other components may include an I/O interface, external serial device ports, and any additional co-processors. For example, a computing system172may include an interface (e.g., a universal serial bus (USB) interface) for connecting input devices, output devices, or additional memory devices (e.g., portable flash drive or external media drive). In some implementations, a computing device172includes an additional device such as a co-processor, e.g., a math co-processor can assist the processor210with high precision or complex calculations. FIG.3Ais a block diagram illustrating an example of a lidar system according to some implementations. In some implementations, a lidar system300may be the lidar sensor136(seeFIG.1A). The lidar system300may include a laser302, a modulator304, circulator optics306, a scanner308, a polarization beam splitter (PBS)312, a first detector314, a second detector316, and a processing system318. The PBS may be a polarization beam splitters/combiners (PBSC). In some implementations, the lidar system300may be a coherent FMCW lidar. In some implementations, the laser302may emit a laser output (LO) signal as carrier wave303. A splitter (not illustrated) may split the unmodulated LO signal into the carrier wave303, an LO signal321, and an LO signal323, which are in the same polarization state (referred to as first polarization state). In some implementations, the modulator304may receive the carrier wave303and phase or frequency modulate the carrier wave303to produce a modulated optical signal305which is in the first polarization state. The modulator304may be a frequency shifting device (acousto-optic modulator). 
The modulated optical signal305may be generated using time delay of a local oscillator waveform modulation. The modulator304may use frequency modulation (FM) so that the (FM) lidar system can encode an optical signal and scatter the encoded optical signal into free-space using optics. The FM lidar system may use a continuous wave (referred to as, “FMCW lidar”) or a quasi-continuous wave (referred to as, “FMQW lidar”). The modulator304may use phase modulation (PM) so that the (PM) lidar system can encode an optical signal and scatter the encoded optical signal into free-space using optics. In some implementations, the modulator304may use polarization modulation. In some implementations, the modulated optical signal305may be shaped through the circulator optics306into a signal307which is input to the scanner308. The circulator optics306may be an optical circulator (e.g., a fiber coupled circulator) but is not limited thereto. For example, the circulator optics306may be an optical isolator. In some implementations, the circulator optics306may be a free-space pitch catch optic (e.g., a pitch optic and a catch optic). In some implementations, a transmit signal309may be transmitted through the scanner308to illuminate an object310(or an area of interest). The transmit signal309may be in the first polarization state. In some implementation, the scanner308may include scanning optics (not shown), for example, a polygon scanner with a plurality of mirrors or facets). The scanner308may receive a return signal311reflected by the object310. The return signal311may contain a signal portion in the first polarization state and/or a signal portion in a different polarization state (referred to as “second polarization state”). The first polarization state is orthogonal to the second polarization state. The scanner308may redirect the return signal311into a return signal313. In some implementations, the return signal313may be further redirected by the circulator optics306into a return signal315to be input to the PBS312. The PBS312may split and polarize the return signal315into a first polarized optical signal317with the first polarization state and a second polarized optical signal319with the second polarization state. The circulator optics306and the PBS312may be integrated into a polarization beam splitter/combiner (PBSC). In some implementations, the first detector314may be a single paired or unpaired detector, or a 1 dimensional (1D) or 2 dimensional (2D) array of paired or unpaired detectors. The first detector314may receive the LO signal321as a reference signal. The first detector314may be an optical detector configured to detect an optical signal. The first detector314may detect a first polarized signal (e.g., the first polarized optical signal) and output or generate a first electrical signal325. In some implementations, the second detector316may be a single paired or unpaired detector, or a 1 dimensional (1D) or 2 dimensional (2D) array of paired or unpaired detectors. The second detector316may receive the LO signal323as a reference signal. The second detector316may be an optical detector configured to detect an optical signal. The second detector316may detect a second polarized signal (e.g., the second polarized optical signal319) and output or generate a second electrical signal327. The two electrical signals may indicate respective separate images of the object in different polarization states. 
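Because the two electrical signals (e.g., electrical signals325,327) arrive in separate channels, they can be handled by identical but independent processing paths before any comparison is made. The sketch below shows only that structure; the estimate_point_snr argument is a hypothetical placeholder for whatever range processing and SNR estimation the processing system actually performs.

```python
# Structural sketch: the two polarization channels are processed independently
# and only compared afterwards. `estimate_point_snr` is a placeholder for the
# actual range processing / SNR estimation performed by the processing system.
from typing import Callable, Tuple
import numpy as np

def process_two_channels(channel_p_samples: np.ndarray,
                         channel_d_samples: np.ndarray,
                         estimate_point_snr: Callable[[np.ndarray], np.ndarray]
                         ) -> Tuple[np.ndarray, np.ndarray]:
    """Run the same per-channel processing on each electrical signal,
    yielding two per-point SNR images of the same object."""
    snr_image_p = estimate_point_snr(channel_p_samples)  # first polarization state
    snr_image_d = estimate_point_snr(channel_d_samples)  # second polarization state
    return snr_image_p, snr_image_d
```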
In some implementations, the processing system318may have a similar configuration to the computing system172(seeFIG.2). The processing system318may be one or more computing systems each having a similar configuration to the computing system172. The processing system318may receive the first electrical signal325and the second electrical signal327and calculate depolarization ratios (as defined below) based on the electrical signals. The first electrical signal325represents an image of the object in a first channel representing the first polarization state, and the second electrical signal327represents an image of the object in a second channel representing the second polarization state. The processing system318may process the first electrical signal325in the first channel and the second electrical signal327in the second channel. In some implementations, the processing system318(or a processor thereof) may be configured to compare the signals with the two polarization states (e.g., the electrical signals325,327) to estimate how much the object (or target) has depolarized the return signal (by calculating a depolarization ratio as defined below). In response to detecting two polarized signals, one or more detectors (e.g., the first and second detectors314,316) may generate two corresponding electrical signals (e.g., electrical signals325,327inFIG.3A) in separate channels so that the electrical signals can be processed independently by the processing system318. The lidar system (e.g., the lidar system300) may provide two receive/digitizer channels (e.g., the first and second channels) for performing independent signal processing of two beams with different polarization states (e.g., first polarized optical signal317and second polarized optical signal319inFIG.3A). In this manner, the system can process, in two independent channels, independent streams from a point cloud. In some implementations, the processing system318(or a processor thereof) may be configured to calculate a depolarization ratio by calculating a ratio of reflectivity between the first polarized optical signal317and the second polarized optical signal319. The lidar system may calculate a depolarization ratio by calculating a ratio of reflectivity between separate images of the object in different polarization states (e.g., images represented by the first and second electrical signals). The lidar system may calculate a depolarization ratio based on the equation below:

Depolarization Ratio = SNR(channel_p) / SNR(channel_d)   (Equation 1)

where SNR(channel_p) is a signal-to-noise ratio (SNR) of an image of the object in a polarization channel (e.g., the first channel representing the first polarization state), and SNR(channel_d) is an SNR of an image of the object in a depolarization channel (e.g., the second channel representing the second polarization state). In some implementations, SNR(channel_p) is an average SNR of the image of the object in the polarization channel, and SNR(channel_d) is an average SNR of the image of the object in the depolarization channel. In some implementations, the depolarization ratio may be calculated using a ratio of average reflectivity between two images in different polarization states, where an average reflectivity is defined over some region of a space (e.g., over some voxels). An average reflectivity of a plurality of samples may be calculated over one or more voxels.
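The following is a minimal sketch of the per-voxel averaging and ratio calculation described above, under the assumption that each point already carries an SNR value for both channels. A Python dictionary keyed on quantized coordinates stands in for the hash-based voxelizer, and the voxel size is only an example; these choices are illustrative assumptions, not the disclosed implementation.

```python
# Sketch only: spatial averaging per voxel followed by the ratio of Equation 1.
# A dict keyed on quantized coordinates plays the role of the hash-based
# voxelizer; voxel size and data layout are illustrative assumptions.
from collections import defaultdict
import numpy as np

def voxel_depolarization_ratios(xyz: np.ndarray,      # (N, 3) point positions
                                snr_p: np.ndarray,    # (N,) polarization channel SNR
                                snr_d: np.ndarray,    # (N,) depolarization channel SNR
                                voxel_size: float = 0.05) -> dict:
    """Average each channel's SNR within every voxel, then form
    Depolarization Ratio = mean SNR(channel_p) / mean SNR(channel_d)."""
    sums = defaultdict(lambda: np.zeros(3))  # per voxel: [sum_p, sum_d, count]
    for point, p, d in zip(xyz, snr_p, snr_d):
        key = tuple(np.floor(point / voxel_size).astype(int))  # hash-style lookup
        sums[key] += (p, d, 1.0)
    return {key: (s[0] / s[2]) / (s[1] / s[2] + 1e-9)  # ratio of per-voxel averages
            for key, s in sums.items()}
```

Averaging within a voxel before taking the ratio is what reduces the speckle-induced variance in the reflectivity measurement noted earlier.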
An average reflectivity of a plurality of samples over one or more voxels may be affected by selection of parameters, for example, spatial resolution or precision (e.g., dimension of a voxel). In general, an average over a smaller voxel may have fewer contributions to an average over the whole image of the object, while it may have more contribution to contrast of the image so as to make the object more distinguishable. Selection of proper parameters (e.g., dimension of a voxel, number of samples, etc.) to calculate an average reflectivity can provide another dimension from pure reflectivity measurement, thereby increasing the value of data. Characteristics of the calculated average reflectivity may be different from those of measured values of reflectivity. For example, each measurement value is not correlated while an average may be correlated. In some implementations, a voxel of 5 cm×10 cm dimension may be used. The number of samples for averaging may be less than 100. For example, 5 or 12 samples may be used. In some implementations, the processing system318may perform a post processing on the electrical signals325and327in each channel to produce a respective image of the object in the respective polarization (e.g., a first image in the first polarization state and a second image in the second polarization state). The processing system318may calculate an average reflectivity of each image of the object over a plurality of samples. The processing system318may perform a spatial averaging per voxel. For example, the processing system318may include a hash-based voxelizer to efficiently generate a plurality of voxels representing the object in a polarization state. With a hash-based voxelizer, the processing system318can perform a quick search for a voxel. The processing system318may calculate an average reflectivity within each voxel of the plurality of voxels over a plurality of samples. The processing system318may calculate a ratio of the average reflectivity within each voxel between two channels (e.g., the electrical signal325in the first channel and the electrical signal327in the second channel). In some implementations, a light detection and ranging (lidar) system may include a transmitter (e.g., scanner308inFIG.3A) configured to transmit a transmit signal (e.g., transmit signal311inFIG.3A) from a laser source (e.g., laser302inFIG.3A), a receiver (e.g., scanner308inFIG.3A) configured to receive a return signal (e.g., return signal313inFIG.3A) reflected by an object (e.g., object310inFIG.3A), one or more optics (e.g., circulator optics306, PBS312inFIG.3A), and a processor (e.g., a processor of the processing system318inFIG.3A). The one or more optics may be configured to generate a first polarized signal of the return signal (e.g., first polarized optical signal317inFIG.3A) with a first polarization, and generate a second polarized signal of the return signal (e.g., second polarized optical signal319inFIG.3A) with a second polarization that is orthogonal to the first polarization. The transmitter and the receiver may be a single transceiver (e.g., a single scanner308inFIG.3A). In some implementations, the one or more optics may include a polarization beam splitter (e.g., PBS312inFIG.3A), a first detector (e.g., detector314inFIG.3A), and a second detector (e.g., detector316inFIG.3A), thereby providing a polarization-sensitive lidar (e.g., lidar system300inFIG.3A). 
The PBS may be configured to split out the return signal into the first polarized signal and the second polarized signal so that they can be detected independently. The PBS may be configured to polarize the return signal with the first polarization to generate the first polarized signal (e.g., first polarized optical signal317inFIG.3A), and polarize the return signal with the second polarization to generate a second polarized signal (e.g., second polarized optical signal319inFIG.3A). The first detector may be configured to detect the first polarized signal. The second detector may be configured to detect the second polarized signal. In some implementations, the signals split into the two polarization states can then be compared (by calculating a ratio of reflectivity between them) to estimate how much the object (or target) has depolarized the return signal. In response to detecting two polarized signals, one or more detectors may generate two corresponding electrical signals (e.g., electrical signals325,327inFIG.3A) in separate channels so that the electrical signals can be processed independently by a processing system (e.g., processing system318inFIG.3A). A system may have two beams of polarization sensitive lidar (e.g., first polarized optical signal317and second polarized optical signal319inFIG.3A), each of which has two receive/digitizer channels (e.g., the first and second channels) for independent signal processing (e.g., signal processing by the processing system318inFIG.3A). The system may process, in two independent channels, independent streams from a point cloud. FIG.3Bis a block diagram illustrating another example of a lidar system according to some implementations. In some implementations, a lidar system350may be the lidar sensor136(seeFIG.1A). Referring toFIG.3B, each of the laser302, the modulator304, the circulator optics306, the scanner308, and the polarization beam splitter (PBS)312of the lidar system350may have the same configurations as or similar configurations to those described with reference toFIG.3A. The lidar system350may further include a detector354, a shifter356, and a processing system368. In some implementations, the detector354may be a single paired or unpaired detector. The detector354may receive an LO signal355from the laser302as a reference signal. The detector354may be an optical detector configured to detect an optical signal. The detector354may detect the first polarized optical signal317and output or generate a first electrical signal359. In some implementations, the shifter356may be a frequency shifter or a frequency shifting device. For example, the shifter356may be an acousto-optic frequency shifter. The shifter356may receive the second polarized optical signal319and generate a frequency-shifted optical signal357which is directed to the detector354. The detector354may detect the frequency-shifted optical signal357and output or generate a second electrical signal361. In some implementations, the processing system368may have a similar configuration to the processing system318(seeFIG.3A). The processing system368may receive the first electrical signal359and the second electrical signal361and calculate depolarization ratios (as defined above) based on the electrical signals. The first electrical signal359represents an image of the object in a first channel representing the first polarization state, and the second electrical signal361represents an image of the object in a second channel representing the second polarization state.
The processing system368may process the first electrical signal359in the first channel and the second electrical signal361in the second channel. In some implementations, a lidar system (e.g., the lidar system350inFIG.3B) may include a single detector (e.g., the single detector354inFIG.3B) configured to detect two polarized signals (e.g., the first and second polarized optical signals317,319) from the return signal (e.g., signals311,313,315) using a splitter (e.g., PBS312) and a phase shifter (e.g., shifter356inFIG.3B) by multiplexing the return signal into the single detector. The phase shifter (e.g., shifter356inFIG.3B) may be configured to shift a phase of the second polarized signal (e.g., the second polarized optical signal319). The single detector may be configured to detect the first polarized signal (e.g., the first polarized optical signal317), and detect the phase-shifted second polarized signal (e.g., the second polarized optical signal319). FIG.3AandFIG.3Bshow examples of coherent lidar systems which can locate objects by mixing light reflected back from the objects with light from a local oscillator (LO). The present disclosure is not limited thereto, and in some implementations, direct detect (or pulsed) lidar systems can be used to calculate depolarization ratios. In some implementations, the hardware configuration of direct detect lidar systems for a polarimetric/depolarization ratio measurement may be different from that of coherent lidar systems. For example, a direct detect lidar system may need to specifically polarize an outgoing pulse and add additional optics to make a measurement specific to different polarization states of the return signal. With the additional optics, the direct detect lidar system can perform polarization of the return signal. In coherent lidar systems, on the other hand, the measured signal may inherently be the part of the return signal in the same polarization state as the local oscillator (LO).
FIG.4Cshows a color image421representing an original scene, an image422representing a reflectivity of the original scene, and an image423representing a depolarization ratio of the original scene.FIG.4Cshows that an asphalt road and lane markings have similar reflectivity so are not clearly distinguished in the reflectivity image422, while the asphalt road424and the lane markings425have different depolarization ratios so are clearly distinguished in the depolarization ratio image423. It is also shown inFIG.4Cthat according to the color scheme ofFIG.4A, the asphalt road424is more polarization maintaining while the lane markings425are more depolarizing. FIG.4Dshows a color image431representing an original scene and an image432representing a depolarization ratio of the original scene.FIG.4Dshows that an asphalt road433is more polarization maintaining while grass434is more depolarizing, which can be useful for detecting pavement.FIG.4Dalso shows that a plant435is more polarization maintaining while a plant436is more depolarizing. It was discovered by the inventors that such a low depolarization ratio of the plant435is falsely calculated due to a split-pixel processing artifact problem. In general, sparse objects or targets (e.g., bushes, tree branches, etc.) may likely produce a false measurement due to split-pixel processing artifacts. To address this split-pixel processing artifact problem, in some implementations, such sparse objects may be detected and discarded using a principal component analysis (PCA). A system (e.g., the processing system318,358inFIG.3AandFIG.3B) may calculate or obtain a distribution of points in a voxel. The system may perform PCA to determine, based on the point distributions, whether the points in a plane are either (1) flat, spread out, or solid, or (2) sparse or linearly arranged. In response to determining that the points in a plane are (2) sparse or linearly arranged, the system may discard the points from measurements or calculation of depolarization ratios. In performing PCA, the system may identify eigenvalues from a covariance matrix of the distribution and determine the sparsity of the object (or points) based on some number of largest eigenvalues (a minimal sketch of this eigenvalue-based sparsity test is provided further below). FIG.4Eshows images441and442each representing a depolarization ratio of a street scene. In the image441, it is shown that a metal back of a stop sign444and its post are more polarization maintaining while a street sign (or signage)443above them is more depolarizing. In the image442, it is shown that an asphalt road445(e.g., a black asphalt road) is more polarization maintaining while lane markings446(e.g., white paint on the road) are more depolarizing. FIG.4Fshows images451and452each representing a depolarization ratio of a street scene. It is shown that a sign post454(in the image451) and a sign post457(in the image452) are more polarization maintaining while a front of a sign456(in the image452) is more depolarizing. It is also shown that an asphalt road455(in the image452) is more polarization maintaining while grass453(in the image451) and concrete sidewalk or curb458(in the image452) are more depolarizing. FIG.4Gshows a color image461representing an original scene and an image462representing a depolarization ratio of the original scene. It is shown in the image462that a metal post467is more polarization maintaining while a front of a sign465is more depolarizing. It is also shown that an asphalt road463is more polarization maintaining while concrete sidewalk or curb464is more depolarizing.
It is also shown in the image462that a license plate466of a car is more depolarizing while other surfaces of the car (e.g., headlights) are more polarization maintaining. FIG.4Hshows a color image470representing an original scene and images471,472each representing a depolarization ratio of the original scene. It is shown that a sign post474and a back of its sign (in the image471), a sign post478(in the image472) and chain link fence477(in the image472) are more polarization maintaining while a front of a sign479(in the image472) is more depolarizing. It is also shown that trees473,475and grass476(in the image472) are more depolarizing. FIG.4Ishows a color image481representing an original scene and an image482representing a depolarization ratio of the original scene. It is shown in the image482that different building materials (e.g., building surfaces483,484,485) have different depolarization ratios. FIG.4Jshows images491and492each representing a depolarization ratio of an image of people. It is shown that skin (e.g., faces494,497, arms493,496) is more polarization maintaining while hair499and clothes495,498are more depolarizing. Similar depolarization ratios are obtained from an image of animals. Based on the observations fromFIG.4AtoFIG.4J, a type of object may be determined or detected based on depolarization ratios. In some implementations, object detection based on a depolarization ratio can disambiguate several key surfaces of an object. For example, (1) asphalt is more polarization maintaining while grass, rough concrete, or gravel are more depolarizing; (2) metal poles are more polarization maintaining while trees, telephone poles, or utility poles are more depolarizing; (3) metal surfaces (or a back surface of signage) are more polarization maintaining while retro-signs (or a front surface of signage) are more depolarizing; (4) road surfaces are more polarization maintaining while lane markings are more depolarizing; (5) vehicle license plates are more depolarizing while other surfaces of a vehicle are more polarization maintaining; and (6) skin (of people or animals) is more polarization maintaining while hair and clothes are more depolarizing. These disambiguation techniques can help recognize certain road markings, signs, pedestrians, etc. In some implementations, the depolarization ratio can be used to detect or recognize sparse features of an object (e.g., features having a relatively small region), so that such sparse features can be easily registered for disambiguation of different objects. For example, sparse features of a person (e.g., features of skin detected based on a low depolarization ratio) can be utilized to detect or recognize a pedestrian. In some implementations, disambiguation of different objects or materials based on the depolarization ratio can help a perception system (e.g., the perception subsystem154of the vehicle control system120inFIG.1A) or a planning system (e.g., the planning subsystem156of the vehicle control system120inFIG.1A) to more accurately detect, track, determine, and/or classify objects within the environment surrounding the vehicle (e.g., using artificial intelligence techniques). The perception subsystem154may calculate depolarization ratios of an image of an object obtained from a lidar system (e.g., the lidar systems shown inFIGS.3A and3B), or receive depolarization ratios calculated by the lidar system.
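A minimal sketch of how the surface pairings (1)-(6) above could be used as a tie-breaker once an object has been narrowed to an ambiguous category. The category names and the single split threshold are assumptions for illustration; a deployed system would tune thresholds per category.

```python
# Hypothetical pairs of surfaces that the depolarization ratio can separate,
# following observations (1)-(6) above: the first entry of each pair is the
# more polarization-maintaining surface, the second the more depolarizing one.
DISAMBIGUATION_PAIRS = {
    "road_surface": ("asphalt", "grass_concrete_or_gravel"),
    "pole": ("metal_pole", "tree_or_utility_pole"),
    "signage": ("sign_back_metal", "sign_front_retroreflective"),
    "lane": ("road_surface", "lane_marking"),
    "vehicle_surface": ("body_or_headlight", "license_plate"),
    "person_surface": ("skin", "hair_or_clothes"),
}

def disambiguate(category, depolarization_ratio, threshold=0.5):
    """Pick the more likely surface within an ambiguous category based on the
    measured depolarization ratio; `threshold` is an assumed split point."""
    low_side, high_side = DISAMBIGUATION_PAIRS[category]
    return low_side if depolarization_ratio < threshold else high_side

# Example: a pole-like cluster with a low ratio is likely a metal pole.
# disambiguate("pole", 0.12)  ->  "metal_pole"
```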
The perception subsystem154may classify the object based on (1) the obtained depolarization ratios and (2) the relationship between key feature surfaces (e.g., asphalt, grass, rough concrete, gravel, metal poles, trees, telephone poles, utility poles, sign surfaces, lane markings, vehicle surfaces, license plate, skin, hair, etc.) and their depolarization ratios as described above. Machine learning models or techniques can be utilized in classifying objects. Such machine learning models or techniques may include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, regression algorithms, instance-based algorithms, regularization algorithms, decision tree algorithms, Bayesian algorithms, clustering algorithms, artificial neural networks, deep learning algorithms, dimension reduction algorithms (e.g., PCA), ensemble algorithms, support vector machines (SVM), and so on. Referring toFIG.1A, the perception subsystem154may perform, based on results of the object classification, functions such as detecting, tracking, determining, and/or identifying objects within the environment surrounding vehicle110A. The planning subsystem156can perform, based on results of the object classification, functions such as planning a trajectory for vehicle110A over some timeframe given a desired destination as well as the static and moving objects within the environment. The control subsystem158can perform, based on results of the object classification, functions such as generating suitable control signals for controlling the various controls in the vehicle control system120in order to implement the planned trajectory of the vehicle110A. In some implementations, an autonomous vehicle control system (e.g., the vehicle control system120inFIG.1A) may include one or more processors (e.g., the processor122inFIG.1A, a processor of the processing system318inFIG.3A, a processor of the processing system358inFIG.3B). The one or more processors may be configured to cause a transmitter (e.g., the scanner308inFIG.3A) to transmit a transmit signal from a laser source (e.g., the laser302inFIG.3A). The one or more processors may be configured to cause a receiver (e.g., the scanner308inFIG.3A) to receive a return signal (e.g., the return311inFIG.3A) reflected by an object (e.g., the object310inFIG.3A). The transmitter and the receiver may be a single transceiver (e.g., the scanner308inFIG.3A). The one or more processors may be configured to cause one or more optics (e.g., the circulator optics306and PBS312inFIG.3A) to generate a first polarized signal of the return signal with a first polarization (e.g., the first polarized optical signal317inFIG.3A), and generate a second polarized signal of the return signal with a second polarization that is orthogonal to the first polarization (e.g., the second polarized optical signal319inFIG.3A). In some implementations, the one or more processors may be configured to cause a polarization beam splitter (PBS) of the one or more optics (e.g., the PBS312inFIG.3A) to polarize the return signal with the first polarization to generate the first polarized signal. The one or more processors may be configured to cause the PBS to polarize the return signal with the second polarization to generate the second polarized signal. The one or more processors may be configured to cause a first detector of the one or more optics (e.g., the first detector314inFIG.3A) to detect the first polarized signal.
The one or more processors may be configured to cause a second detector of the one or more optics (e.g., the second detector316inFIG.3A) to detect the second polarized signal. In some implementations, the first and second detectors may be a single detector (e.g., the single detector354inFIG.3B). The one or more processors may be configured to cause a phase shifter (e.g., the shifter356inFIG.3B) to shift a phase of the second polarized signal. The one or more processors may be configured to cause the single detector to detect the first polarized signal, and to detect the phase-shifted second polarized signal (e.g., the first phase-shifted second polarized signal357inFIG.3B). The one or more processors may be configured to operate a vehicle based on a ratio of reflectivity (e.g., a depolarization ratio) between the first polarized signal and the second polarized signal. In some implementations, the first polarized signal may indicate a first image of the object with the first polarization, and the second polarized signal may indicate a second image of the object with the second polarization. The one or more processors may be configured to calculate the ratio of reflectivity by calculating a ratio between an average signal-to-noise ratio (SNR) value of the first image and an average SNR value of the second image (e.g., using Equation 1). The one or more processors may be configured to determine a type of the object based on the calculated ratio of reflectivity. For example, the perception subsystem154(seeFIG.1A) may classify the object based on (1) the obtained depolarization ratios and (2) relationship between key feature surfaces (e.g., asphalt, grass, rough concrete, gravel, metal poles, trees, telephone poles, utility poles, sign surfaces, lane markings, vehicle surfaces, license plate, skin, hair, etc.) and their depolarization ratios. The one or more processors may be configured to control a trajectory of the vehicle based on the type of the object. For example, the planning subsystem156(seeFIG.1A) may perform, based on results of the object classification, functions such as planning a trajectory for vehicle110A (seeFIG.1A). The control subsystem158(seeFIG.1A) may perform, based on results of the object classification, functions such as generating suitable control signals for controlling the various controls in the vehicle control system120(seeFIG.1A) in order to implement the planned trajectory of the vehicle110A. In some implementations, the one or more processors may be configured to determine the type of the object as one of asphalt road, lane markings, rough concrete road, grass, or gravel. The one or more processors may be configured to determine, based on the calculated ratio of reflectivity, that the object is an asphalt road. For example, the perception subsystem154may classify an object as either asphalt road, lane markings, rough concrete road, grass, or gravel (e.g., by applying machine learning techniques to images obtained from a lidar system without using depolarization ratios), and determine whether a depolarization ratio of the object is smaller than a predetermined threshold. In response to determining that the depolarization ratio of the object is smaller than the predetermined threshold, the perception subsystem154may determine that the object is an asphalt road. In some implementations, the one or more processors may be configured to determine the type of the object as one of metal poles, trees, or utility poles. 
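A minimal sketch of the ratio-of-reflectivity computation and the asphalt-road threshold test described above. Because Equation 1 is not reproduced in this passage, the cross/(co + cross) normalization and the threshold value below are assumptions chosen so that the ratio stays within [0, 1].

```python
import numpy as np

def depolarization_ratio(snr_copolarized, snr_crosspolarized):
    """Form a per-object depolarization ratio from the two polarization images.

    Following the description above, the ratio is built from the average SNR of
    the image in each polarization state; the exact form of Equation 1 is not
    shown here, so this normalization is an illustrative assumption.
    """
    co = float(np.mean(snr_copolarized))
    cross = float(np.mean(snr_crosspolarized))
    if co + cross <= 0.0:
        return 0.0
    return cross / (co + cross)

def looks_like_asphalt(ratio, threshold=0.2):
    """Assumed threshold test: asphalt is strongly polarization maintaining,
    so a ratio below the (assumed) threshold favors the asphalt-road class."""
    return ratio < threshold
```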
The one or more processors may be configured to determine, based on the calculated ratio of reflectivity, that the object is a metal pole. For example, the perception subsystem154may classify an object as either metal poles, trees, or utility poles (e.g., by applying machine learning techniques to images obtained from a lidar system without using depolarization ratios), and determine whether a depolarization ratio of the object is smaller than a predetermined threshold. In response to determining that the depolarization ratio of the object is smaller than the predetermined threshold, the perception subsystem154may determine that the object is a metal pole (or metal poles). In some implementations, the one or more processors may be configured to determine the type of the object as one or more persons. The one or more processors may be configured to determine, based on the calculated ratio of reflectivity, respective regions of skin and clothes of the one or more persons. For example, the perception subsystem154may classify an object as one or more persons (e.g., by applying machine learning techniques to images obtained from a lidar system without using depolarization ratios), determine whether a depolarization ratio of a first portion of the object is smaller than a first predetermined threshold, and determine whether a depolarization ratio of a second portion of the object is greater than a second predetermined threshold. In response to determining that the depolarization ratio of the first portion of the object is smaller than the first predetermined threshold, the perception subsystem154may determine that the first portion of the object is skin. In response to determining that the depolarization ratio of the second portion of the object is greater than the second predetermined threshold, the perception subsystem154may determine that the second portion of the object is hair or clothes. FIG.5is a flowchart illustrating an example methodology for controlling a trajectory of a vehicle based on a depolarization ratio according to some implementations. In this example methodology, the process begins at step510by determining a type of an object based on one or more images of the object obtained from a lidar system (e.g., the sensor136inFIG.1A, the lidar systems300,350inFIG.3AandFIG.3B) by one or more processors (e.g., the processor122inFIG.1A, a processor of the processing system318inFIG.3A, a processor of the processing system358inFIG.3B). At step520, in some implementations, the one or more processors may determine whether the object is an asphalt road, lane markings, a rough concrete road, grass, or gravel. In response to determining that the object is an asphalt road, lane markings, a rough concrete road, grass, or gravel, at step550, the one or more processors may determine, based on a calculated ratio of reflectivity, that the object is an asphalt road. For example, the perception subsystem154may classify an object as either asphalt road, lane markings, rough concrete road, grass, or gravel (e.g., by applying machine learning techniques to images obtained from a lidar system without using depolarization ratios), and determine whether a depolarization ratio of the object is smaller than a predetermined threshold. In response to determining that the depolarization ratio of the object is smaller than the predetermined threshold, the perception subsystem154may determine that the object is an asphalt road. 
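Before continuing with the remaining steps ofFIG.5, the two-threshold person segmentation described above (and revisited at step540below) can be sketched as follows; the two threshold values are assumptions for illustration.

```python
import numpy as np

def segment_person_regions(depolarization_image, person_mask,
                           skin_threshold=0.2, hair_clothes_threshold=0.6):
    """Split a detected person into skin and hair/clothes regions.

    `depolarization_image` holds per-pixel ratios and `person_mask` marks the
    pixels classified as a person by the upstream detector; both inputs and
    the threshold values are assumptions for this sketch.
    """
    ratios = np.asarray(depolarization_image, dtype=float)
    mask = np.asarray(person_mask, dtype=bool)
    skin = mask & (ratios < skin_threshold)              # polarization maintaining
    hair_or_clothes = mask & (ratios > hair_clothes_threshold)  # depolarizing
    undecided = mask & ~skin & ~hair_or_clothes
    return skin, hair_or_clothes, undecided
```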
At step530, the one or more processors may determine whether the object is metal poles, trees, or utility poles. In response to determining that the object is metal poles, trees, or utility poles, at step550, the one or more processors may determine, based on a calculated ratio of reflectivity, that the object is a metal pole (or metal poles). For example, the perception subsystem154may classify an object as either metal poles, trees, or utility poles (e.g., by applying machine learning techniques to images obtained from a lidar system without using depolarization ratios), and determine whether a depolarization ratio of the object is smaller than a predetermined threshold. In response to determining that the depolarization ratio of the object is smaller than the predetermined threshold, the perception subsystem154may determine that the object is a metal pole (or metal poles). At step540, the one or more processors may determine whether the object is one or more persons. In some implementations, at step550, the one or more processors may determine, based on the calculated ratio of reflectivity, respective regions of skin and clothes of the one or more persons. For example, the perception subsystem154may classify an object as one or more persons (e.g., by applying machine learning techniques to images obtained from a lidar system without using depolarization ratios), determine whether a depolarization ratio of a first portion of the object is smaller than a first predetermined threshold, and determine whether a depolarization ratio of a second portion of the object is greater than a second predetermined threshold. In response to determining that the depolarization ratio of the first portion of the object is smaller than the first predetermined threshold, the perception subsystem154may determine that the first portion of the object is skin. In response to determining that the depolarization ratio of the second portion of the object is greater than the second predetermined threshold, the perception subsystem154may determine that the second portion of the object is hair or clothes. At step560, the one or more processors may control a trajectory of a vehicle based on the type of the object (e.g., type of asphalt road or metal poles as determined at step550) or based on the region of the object (e.g., regions of skin, hair, or clothes as determined at step550). For example, the planning subsystem156can perform, based on the determined type of the object (e.g., type of asphalt road or metal poles), functions such as planning a trajectory for vehicle110A given the static object within the environment. The planning subsystem156also can determine, based on the determined regions of the object (e.g., regions of skin, hair, or clothes of one or more people), a trajectory of the moving object (e.g., one or more people), and plan, based on the trajectory of the moving object, a trajectory for vehicle110A given the moving object within the environment. FIG.6is a flowchart illustrating an example methodology for operating a vehicle based on a depolarization ratio according to some implementations. In this example methodology, the process begins at step620by transmitting, from a laser source (e.g., the laser302inFIG.3A), a transmit signal (e.g., the transmit signal309inFIG.3A) and receiving a return signal (e.g., the return signal311inFIG.3A) reflected by an object (e.g., the object310inFIG.3A).
In some implementations, the transmitting the transmit signal and the receiving the return signal are performed by a single transceiver (e.g., the scanner308inFIG.3A). At step640, in some implementations, a first polarized signal of the return signal with a first polarization (e.g., the first polarized optical signal317inFIG.3A) may be generated by one or more optics (e.g., the circulator optics306and PBS312inFIG.3A). In generating the first polarized signal, the return signal may be polarized with the first polarization by a polarization beam splitter (PBS) of the one or more optics (e.g., the PBS312inFIG.3A) to generate the first polarized signal. The first polarized signal may be detected by a first detector of the one or more optics (e.g., the first detector314inFIG.3A). At step660, in some implementations, a second polarized signal of the return signal with a second polarization that is orthogonal to the first polarization (e.g., the second polarized optical signal319inFIG.3A), may be generated by the one or more optics. In generating the second polarized signal, the return signal may be polarized with the second polarization by the PBS to generate the second polarized signal. The second polarized signal may be detected by a second detector of the one or more optics (e.g., the second detector316inFIG.3A). In some implementations, the first and second detectors may be a single detector (e.g., the single detector354inFIG.3B). In generating the first polarized signal, the first polarized signal may be detected by the single detector. In generating the second polarized signal, a phase of the second polarized signal may be shifted by a phase shifter (e.g., the shifter356inFIG.3B). The phase-shifted second polarized signal (e.g., the first phase-shifted second polarized signal357inFIG.3B) may be detected by the single detector (e.g., the single detector354inFIG.3B). At step680, a vehicle (e.g., the vehicle110A inFIG.1A) may be operated by one or more processors (e.g., the processor122inFIG.1A, a processor of the processing system318inFIG.3A, a processor of the processing system358inFIG.3B) based on a ratio of reflectivity (e.g., a depolarization ratio) between the first polarized signal and the second polarized signal. In some implementations, the first polarized signal may indicate a first image of the object with the first polarization, and the second polarized signal may indicate a second image of the object with the second polarization. In calculating the ratio of reflectivity, a ratio between an average signal-to-noise ratio (SNR) value of the first image and an average SNR value of the second image may be calculated (e.g., using Equation 1). In operating the vehicle based on the ratio of reflectivity, a type of the object may be determined based on the calculated ratio of reflectivity. For example, the perception subsystem154(seeFIG.1A) may classify the object based on (1) the obtained depolarization ratios and (2) relationship between key feature surfaces (e.g., asphalt, grass, rough concrete, gravel, metal poles, trees, telephone poles, utility poles, sign surfaces, lane markings, vehicle surfaces, license plate, skin, hair, etc.) and their depolarization ratios. In some implementations, a trajectory of the vehicle may be controlled based on the type of the object. For example, the planning subsystem156(seeFIG.1A) may perform, based on results of the object classification, functions such as planning a trajectory for vehicle110A (seeFIG.1A). 
In some implementations, in determining the type of the object, the type of the object may be determined as one of asphalt road, lane markings, rough concrete road, grass, or gravel. It may be determined, based on the calculated ratio of reflectivity, that the object is an asphalt road. For example, the perception subsystem154may classify an object as either asphalt road, lane markings, rough concrete road, grass, or gravel (e.g., by applying machine learning techniques to images obtained from a lidar system without using depolarization ratios), and determine whether a depolarization ratio of the object is smaller than a predetermined threshold. In response to determining that the depolarization ratio of the object is smaller than the predetermined threshold, the perception subsystem154may determine that the object is an asphalt road. In some implementations, in determining the type of the object, the type of the object may be determined as one of metal poles, trees, or utility poles. It may be determined, based on the calculated ratio of reflectivity, that the object is a metal pole. For example, the perception subsystem154may classify an object as either metal poles, trees, or utility poles (e.g., by applying machine learning techniques to images obtained from a lidar system without using depolarization ratios), and determine whether a depolarization ratio of the object is smaller than a predetermined threshold. In response to determining that the depolarization ratio of the object is smaller than the predetermined threshold, the perception subsystem154may determine that the object is a metal pole (or metal poles). In some implementations, in determining the type of the object, the type of the object may be determined as one or more persons. Respective regions of skin and clothes of the one or more persons may be determined based on the calculated ratio of reflectivity. For example, the perception subsystem154may classify an object as one or more persons (e.g., by applying machine learning techniques to images obtained from a lidar system without using depolarization ratios), determine whether a depolarization ratio of a first portion of the object is smaller than a first predetermined threshold, and determine whether a depolarization ratio of a second portion of the object is greater than a second predetermined threshold. In response to determining that the depolarization ratio of the first portion of the object is smaller than the first predetermined threshold, the perception subsystem154may determine that the first portion of the object is skin. In response to determining that the depolarization ratio of the second portion of the object is greater than the second predetermined threshold, the perception subsystem154may determine that the second portion of the object is hair or clothes. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.
All structural and functional equivalents to the elements of the various aspects described throughout the previous description that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.” It is understood that the specific order or hierarchy of blocks in the processes disclosed is an example of illustrative approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged while remaining within the scope of the previous description. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented. The previous description of the disclosed implementations is provided to enable any person skilled in the art to make or use the disclosed subject matter. Various modifications to these implementations will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of the previous description. Thus, the previous description is not intended to be limited to the implementations shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. The various examples illustrated and described are provided merely as examples to illustrate various features of the claims. However, features shown and described with respect to any given example are not necessarily limited to the associated example and may be used or combined with other examples that are shown and described. Further, the claims are not intended to be limited by any one example. The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the blocks of various examples must be performed in the order presented. As will be appreciated by one of skill in the art the order of blocks in the foregoing examples may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the blocks; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the” is not to be construed as limiting the element to the singular. The various illustrative logical blocks, modules, circuits, and algorithm blocks described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and blocks have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. 
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some blocks or methods may be performed by circuitry that is specific to a given function. In some exemplary examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable storage medium or non-transitory processor-readable storage medium. The blocks of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable storage media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable storage medium and/or computer-readable storage medium, which may be incorporated into a computer program product. The preceding description of the disclosed examples is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these examples will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to some examples without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the examples shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. | 96,966 |
11858534 | DETAILED DESCRIPTION Embodiments of the present disclosure will be described with reference to the accompanying drawings. 1. First Embodiment 1-1. Driverless Transportation Service FIG.1is a conceptual diagram for explaining an outline of a driverless transportation service provided by an automated driving vehicle1according to a first embodiment. The automated driving vehicle1is capable of travelling autonomously without a driving operation by a driver. Examples of the automated driving vehicle1include a driverless taxi and a driverless bus. Such the automated driving vehicle1provides the driverless transportation service to a user2. More specifically, the automated driving vehicle1picks up a user2at a position specified by the user2or a predetermined position. Then, the automated driving vehicle1autonomously travels to a destination specified by the user2or a predetermined destination. When arriving at the destination, the automated driving vehicle1drops off the user2. Picking up the user2by the automated driving vehicle1is hereinafter referred to as “pick-up.” On the other hand, dropping off the user2by the automated driving vehicle1is hereinafter referred to as “drop-off.” Boarding and alighting of the user2are sometimes collectively called “PUDO (Pick-Up/Drop-Off).” In the present embodiment, a predetermined pick-up and drop-off area5provided in a facility3will be considered in particular. Examples of the facility3include a hotel, a building, a station, an airport, and the like. The pick-up and drop-off area5is a predetermined area (carriage porch) in which the automated driving vehicle1stops to pick up or drop off the user2. When a destination of the user2is the facility3, the automated driving vehicle1on which the user2rides stops in the pick-up and drop-off area5and drops off the user2. On the other hand, when a departure place of the user2is the facility3, the automated driving vehicle1stops in the pick-up and drop-off area5, picks up the user2, and departs for a destination. The pick-up and drop-off area5is one-way. That is, a direction of travel of vehicles (all vehicles including the automated driving vehicle1) in the pick-up and drop-off area5is predetermined. In terms of the direction of vehicle travel, “upstream” and “downstream” can be defined. That is, the direction of vehicle travel is a downstream direction XD (a first direction), and a direction opposite to the direction of vehicle travel is an upstream direction XU (a second direction). An approach road4provided upstream of the pick-up and drop-off area5is a road for guiding vehicles from a public road to the pick-up and drop-off area5. On the other hand, an exit road6provided downstream of the pick-up and drop-off area5is a road for guiding vehicles from the pick-up and drop-off area5to a public road. The vehicles move in the downstream direction XD in an order of the approach road4, the pick-up and drop-off area5, and the exit road6. An automated driving system10controls the automated driving vehicle1. Typically, the automated driving system10is installed on the automated driving vehicle1. Alternatively, at least a part of the automated driving system10may be disposed outside the automated driving vehicle1and remotely control the automated driving vehicle1. The automated driving system10controls the automated driving vehicle1so as to enter the pick-up and drop-off area5from the approach road4and stop in the pick-up and drop-off area5. 
When the automated driving vehicle1stops, the automated driving system10opens a door of the automated driving vehicle1. The user2gets off the automated driving vehicle1or gets on the automated driving vehicle1. Thereafter, the automated driving system10closes the door of the automated driving vehicle1. Then, the automated driving system10makes the automated driving vehicle1start moving and travel from the pick-up and drop-off area5to the exit road6. 1-2. Determination of Stop Space in Pick-Up and Drop-Off Area Next, a method of determining a stop space (a stop position) when making the automated driving vehicle1stop in the pick-up and drop-off area5will be described. A stop space is a vacant (free) space available for a single automated driving vehicle1to stop. It should be noted here that the stop space is a virtual one and does not need to be actually defined by a marking line. Moreover, the stop space is so set as to include a margin (inter-vehicle distance) necessary for making a stop. Therefore, the stop space is larger than a size of the automated driving vehicle1to some extent. It is desirable to appropriately determine the stop space from a viewpoint of the user2. FIG.2is a conceptual diagram for explaining a method of determining the stop space in the pick-up and drop-off area5. In the pick-up and drop-off area5, a “standard stop space S0” is set. The standard stop space S0is a default stop space with high convenience or a stop space specified by the user2. For example, the default standard stop space S0is set to a position facing an entrance of the facility3. Position information of the default standard stop space S0is registered in advance in map information or provided from the facility3to the automated driving system10. When the standard stop space S0is specified by the user2, position information of the specified standard stop space S0is provided from a user terminal of the user2to the automated driving system10. The automated driving system10has a function of recognizing a situation around the automated driving vehicle1by the use of a sensor installed on the automated driving vehicle1. When the standard stop space S0is available (vacant), making the automated driving vehicle1stop in the standard stop space S0is most preferable from a viewpoint of convenience for the user2or the request from the user2. Therefore, when the standard stop space S0is available for the automated driving vehicle1to stop, the automated driving system10sets the standard stop space S0as a target stop space ST. Then, the automated driving system10controls the automated driving vehicle1so as to travel toward the target stop space ST (i.e., the standard stop space ST) and stop in the target stop space ST. However, the standard stop space S0is not always available. For example, as illustrated inFIG.2, there is a case where another vehicle7is stopped in the standard stop space S0. In this case, it is not possible to make the automated driving vehicle1stop in the standard stop space S0. Therefore, the automated driving system10determines an alternative stop space different from the standard stop space S0. According to the present embodiment, the automated driving system10determines the alternative stop space not at random but according to a predetermined rule. In particular, the automated driving system10determines the alternative stop space in consideration of whether a purpose of the stopping this time is the drop-off or the pick-up. 
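A minimal sketch of the availability check for the standard stop space S0described above. The pick-up and drop-off area5is modeled as a one-dimensional strip along the curb, and the interval representation of detected objects (taken from the surrounding situation recognized by the on-board sensor) is an assumption for illustration.

```python
def space_is_available(space_start, space_end, occupied_intervals):
    """Check whether a candidate stop space is free of other vehicles.

    `occupied_intervals` are (start, end) extents of detected objects along
    the curb; this 1-D interval model is an illustrative assumption.
    """
    for occ_start, occ_end in occupied_intervals:
        if space_start < occ_end and occ_start < space_end:  # intervals overlap
            return False
    return True

# Example: S0 spans [20.0, 27.0] m and another vehicle occupies [22.0, 26.5] m,
# so S0 is not available and an alternative stop space must be determined.
# space_is_available(20.0, 27.0, [(22.0, 26.5)])  ->  False
```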
Hereinafter, each case of the drop-off and the pick-up will be described. 1-2-1. Drop-Off FIG.3is a conceptual diagram for explaining a method of determining the stop space in the case of the drop-off. The pick-up and drop-off area5includes an upstream area5U and a downstream area5D. The upstream area5U is the pick-up and drop-off area5existing in the upstream direction XU from the standard stop space S0. On the other hand, the downstream area5D is the pick-up and drop-off area5existing in the downstream direction XD from the standard stop space S0. When the standard stop space S0is not available at the time of the drop-off, the automated driving system10preferentially uses the “upstream area5U.” That is, the upstream area5U is a priority area. More specifically, the automated driving system10searches for an “upstream available space SU” that is an available (vacant) space in the upstream area5U and in which the automated driving vehicle1can be stopped. In order to secure the margin, the upstream available space SU larger than the size of the automated driving vehicle1to some extent is necessary. When the upstream available space SU is found, the automated driving system10sets the upstream available space SU as the target stop space ST. For example, the automated driving system10sets the upstream available space SU closest to the standard stop space S0as the target stop space ST. Being close to the standard stop space S0is preferable from a viewpoint of convenience for the user2or the request from the user2. Then, the automated driving system10controls the automated driving vehicle1so as to travel toward the target stop space ST (i.e., the upstream available space SU) and stop in the target stop space ST. After the automated driving vehicle1stops, the user2gets off the automated driving vehicle1. The automated driving vehicle1gets to the upstream area5U earlier than to the downstream area5D. Therefore, making the automated driving vehicle1stop not in the downstream area5D but in the upstream area5U enables the user2to more quickly get off the automated driving vehicle1. As a result, the user2becomes free more quickly and is able to use time efficiently. That is, convenience and time efficiency are improved from the viewpoint of the user2. After the user2gets off, the automated driving system10makes the automated driving vehicle1start moving. At this time, another vehicle7may still be stopped in the standard stop space S0existing ahead of the automated driving vehicle1. The other vehicle7may hinder the automated driving vehicle1from starting. However, since the user2has already got off, the user2does not feel stress even if the start of the automated driving vehicle1is somewhat delayed. 1-2-2. Pick-Up FIG.4is a conceptual diagram for explaining a method of determining the stop space in the case of the pick-up. When the standard stop space S0is not available at the time of the pick-up, the automated driving system10preferentially uses the “downstream area5D.” That is, the downstream area5D is the priority area. More specifically, the automated driving system10searches for a “downstream available space SD” that is an available (vacant) space in the downstream area5D and in which the automated driving vehicle1can be stopped. In order to secure the margin, the downstream available space SD larger than the size of the automated driving vehicle1to some extent is necessary. 
When the downstream available space SD is found, the automated driving system10sets the downstream available space SD as the target stop space ST. For example, the automated driving system10sets the downstream available space SD closest to the standard stop space S0as the target stop space ST. Being close to the standard stop space S0is preferable from the viewpoint of the convenience for the user2or the request from the user2. Then, the automated driving system10controls the automated driving vehicle1so as to travel toward the target stop space ST (i.e., the downstream available space SD) and stop in the target stop space ST. When the automated driving vehicle1stops, the user2gets on the automated driving vehicle1. The automated driving vehicle1may wait at the target stop space ST until the user2arrives. After the user2gets on the automated driving vehicle1, the automated driving system10makes the automated driving vehicle1start moving and travel toward a next destination. The automated driving vehicle1stopped in the downstream area5D is able to exit the pick-up and drop-off area5earlier than when the automated driving vehicle1is stopped in the upstream area5U. Therefore, making the automated driving vehicle1stop not in the upstream area5U but in the downstream area5D enables the automated driving vehicle1with the user2to more quickly depart for the destination. That is, the time efficiency is improved from the viewpoint of the user2. Moreover, when viewed from the automated driving vehicle1stopped in the downstream area5D, the standard stop space S0exists rearward. Therefore, another vehicle7stopped in the standard stop space S0does not hinder the automated driving vehicle1from starting. Therefore, the automated driving system10is able to easily make the automated driving vehicle1start moving. This is preferable from a viewpoint of vehicle travel control. In addition, the automated driving system10is able to make the automated driving vehicle1depart without delay. This contributes not only to improvement in the time efficiency but also to reduction in the user2's stress in the automated driving vehicle1. 1-2-3. Pick-Up Following Drop-Off After completion of the drop-off shown inFIG.3, the automated driving vehicle1may pick up another user2in the same pick-up and drop-off area5. In this case, after the completion of the drop-off, the automated driving system10resets the target stop space ST and performs the pick-up shown inFIG.4. Since the automated driving vehicle1is stopped in the upstream area5U at the time of the completion of the drop-off, the automated driving vehicle1is able to move to the downstream area5D without going out of the pick-up and drop-off area5. In other words, it is not necessary to go out of the pick-up and drop-off area5once, turn back the outside road, and then enter the pick-up and drop-off area5again. As described above, according to the present embodiment, it is possible to efficiently make a transition from the drop-off to the pick-up in the same pick-up and drop-off area5. 1-3. Configuration Example of Automated Driving System FIG.5is a block diagram showing a configuration example of the automated driving system10according to the present embodiment. The automated driving system10includes a sensor group20, a travel device30, a communication device40, and a control device (controller)100. The sensor group20is installed on the automated driving vehicle1. The sensor group20includes a position sensor21, a vehicle state sensor22, and a recognition sensor23. 
The position sensor21detects a position and an orientation of the automated driving vehicle1. As the position sensor21, a GPS (Global Positioning System) sensor is exemplified. The vehicle state sensor22detects a state of the automated driving vehicle1. Examples of the vehicle state sensor22include a vehicle speed sensor, a yaw rate sensor, a lateral acceleration sensor, a steering angle sensor, and the like. The recognition sensor23recognizes (detects) a situation around the automated driving vehicle1. Examples of the recognition sensor23include a camera, a radar, a LIDAR (Laser Imaging Detection and Ranging), and the like. The travel device30is installed on the automated driving vehicle1. The travel device30includes a steering device, a driving device, and a braking device. The steering device turns wheels of the automated driving vehicle1. For example, the steering device includes an electric power steering (EPS) device. The driving device is a power source that generates a driving force. Examples of the driving device include an engine, an electric motor, an in-wheel motor, and the like. The braking device generates a braking force. The communication device40communicates with the outside of the automated driving system10. For example, the communication device40communicates with a management server that manages the driverless transportation service. As another example, the communication device40communicates with a user terminal (for example, a smartphone, a tablet, or a personal computer) owned by the user2. The control device (controller)100controls the automated driving vehicle1. Typically, the control device100is a microcomputer installed on the automated driving vehicle1. The control device100is also called an electronic control unit (ECU). Alternatively, the control device100may be an information processing device outside the automated driving vehicle1. In this case, the control device100communicates with the automated driving vehicle1and remotely controls the automated driving vehicle1. The control device100includes a processor110and a memory device120. The processor110executes a variety of processing. The memory device120stores a variety of information. Examples of the memory device120include a volatile memory, a nonvolatile memory, and the like. The variety of processing by the processor110(the control device100) is achieved by the processor110executing a control program being a computer program. The control program is stored in the memory device120or recorded in a computer-readable recording medium. The processor110executes vehicle travel control that controls travel of the automated driving vehicle1. The vehicle travel control includes steering control, acceleration control, and deceleration control. The processor110executes the vehicle travel control by controlling the travel device30. More specifically, the processor110executes the steering control by controlling the steering device. The processor110executes the acceleration control by controlling the driving device. The processor110executes the deceleration control by controlling the braking device. Moreover, the processor110acquires driving environment information200indicating a driving environment for the automated driving vehicle1. The driving environment information200is acquired based on a result of detection by the sensor group20installed on the automated driving vehicle1. The acquired driving environment information200is stored in the memory device120.
FIG.6is a block diagram showing an example of the driving environment information200. The driving environment information200includes vehicle position information210, vehicle state information220, surrounding situation information230, and map information240. The vehicle position information210is information indicating the position and the orientation of the automated driving vehicle1in the absolute coordinate system. The processor110acquires the vehicle position information210from a result of detection by the position sensor21. In addition, the processor110may acquire more accurate vehicle position information210by performing a well-known localization. The vehicle state information220is information indicating the state of the automated driving vehicle1. Examples of the state of the automated driving vehicle1include a vehicle speed, a yaw rate, a lateral acceleration, a steering angle, and the like. The processor110acquires the vehicle state information220from a result of detection by the vehicle state sensor22. The surrounding situation information230is information indicating a situation around the automated driving vehicle1. The surrounding situation information230includes information acquired by the recognition sensor23. For example, the surrounding situation information230includes image information indicating a situation around the automated driving vehicle1imaged by the camera. As another example, the surrounding situation information230includes measurement information measured by the radar or the LIDAR. Further, the surrounding situation information230includes object information regarding an object around the automated driving vehicle1. Examples of the object around the automated driving vehicle1include another vehicle, a pedestrian, a sign, a white line, a roadside structure (e.g., a guardrail, a curb), and the like. The object information indicates a relative position of the object with respect to the automated driving vehicle1. For example, analyzing the image information obtained by the camera makes it possible to identify the object and calculate the relative position of the object. It is also possible to identify the object and acquire the relative position of the object based on the radar measurement information. The map information240indicates a lane configuration, a road shape, and the like. The map information240includes a general navigation map. The processor110acquires the map information240of a necessary area from a map database. The map database may be stored in a predetermined storage device installed on the automated driving vehicle1, or may be stored in a management server outside the automated driving vehicle1. In the latter case, the processor110communicates with the management server via the communication device40to acquire the necessary map information240. The pick-up and drop-off area information250indicates a position and a range of the pick-up and drop-off area5provided in the facility3. For example, the pick-up and drop-off area information250is registered in advance in the map information240. As another example, the pick-up and drop-off area information250may be provided from the facility3when the automated driving vehicle1comes close to the facility3. In this case, the processor110communicates with the facility3via the communication device40to acquire the pick-up and drop-off area information250related to the facility3.
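For illustration only, the items of the driving environment information200described above might be grouped as follows; the field names and types are assumptions made for this sketch, not structures defined by the system.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DrivingEnvironmentInformation:
    """Illustrative grouping of the driving environment information200."""
    vehicle_position: Tuple[float, float]  # absolute x, y (vehicle position information210)
    vehicle_orientation: float             # heading angle
    vehicle_state: dict                    # speed, yaw rate, lateral acceleration, steering angle
    detected_objects: List[dict] = field(default_factory=list)  # relative positions of nearby objects
    map_info: dict = field(default_factory=dict)                # lane configuration, road shape
    pud_area: List[Tuple[float, float]] = field(default_factory=list)  # area5 polygon vertices
```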
It should be noted that the position and the range of the pick-up and drop-off area5are clearly defined on the map although the actual pick-up and drop-off area5may not be clear. Furthermore, the processor110acquires standard stop position information300(seeFIG.5). The standard stop position information300indicates the position of the standard stop space S0in the pick-up and drop-off area5. For example, the standard stop position information300is included in advance in the pick-up and drop-off area information250. In this case, the processor110acquires the standard stop position information300from the pick-up and drop-off area information250. As another example, the standard stop space S0may be specified by the user2. In this case, the user2specifies the standard stop space S0in the map by the use of the user terminal. The processor110communicates with the user terminal of the user2via the communication device40and acquires the standard stop position information300indicating the position of the specified standard stop space S0. The standard stop position information300is stored in the memory device120. It should be noted that using the vehicle position information210makes it possible to convert absolute positions of the pick-up and drop-off area5and the standard stop space S0into relative positions with respect to the automated driving vehicle1, and vice versa. In the following description, the position of the pick-up and drop-off area5or the standard stop space S0means an appropriate one of the absolute position and the relative position. Hereinafter, processing by the automated driving system10(the processor110) in the pick-up and drop-off area5according to the present embodiment will be described. 1-4. Processing in Pick-Up and Drop-Off Area FIG.7is a flow chart showing processing by the automated driving system10(the processor110) in the pick-up and drop-off area5according to the present embodiment. It should be noted that the above-described driving environment information200is updated at a predetermined cycle in another process flow. In addition, the standard stop position information300is already acquired. Moreover, whether a purpose of the stopping this time is the drop-off or the pick-up is registered in a travel plan of automated driving. In Step S100, the processor110determines whether or not the automated driving vehicle1has entered the pick-up and drop-off area5. The position of the automated driving vehicle1is obtained from the vehicle position information210. The position and the range of the pick-up and drop-off area5are obtained from the pick-up and drop-off area information250. Therefore, the processor110can determine whether or not the automated driving vehicle1has entered the pick-up and drop-off area5based on the vehicle position information210and the pick-up and drop-off area information250. When the automated driving vehicle1enters the pick-up and drop-off area5(Step S100; Yes), the processing proceeds to Step S200. As a modification example of Step S100, the processor110may determine whether or not the automated driving vehicle1has reached a position a certain distance before the pick-up and drop-off area5. When the automated driving vehicle1has reached the position a certain distance before the pick-up and drop-off area5(Step S100; Yes), the processing proceeds to Step S200. In Step S200, the processor110determines the target stop space ST in the pick-up and drop-off area5. 
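A minimal sketch of the Step S100check, combining the vehicle position information210with the pick-up and drop-off area information250. The polygon representation of the area and the standard ray-casting point-in-polygon test are illustrative assumptions.

```python
def has_entered_area(vehicle_xy, area_polygon):
    """Return True when the vehicle position lies inside the pick-up and
    drop-off area, modeled here as a polygon of (x, y) vertices."""
    x, y = vehicle_xy
    inside = False
    n = len(area_polygon)
    for i in range(n):
        x1, y1 = area_polygon[i]
        x2, y2 = area_polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)  # edge straddles the horizontal ray
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# Example: a rectangular area and a vehicle position just inside it.
# has_entered_area((5.0, 1.0), [(0, 0), (30, 0), (30, 3), (0, 3)])  ->  True
```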
The surrounding situation information230indicates the situation around the automated driving vehicle1. In particular, the surrounding situation information230includes the object information regarding the object (e.g., another vehicle7and the like) around the automated driving vehicle1. Therefore, the processor110can determine an available target stop space ST based on the surrounding situation information230. Details of this Step S200will be described later. In Step S300, the processor110performs the vehicle travel control such that the automated driving vehicle1travels toward the target stop space ST and stops in the target stop space ST. The vehicle travel control is performed based on the driving environment information200. Since a technique for controlling the vehicle to reach a target position is well known, a detailed description thereof will be omitted. Step S300is repeated until the automated driving vehicle1arrives at the target stop space ST. When the automated driving vehicle1arrives at the target stop space ST (Step S400; Yes), the process flow shown inFIG.7ends. The user2gets off the automated driving vehicle1or gets on the automated driving vehicle1. FIG.8is a flow chart showing Step S200(the determination of the target stop space ST). In Step S210, the processor110determines whether or not the standard stop space S0is available for the automated driving vehicle1to stop. The position of the standard stop space S0is obtained from the standard stop position information300. The surrounding situation information230includes the object information regarding the object (e.g., another vehicle7and the like) around the automated driving vehicle1. Therefore, the processor110can determine whether or not the standard stop space S0is available for the automated driving vehicle1to stop based on the surrounding situation information230and the standard stop position information300. When the standard stop space S0is available for the automated driving vehicle1to stop (Step S210; Yes), the processing proceeds to Step S270. In Step S270, the processor110sets the standard stop space S0as the target stop space ST. On the other hand, when the standard stop space S0is not available for the automated driving vehicle1to stop (Step S210; No), the processing proceeds to Step S220. In Step S220, the processor110determines, based on the travel plan of the automated driving, whether the purpose of the stopping this time is the drop-off or the pick-up. In the case of the drop-off (Step S220; Yes), the processing proceeds to Step S230. On the other hand, in the case of the pick-up (Step S220; No), the processing proceeds to Step S250. In Step S230, the processor110searches for the upstream available space SU in the upstream area5U. The upstream area5U, which is the pick-up and drop-off area5upstream of the standard stop space S0, can be recognized from the pick-up and drop-off area information250and the standard stop position information300. The upstream available space SU is an available space in which the automated driving vehicle1can be stopped. Information on the size of the automated driving vehicle1(not shown) is registered in the automated driving system10in advance. The object information regarding the object (e.g., another vehicle7and the like) around the automated driving vehicle1is obtained from the surrounding situation information230. The processor110can search for the upstream available space SU based on the surrounding situation information230. 
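A minimal sketch of the search performed in Step S230(and, symmetrically, in Step S250for the downstream area). The one-dimensional interval model of the searched area, the (start, end) gap representation, and the closest-to-S0 selection shown in the usage comment are assumptions for illustration.

```python
def find_available_spaces(area_start, area_end, occupied_intervals,
                          vehicle_length, margin):
    """List gaps in a 1-D strip that can hold the vehicle plus a stop margin.

    `occupied_intervals` are (start, end) extents of stopped vehicles inside
    the searched area, obtained from the surrounding situation information.
    """
    required = vehicle_length + margin
    spaces = []
    cursor = area_start
    for occ_start, occ_end in sorted(occupied_intervals):
        if occ_start - cursor >= required:
            spaces.append((cursor, occ_start))
        cursor = max(cursor, occ_end)
    if area_end - cursor >= required:
        spaces.append((cursor, area_end))
    return spaces

# Steps S280/S290: among the found gaps, the one whose center is closest to
# the standard stop space S0 may be chosen as the target stop space ST.
# target = min(find_available_spaces(0.0, 20.0, occupied, 5.0, 1.5),
#              key=lambda g: abs((g[0] + g[1]) / 2 - s0_center))
```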
Then, the processor110sets the upstream available space SU as the target stop space ST (Step S280). The processor110may set the upstream available space SU closest to the standard stop space S0as the target stop space ST. Being close to the standard stop space S0is preferable from a viewpoint of the convenience for the user2or the request from the user2. In Step S250, the processor110searches for the downstream available space SD in the downstream area5D. The method of searching for the downstream available space SD is similar to that for the upstream available space SU. Then, the processor110sets the downstream available space SD as the target stop space ST (Step S290). The processor110may set the downstream available space SD closest to the standard stop space S0as the target stop space ST. Being close to the standard stop space S0is preferable from the viewpoint of the convenience for the user2or the request from the user2. 1-5. Effects As described above, according to the present embodiment, the automated driving system10controls the automated driving vehicle1so as to stop in the target stop space ST in the pick-up and drop-off area5. When the standard stop space S0is available, the automated driving system10sets the standard stop space S0as the target stop space ST. On the other hand, when the standard stop space S0is not available, the automated driving system10selects a priority area according to whether to drop off or pick up the user2, and sets the target stop space ST in the priority area. In the case of the drop-off (seeFIG.3), the upstream area5U is the priority area. In the upstream area5U, the upstream available space SU in which the automated driving vehicle1can be stopped is searched for. Then, the upstream available space SU is set as the target stop space ST. Making the automated driving vehicle1stop not in the downstream area5D but in the upstream area5U enables the user2to more quickly get off the automated driving vehicle1. As a result, the user2becomes free more quickly and is able to use time efficiently. That is, the convenience and the time efficiency are improved from the viewpoint of the user2. On the other hand, in the case of the pick-up (seeFIG.4), the downstream area5D is the priority area. In the downstream area5D, the downstream available space SD in which the automated driving vehicle1can be stopped is searched for. Then, the downstream available space SD is set as the target stop space ST. The automated driving vehicle1stopped in the downstream area5D is able to exit the pick-up and drop-off area5earlier than when the automated driving vehicle1is stopped in the upstream area5U. Therefore, making the automated driving vehicle1stop not in the upstream area5U but in the downstream area5D enables the automated driving vehicle1with the user2to more quickly depart for the destination. That is, the time efficiency is improved from the viewpoint of the user2. Moreover, when viewed from the automated driving vehicle1stopped in the downstream area5D, the standard stop space S0exists rearward. Therefore, another vehicle7stopped in the standard stop space S0does not hinder the automated driving vehicle1from starting. Therefore, the automated driving system10is able to easily make the automated driving vehicle1start moving. This is preferable from a viewpoint of the vehicle travel control. In addition, the automated driving system10is able to make the automated driving vehicle1depart without delay. 
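As a recap of the first embodiment, the following is a minimal sketch of the Step S200 flow shown in FIG. 8. It assumes that the available spaces have already been extracted from the surrounding situation information 230 and are ordered by distance from the standard stop space S0; the function and parameter names are illustrative.

```python
from typing import List, Optional

def step_s200_determine_target(standard_space_available: bool,
                               purpose_is_drop_off: bool,
                               upstream_available: List[str],
                               downstream_available: List[str]) -> Optional[str]:
    """Candidate lists are assumed sorted so that index 0 is the available
    space closest to the standard stop space S0."""
    # Step S210 / S270: use the standard stop space S0 whenever it is free.
    if standard_space_available:
        return "S0"
    # Step S220: branch on the purpose registered in the travel plan.
    if purpose_is_drop_off:
        # Steps S230 / S280: upstream available space SU (closest one preferred).
        return upstream_available[0] if upstream_available else None
    # Steps S250 / S290: downstream available space SD (closest one preferred).
    return downstream_available[0] if downstream_available else None
```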
Prompt departure contributes not only to improvement in the time efficiency but also to reduction in the stress of the user 2 in the automated driving vehicle 1. After completion of the drop-off, the automated driving vehicle 1 may pick up another user 2 in the same pick-up and drop-off area 5. Since the automated driving vehicle 1 is stopped in the upstream area 5U at the time of the completion of the drop-off, the automated driving vehicle 1 is able to move to the downstream area 5D without going out of the pick-up and drop-off area 5. In other words, it is not necessary to go out of the pick-up and drop-off area 5 once, turn around on the outside road, and then enter the pick-up and drop-off area 5 again. That is, it is possible to efficiently make a transition from the drop-off to the pick-up in the same pick-up and drop-off area 5.

2. Second Embodiment

A second embodiment proposes a more flexible response when the standard stop space S0 is not available. An overlapping description with the first embodiment will be omitted as appropriate.

2-1. Drop-Off

FIG. 9 is a conceptual diagram for explaining a method of determining the stop space in the case of the drop-off according to the second embodiment. A plurality of other vehicles 7 are continuously stopped in the upstream direction XU from the standard stop space S0. The upstream available space SU closest to the standard stop space S0 is referred to as a "first upstream available space SU1" for the sake of convenience. When the first upstream available space SU1 is too far from the standard stop space S0, the drop-off position is also too far from the entrance of the facility 3 or from the position specified by the user 2. In such a case, it is not necessarily required to adhere to the first upstream available space SU1. It is also conceivable to use the downstream available space SD in the downstream area 5D instead of the first upstream available space SU1, which is too far from the standard stop space S0. In view of the above, according to the second embodiment, the target stop space ST is determined in consideration of a distance DU1 between the standard stop space S0 and the first upstream available space SU1, for the purpose of a flexible response. More specifically, when the distance DU1 is equal to or less than a threshold Dth, the upstream available space SU is set as the target stop space ST as in the case of the first embodiment. On the other hand, when the distance DU1 exceeds the threshold Dth, the downstream available space SD is set as the target stop space ST instead of the first upstream available space SU1. That is, although the upstream available space SU is basically used as the target stop space ST, the downstream available space SD may be used as the target stop space ST only when the first upstream available space SU1 is too far from the standard stop space S0. Such a method is also included in the concept of "preferentially" setting the upstream available space SU as the target stop space ST.

2-2. Pick-Up

FIG. 10 is a conceptual diagram for explaining a method of determining the stop space in the case of the pick-up according to the second embodiment. A plurality of other vehicles 7 are continuously stopped in the downstream direction XD from the standard stop space S0. The downstream available space SD closest to the standard stop space S0 is referred to as a "first downstream available space SD1" for the sake of convenience.
As in the case of the drop-off described above, the target stop space ST is determined in consideration of a distance DD1 between the standard stop space S0 and the first downstream available space SD1, for the purpose of a flexible response. More specifically, when the distance DD1 is equal to or less than a threshold Dth, the downstream available space SD is set as the target stop space ST as in the case of the first embodiment. On the other hand, when the distance DD1 exceeds the threshold Dth, the upstream available space SU is set as the target stop space ST instead of the first downstream available space SD1. That is, although the downstream available space SD is basically used as the target stop space ST, the upstream available space SU may be used as the target stop space ST only when the first downstream available space SD1 is too far from the standard stop space S0. Such a method is also included in the concept of "preferentially" setting the downstream available space SD as the target stop space ST.

2-3. Process Flow

FIG. 11 is a flow chart showing Step S200 (the determination of the target stop space ST) according to the second embodiment. An overlapping description with the first embodiment described in FIG. 8 will be omitted as appropriate. In Step S230 (drop-off), the processor 110 searches for the upstream available space SU in the upstream area 5U. The first upstream available space SU1 is the upstream available space SU closest to the standard stop space S0. In subsequent Step S240, the processor 110 determines whether or not the distance DU1 between the standard stop space S0 and the first upstream available space SU1 is equal to or less than the threshold Dth. When the distance DU1 is equal to or less than the threshold Dth (Step S240; Yes), the processor 110 sets the upstream available space SU as the target stop space ST (Step S280). On the other hand, when the distance DU1 exceeds the threshold Dth (Step S240; No), the processor 110 searches for the downstream available space SD in the downstream area 5D, and sets the downstream available space SD as the target stop space ST (Step S290). In Step S250 (pick-up), the processor 110 searches for the downstream available space SD in the downstream area 5D. The first downstream available space SD1 is the downstream available space SD closest to the standard stop space S0. In subsequent Step S260, the processor 110 determines whether or not the distance DD1 between the standard stop space S0 and the first downstream available space SD1 is equal to or less than the threshold Dth. When the distance DD1 is equal to or less than the threshold Dth (Step S260; Yes), the processor 110 sets the downstream available space SD as the target stop space ST (Step S290). On the other hand, when the distance DD1 exceeds the threshold Dth (Step S260; No), the processor 110 searches for the upstream available space SU in the upstream area 5U and sets the upstream available space SU as the target stop space ST (Step S280).

2-4. Effects

As described above, according to the second embodiment, the target stop space ST is basically determined in the same manner as in the first embodiment. However, only when the first upstream available space SU1 is too far from the standard stop space S0, the downstream available space SD is used as the target stop space ST instead. Similarly, only when the first downstream available space SD1 is too far from the standard stop space S0, the upstream available space SU is used as the target stop space ST instead.
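The following is a minimal sketch of the second embodiment's Step S200 flow in FIG. 11, under the same assumptions as the earlier sketch and with the threshold comparison added; the function and parameter names are illustrative, not taken from the patent.

```python
from typing import Optional

def step_s200_determine_target_v2(standard_space_available: bool,
                                  purpose_is_drop_off: bool,
                                  first_upstream: Optional[str], dist_du1: float,
                                  first_downstream: Optional[str], dist_dd1: float,
                                  d_th: float) -> Optional[str]:
    """dist_du1 / dist_dd1 stand for the distances DU1 / DD1 from the standard
    stop space S0 to the first upstream / downstream available space."""
    if standard_space_available:                      # Step S210; Yes -> S270
        return "S0"
    if purpose_is_drop_off:                           # Step S220; Yes
        if first_upstream is not None and dist_du1 <= d_th:
            return first_upstream                     # Step S240; Yes -> S280
        return first_downstream                       # Step S240; No  -> S290
    if first_downstream is not None and dist_dd1 <= d_th:
        return first_downstream                       # Step S260; Yes -> S290
    return first_upstream                             # Step S260; No  -> S280
```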
This flexible response can prevent the target stop space ST from becoming too far from the standard stop space S0. As a result, the user 2's dissatisfaction caused by the target stop space ST being too far from the standard stop space S0 is reduced. | 38,567 |
11858535 | DETAILED DESCRIPTION While illustrative embodiments have been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention. Hereinafter, various embodiments of this document are described in detail with reference to the accompanying drawings. FIG.1is a diagram illustrating an electronic device100according to various embodiments.FIG.2is a diagram for describing operational characteristics of a recursive network200according to various embodiments.FIG.3is a diagram for describing illustrative operational characteristics of the recursive network200. Referring toFIG.1, the electronic device100according to various embodiments may include at least one of a communication module110, a camera module120, a sensor module130, an input module140, an output module150, a driving module160, a memory170or a processor180. In an embodiment, at least any one of the components of the electronic device100may be omitted or one or more other components may be added to the electronic device100. The communication module110may support communication between the electronic device100and an external device (not illustrated). In this case, the communication module110may include at least any one of a wireless communication module or a wired communication module. According to one embodiment, the wireless communication module may support at least any one of a long-distance communication method or a short-distance communication method. The short-distance communication method may include at least any one of Bluetooth, WiFi direct, or infrared data association (IrDA), for example. In the wireless communication method, communication may be performed using the long-distance communication method over a network. The network may include at least any one of computer networks, such as a cellular network, the Internet, a local area network (LAN) or a wide area network (WAN). According to another embodiment, the wireless communication module may support communication with a global navigation satellite system (GNSS). For example, the GNSS may include a global positioning system (GPS). The camera module120may capture an external image of the electronic device100. In this case, the camera module120may be installed at a predetermined location of the electronic device100, and may capture an external image. Furthermore, the camera module120may generate image data for an external image of the electronic device100. For example, the camera module120may include at least any one of a lens, at least one image sensor, an image signal processor or a flash. The sensor module130may detect a state of the electronic device100or an external environment of the electronic device100. Furthermore, the sensor module130may generate sensing data for the state of the electronic device100or the external environment of the electronic device100. For example, the sensor module130may include at least any one of an acceleration sensor, a gyroscope sensor, an image sensor, a RADAR sensor, a LiDAR sensor or an ultrasonic sensor. The input module140may receive, from the outside of the electronic device100, an instruction or data to be used for at least any one of the components of the electronic device100. For example, the input module140may include at least any one of a microphone, a mouse or a keyboard. 
In an embodiment, the input module may include at least any one of a touch circuitry configured to detect a touch or a sensor circuitry configured to measure the intensity of a force generated by a touch. The output module150may provide information to the outside of the electronic device100. In this case, the output module150may include at least any one of a display module or an audio module. The display module may visually output information. For example, the display module may include at least any one of a display, a hologram device, or a projector. In an embodiment, the display module may be assembled with at least any one of the touch circuitry or sensor circuitry of the input module140, and may be implemented as a touch screen. The audio module may output information in a sound form. For example, the audio module may include at least any one of a speaker or a receiver. The driving module160may operate for an operation of the electronic device100. According to one embodiment, if the electronic device100is an autonomous vehicle, the driving module160may include various parts. According to another embodiment, if the electronic device100is mounted on a vehicle to implement an autonomous vehicle, the driving module160may be connected to various parts of the vehicle. Accordingly, the driving module160may operate while controlling at least any one of the parts. For example, the parts may include at least any one of an engine module, an acceleration module, a braking module, a steering module or a navigation module. The memory170may store at least any one of a program or data used by at least any one of the components of the electronic device100. For example, the memory170may include at least any one of a volatile memory or a non-volatile memory. The processor180may control the components of the electronic device100by executing a program of the memory170, and may perform data processing or operations. The processor180may detect prediction data (Yfinal) from input data X using a preset recursive network200. In this case, each of the input data X and the prediction data (Yfinal) may be time-series data. To this end, the processor180may include the recursive network200, such as that illustrated inFIG.2. In this case, the recursive network200may predict first prediction data (Yinitial) having a second time interval based on the input data X having a first time interval. Furthermore, the recursive network200may predict second prediction data (Yfinal) having a third time interval based on the input data and the first prediction data (Yinitial). Accordingly, the processor180may detect the second prediction data (Yfinal) as substantial prediction data (Yfinal). In this case, the first time interval may be the same as or different from the second time interval and the third time interval. The second time interval may be the same as or different from the third time interval. Accordingly, the processor180may control the driving of the electronic device100using the second prediction data (Yfinal). According to an embodiment, as illustrated inFIG.2(a), the recursive network200may be implemented in a form in which an external recurrence structure is coupled to an internal recurrence structure. The input data X may be input to the recursive network200. The recursive network200may detect the first prediction data (Yinitial) based on the input data X through the internal recurrence structure. 
Furthermore, the first prediction data (Yinitial) may be input to the recursive network200through the external recurrence structure. Accordingly, the recursive network200may detect the second prediction data (Yfinal) based on the input data X and the first prediction data (Yinitial). According to another embodiment, as illustrated inFIG.2(b), the recursive network200may be implemented in a form in which internal recurrence structures are coupled. The input data X may be input to a first internal recurrence structure of the recursive network200. The recursive network200may detect the first prediction data (Yinitial) based on the input data X through the first internal recurrence structure. Furthermore, the input data X and the first prediction data (Yinitial) may be input to a second internal recurrence structure of the recursive network200. The recursive network200may detect the second prediction data (Yfinal) based on the input data X and the first prediction data (Yinitial) through the second internal recurrence structure. For example, the electronic device100may be related to a vehicle300. For example, the electronic device100may be the vehicle300, that is, an autonomous vehicle. For another example, the electronic device100may be mounted on the vehicle300to implement an autonomous vehicle. In this case, surrounding objects may be surrounding vehicles301of the electronic device100. For example, the surrounding vehicles301may travel at their speeds, and at least any one of the surrounding vehicles301may be stopping. In such a case, as illustrated inFIG.3, the processor180may detect prediction data (Yfinal) for the surrounding vehicles301based on input data X for the surrounding vehicles301using the recursive network200. The input data X may include moving trajectories of the surrounding vehicles301. Furthermore, the prediction data (Yfinal) may include future trajectories of the surrounding vehicles301. For example, the processor180may predict future trajectories of the surrounding vehicles301for five seconds based on moving trajectories of the surrounding vehicles301for three seconds. The processor180may check moving trajectories of the surrounding vehicles301. The processor180may collect information on a surrounding situation of the electronic device100. In this case, the processor180may collect information on the surrounding situation of the electronic device100based on at least one of image data obtained through the camera module120or sensing data obtained through the sensor module130. For example, the information on a surrounding situation may include a longitudinal location, lateral location, longitudinal speed, and lateral speed of each surrounding vehicle301, a distance from the center of a lane, a lane number of each surrounding vehicle301on the basis of the vehicle300. Accordingly, the processor180may check moving trajectories of the surrounding vehicles301based on the information on a surrounding situation of the electronic device100. That is, the moving trajectories of the surrounding vehicles301may be detected as the input data X. In this case, the input data X may be time-series data. The processor180may predict primary future trajectories of the surrounding vehicles301based on moving trajectories of the surrounding vehicles301using the recursive network200. In this case, the primary future trajectories of the surrounding vehicles301may indicate an interaction between the surrounding vehicles301. 
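A minimal sketch of the recurrence idea of FIG. 2 follows. It assumes a single predictor module that maps the input data X (and, optionally, a previous prediction) to a new prediction; the class and argument names are illustrative, and the internal structure (the encoder 410, attention module 420, and decoder 430 of FIG. 4, described below) is abstracted away.

```python
import torch
import torch.nn as nn

class RecursivePredictor(nn.Module):
    """Sketch of the external recurrence in FIG. 2(a): the same predictor is
    applied twice, the second pass conditioned on its own initial prediction,
    so that Y_final reflects the interaction captured in Y_initial."""

    def __init__(self, predictor: nn.Module):
        super().__init__()
        self.predictor = predictor  # assumed to map (x, y_prev or None) -> y

    def forward(self, x: torch.Tensor, num_recurrences: int = 2) -> torch.Tensor:
        y = self.predictor(x, None)        # first prediction data Y_initial
        for _ in range(num_recurrences - 1):
            y = self.predictor(x, y)       # second prediction data Y_final
        return y
```

The chained form of FIG. 2(b) would correspond to instantiating separate internal structures for each pass instead of reusing one; the data flow is otherwise the same.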
Accordingly, the primary future trajectories of the surrounding vehicles301may be detected as first prediction data (Yinitial). In this case, the first prediction data (Yinitial) may be time-series data. For example, the processor180may predict primary future trajectories of the surrounding vehicles301for five seconds based on moving trajectories of the surrounding vehicles301for three seconds. The processor180may predict the final future trajectories of the surrounding vehicles301based on the moving trajectories of the surrounding vehicles301and the primary future trajectories of the surrounding vehicles301using the recursive network200. That is, the processor180may detect the final future trajectories of the surrounding vehicles301by updating the primary future trajectories of the surrounding vehicles301. Accordingly, the final future trajectories of the surrounding vehicles301may be detected as the second prediction data (Yfinal). In this case, the second prediction data (Yfinal) may be time-series data. For example, the processor180may predict the final future trajectories of the surrounding vehicles301for five seconds based on moving trajectories of the surrounding vehicles301for three seconds and primary future trajectories of the surrounding vehicles301for five seconds. Accordingly, the processor180may control the driving of the vehicle300based on the future trajectories of the surrounding vehicles301. At this time, the processor180may predict driving trajectories of the surrounding vehicles301for the vehicle300. Furthermore, the processor180may control the driving of the vehicle300based on a driving trajectory of the electronic device100. In this case, the processor180may control the driving module160based on the driving trajectory. FIG.4is a diagram illustrating an internal configuration of the recursive network200according to various embodiments.FIG.5is a diagram illustrating an internal configuration of the recursive network200.FIG.6is a diagram illustrating a detailed configuration of an encoder410inFIG.4.FIGS.7and8are diagrams illustrating detailed configurations of an attention module420and a decoder430inFIG.4. Referring toFIG.4, the recursive network200according to various embodiments may include at least one of at least one encoder410, at least one attention module420or at least one decoder430. In an embodiment, at least any one of the components of the recursive network200may be omitted, or one or more other components may be added to the recursive network200. For example, the electronic device100may be related to the vehicle300. For example, the electronic device100may be the vehicle300, that is, an autonomous vehicle. For another example, the electronic device100may be mounted on the vehicle300to implement an autonomous vehicle. In this case, the recursive network200may include a plurality of encoders410, a plurality of attention modules420, and a plurality of decoders430. In such a case, as illustrated inFIG.5, at least some of the encoders410, at least some of the attention modules420, and at least some of the decoders430may be activated in accordance with the number of surrounding vehicles301. In this case, any one of the encoders410, any one of the attention modules420, and any one of the decoders430may operate with respect to one surrounding vehicle301. The encoder410may detect a feature vector (hh,i, hf,i) based on at least one of input data (X; xi) or first prediction data (Yinitial; ŷi). 
In this case, the encoder 410 may extract hidden state information and memory cell state information based on at least one of the input data (X; xi) or the first prediction data (Yinitial; ŷi), and may detect the feature vector (hh,i, hf,i) based on the hidden state information and the memory cell state information. In this case, as illustrated in FIG. 6, the encoder 410 may include at least one of a first encoder 610, a second encoder 620 or a coupling module 630. In some embodiments, the second encoder 620 may include the coupling module 630. The first encoder 610 may detect a first feature vector (hh,i) based on the input data (X; xi) having a first time interval. For example, the first encoder 610 may include a plurality of recurrent neural networks (RNN). For example, each of the RNNs may be a long short-term memory (LSTM) network. Each of the RNNs may process the input data at each time point within the first time interval. The second encoder 620 may detect a second feature vector (hf,i) based on the first prediction data (Yinitial; ŷi) having a second time interval. For example, the second encoder 620 may include a plurality of RNNs. For example, each of the RNNs may be an LSTM network. Each of the RNNs may process the first prediction data (Yinitial; ŷi) at each time point within the second time interval. The coupling module 630 may couple the first feature vector (hh,i) and the second feature vector (hf,i). In this case, the coupling module 630 may couple the first feature vector (hh,i) and the second feature vector (hf,i) using a fully connected (FC) layer. In this case, the coupling module 630 may generate a third feature vector (hi) by coupling the first feature vector (hh,i) and the second feature vector (hf,i). For example, the encoder 410 may detect each of the feature vectors (hh,i, hf,i, and hi) of all the surrounding vehicles 310.

The attention module 420 may calculate the importance (αi) of the feature vector (hi) detected by the encoder 410. The importance (αi) may be obtained by quantifying, as a relative value, the degree to which a corresponding feature vector (hi) has an influence on generating the resulting model, that is, the first prediction data (Yinitial; ŷi) or the second prediction data (Yfinal). For example, as illustrated in FIGS. 7 and 8, the attention module 420 may calculate the importance (αi) of each feature vector (hi) based on the feature vectors (hi) of all the surrounding vehicles 310. In this case, the attention module 420 may calculate the importance (αi) of each feature vector (hi) among the feature vectors (hi) using at least any one of the FC layer or a softmax function. Furthermore, as illustrated in FIGS. 7 and 8, the attention module 420 may multiply the feature vector (hi) by the importance (αi), and may transmit a multiplication result (si) to the decoder 430. In this case, in a first internal recurrence, when a first feature vector (hi1) is received from the encoder 410, the attention module 420 may calculate the importance (αi1) of the first feature vector (hi1), and may multiply the first feature vector (hi1) by the importance (αi1). In an n-th internal recurrence, when a third feature vector (hin) is received from the encoder 410, the attention module 420 may calculate the importance (αin) of the third feature vector (hin), and may multiply the third feature vector (hin) by the importance (αin).
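Before turning to the decoder 430, the following is a minimal PyTorch-style sketch of the encoder 410 (FIG. 6) and the attention module 420 (FIGS. 7 and 8), assuming single-layer LSTMs and illustrative dimensions; it is a simplified interpretation, not the patented implementation itself.

```python
import torch
import torch.nn as nn

class Encoder410(nn.Module):
    """One LSTM over the past trajectory x_i (first encoder 610), one LSTM over
    the initial future prediction (second encoder 620), and an FC layer that
    couples the feature vectors h_h,i and h_f,i into h_i (coupling module 630)."""

    def __init__(self, in_dim: int = 2, hidden_dim: int = 64, feat_dim: int = 64):
        super().__init__()
        self.past_lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.future_lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.couple = nn.Linear(2 * hidden_dim, feat_dim)

    def forward(self, x_i: torch.Tensor, y_init_i: torch.Tensor) -> torch.Tensor:
        _, (h_h, _) = self.past_lstm(x_i)         # first feature vector h_h,i
        _, (h_f, _) = self.future_lstm(y_init_i)  # second feature vector h_f,i
        return self.couple(torch.cat([h_h[-1], h_f[-1]], dim=-1))  # third feature vector h_i

class Attention420(nn.Module):
    """Scores the feature vectors of all surrounding vehicles, normalizes the
    scores with softmax to obtain the importance alpha_i, and passes the
    weighted result s_i = alpha_i * h_i on to the decoder 430."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (num_vehicles, feat_dim)
        alpha = torch.softmax(self.score(h), dim=0)      # importance alpha_i
        return alpha * h                                 # multiplication result s_i
```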
The decoder 430 may output at least one of the first prediction data (Yinitial; ŷi) or the second prediction data (Yfinal; ŷi) using the feature vectors (hi), based on the importance (αi) calculated by the attention module 420. In this case, the decoder 430 may output at least one of the first prediction data (Yinitial; ŷi) or the second prediction data (Yfinal; ŷi), based on the hidden state information and memory cell state information extracted by the encoder 410 and the multiplication result (si) calculated by the attention module 420. For example, as illustrated in FIG. 7, the decoder 430 may detect at least one of the first prediction data (Yinitial; ŷi) or the second prediction data (Yfinal; ŷi) based on the multiplication result (si) for all the surrounding vehicles 310. In this case, in the first internal recurrence, when a multiplication result (si1) of a third feature vector (hi1) and the importance (αi1) is received from the attention module 420, the decoder 430 may detect the first prediction data (Yinitial; ŷi). In the n-th internal recurrence, when a multiplication result (sin) of a third feature vector (hin) and the importance (αin) is received from the attention module 420, the decoder 430 may detect the second prediction data (Yfinal; ŷi). For example, as illustrated in FIG. 7, the decoder 430 may include at least one of a first decoder 710 or a second decoder 720. The first decoder 710 may detect a lateral movement of each surrounding vehicle 310. The second decoder 720 may detect a longitudinal movement of each surrounding vehicle 310. Accordingly, the decoder 430 may generate the first prediction data (Yinitial; ŷi) or the second prediction data (Yfinal; ŷi) by combining the lateral movement and the longitudinal movement of each surrounding vehicle 310.

FIG. 9 is a diagram illustrating an operating method of the electronic device 100 according to various embodiments. Referring to FIG. 9, at operation 910, the electronic device 100 may detect input data X. In this case, the input data X may be time-series data. The processor 180 may detect the input data X having a first time interval. For example, the electronic device 100 may be related to the vehicle 300. In such a case, the processor 180 may check the moving trajectories of the surrounding vehicles 301. The processor 180 may collect information on a surrounding situation of the electronic device 100. In this case, the processor 180 may collect the information on a surrounding situation of the electronic device 100 based on at least one of image data obtained through the camera module 120 or sensing data obtained through the sensor module 130. Accordingly, the processor 180 may check the moving trajectories of the surrounding vehicles 301 based on the information on a surrounding situation of the electronic device 100. That is, the moving trajectories of the surrounding vehicles 301 may be detected as the input data X. At operation 920, the electronic device 100 may detect the first prediction data (Yinitial) based on the input data X using the recursive network 200. In this case, the first prediction data (Yinitial) may be time-series data. The processor 180 may detect the first prediction data (Yinitial) having a second time interval using the recursive network 200. In this case, the second time interval may be the same as or different from the first time interval. For example, the electronic device 100 may be related to the vehicle 300.
In such a case, the processor180may predict primary future trajectories of the surrounding vehicles301based on the moving trajectories of the surrounding vehicles301using the recursive network200. That is, the primary future trajectories of the surrounding vehicles301may be detected as the first prediction data (Yinitial). At operation930, the electronic device100may detect second prediction data (Yfinal) having a third time interval based on the input data X and the first prediction data (Yinitial) using the recursive network200. In this case, the second prediction data (Yfinal) may be time-series data. The processor180may detect the second prediction data (Yfinal) having the third time interval using the recursive network200. In this case, the third time interval may be the same as or different from the second time interval. For example, the electronic device100may be related to the vehicle300. In such a case, the processor180may predict the final future trajectories of the surrounding vehicles301based on the moving trajectories of the surrounding vehicles301and the primary future trajectories of the surrounding vehicles301using the recursive network200. That is, the processor180may detect the final future trajectories of the surrounding vehicles301by updating the primary future trajectories of the surrounding vehicles301. That is, the final future trajectories of the surrounding vehicles301may be detected as the second prediction data (Yfinal). In an embodiment, the electronic device100may repeat operation930(n−1) times. That is, after performing operation920once, the electronic device100may repeat operation930(n−1) times. In this case, the processor180may update the first prediction data (Yinitial) with the second prediction data (Yfinal), and may detect the second prediction data (Yfinal) having the third time interval, based on the input data X and the updated first prediction data (Yinitial). The processor180may detect the second prediction data (Yfinal) having the third time interval using the recursive network200. Accordingly, the processor180may detect the final future trajectories of the surrounding vehicles301as the second prediction data (Yfinal) detected at an (n−1) position. Accordingly, the processor180may control the driving of the electronic device100by using the second prediction data (Yfinal). For example, the electronic device100may be related to the vehicle300. In such a case, the processor180may control the driving of the vehicle300by using the final future trajectories of the surrounding vehicles301. At this time, the processor180may predict a driving trajectory of the vehicle300. Furthermore, the processor180may control the driving of the vehicle300based on the driving trajectory of the electronic device100. In this case, the processor180may control the driving module160based on the driving trajectory. FIGS.10and11are diagrams for describing operating effects of the electronic device100according to various embodiments. Referring toFIGS.10and11, the accuracy of the final prediction data (Yfinal) detected in the electronic device100using the recursive network200is high. In order to verify the accuracy, the electronic device100was mounted on the vehicle300. The electronic device100predicts future trajectories of the surrounding vehicles301using the recursive network200. Furthermore, the electronic device100measured actual moving trajectories of the surrounding vehicles301. 
Accordingly, the electronic device100compared the future trajectories and actual moving trajectories of the surrounding vehicles301. As a result of the comparison, the future trajectories and actual moving trajectories of the surrounding vehicles301were almost identical. As illustrated inFIG.10, the electronic device100predicted a lane change situation for the surrounding vehicles301in addition to a common lane maintenance situation. Accordingly, the electronic device100may accurately predict a driving trajectory of the vehicle300based on the future trajectories of the surrounding vehicles301. In this case, the electronic device100may accurately predict a driving trajectory of the vehicle300in various driving environments, including a crossway, in addition to a driving environment, such as an expressway. According to various embodiments, the electronic device100may detect the final prediction data (Yfinal) based on the input data X in various fields using the recursive network200. That is, the electronic device100may detect the final prediction data (Yfinal) based on the input data X and the primary prediction data (Yinitial) according to a future interaction. For example, various fields may include a machine translation field, an image (image/video) caption field, and a voice recognition field in addition to the aforementioned field related to the vehicle300. FIG.12is a diagram for describing illustrative operational characteristics of the recursive network200. Referring toFIG.12, the electronic device100may perform machine translation using the recursive network200. In this case, the recursive network200may detect first prediction data (Yinitial), for example, a German sentence “Sie liebt dich.” (translated into “She loves you.” in English) based on input data X, for example, an English sentence “He loves you.” Furthermore, the recursive network200may detect second prediction data (Yfinal), for example, a German sentence “Er liebt dich” (translated into “He loves you.” in English) based on the input data X and the first prediction data (Yinitial). Accordingly, the accuracy of the final prediction data (Yfinal) detected in the electronic device100using the recursive network200is high. According to various embodiments, the electronic device100may detect the final prediction data (Yfinal) based on the primary prediction data (Yinitial), detected based on the input data X, in addition to the input data X using the recursive network200. Accordingly, the electronic device can improve the accuracy of the final prediction data. For example, if the electronic device100is related to the vehicle300, the electronic device100may predict the final future trajectories of the surrounding vehicles301based on moving trajectories of the surrounding vehicles301and primary future trajectories of the surrounding vehicles301predicted based on the moving trajectories. That is, the electronic device100can more accurately predict the final future trajectories of the surrounding vehicles301by considering an interaction between the surrounding vehicles301. Furthermore, the electronic device100can more accurately predict a driving trajectory of a vehicle based on the final future trajectories of the surrounding vehicles301. Accordingly, the electronic device100can secure the safety of the vehicle300. 
An operating method of the electronic device100according to various embodiments may include an operation of detecting input data X having a first time interval, an operation of detecting first prediction data (Yinitial) having a second time interval based on the input data X using the preset recursive network200, and an operation of detecting second prediction data (Yfinal) having a third time interval based on the input data X and the first prediction data (Yinitial) using the recursive network200. According to various embodiments, the recursive network200may include the encoder410configured to detect each of a plurality of feature vectors (hh,i, hf,I, hi) based on at least one of input data (X; xi) or first prediction data (Yinitial; ŷi), the attention module420configured to calculate each of pieces of importance (αi) of the feature vectors (hi) by calculating the importance (αi) of each feature vector (hi) between the feature vectors (hh,i, hf,I, hi), and the decoder430configured to output at least any one of the first prediction data (Yinitial; ŷi) or the second prediction data (Yfinal; ŷi) using the feature vectors (hi) based on the pieces of importance (αi). According to various embodiments, each of the pieces of importance (αi) may indicate a degree that each of the feature vectors (hh,i, hf,I, hi) has an influence on generating the first prediction data or the second prediction data. According to various embodiments, the encoder410may include the first encoder610configured to detect first feature vectors (hh,i) based on input data (X; xi), the second encoder620configured to detect each of second feature vectors (hf,i) based on first prediction data (Yinitial; ŷi) when the first prediction data (Yinitial; ŷi) is received from the decoder430, and the coupling module630configured to couple each of the first feature vectors (hh,i) and each of the second feature vectors (hf,i) when all of the first feature vectors (hh,i) and the second feature vectors (hf,i) are detected. According to various embodiments, the first time interval may indicate the past time interval. The second time interval and the third time interval may indicate future time intervals. At least two of the first time interval, the second time interval or the third time interval may be the same as or different from each other. According to various embodiments, the encoder410may include a plurality of RNNs. According to various embodiments, the attention module420may transmit a multiplication result (si) of each of the feature vectors (hi) and each of the pieces of importance (αi) to the decoder430. According to various embodiments, the decoder430may include a plurality of RNNs. According to various embodiments, the electronic device100may be mounted on the vehicle300, or may be the vehicle300. The input data X may include a moving trajectory of the surrounding vehicle301. The first prediction data (Yinitial) may include the future trajectory of the surrounding vehicle301. According to various embodiments, the detecting of the second prediction data (Yfinal) may include updating the future trajectory based on the moving trajectory and future trajectory of the surrounding vehicle301using the recursive network200. According to various embodiments, the operating method of the electronic device100may further include an operation of controlling the driving of the vehicle300based on the second prediction data (Yfinal). 
The electronic device100according to various embodiments may include the memory170, and the processor180connected to the memory170and configured to execute at least one instruction stored in the memory170. According to various embodiments, the processor180may be configured to detect input data X having a first time interval, detect first prediction data (Yinitial) having a second time interval based on the input data X using the preset recursive network200, and detect second prediction data (Yfinal) having a third time interval based on the input data X and the first prediction data (Yinitial) using the recursive network200. According to various embodiments, the recursive network200may include the encoder410configured to detect each of a plurality of feature vectors (hh,i, hf,I, hi) based on at least one of input data (X; xi) or first prediction data (Yinitial; ŷi), the attention module420configured to calculate each of pieces of importance (αi) of the feature vectors (hi) by calculating the importance (αi) of each feature vector (hi) between the feature vectors (hh,i, hf,I, hi), and the decoder430configured to output at least any one of the first prediction data (Yinitial; ŷi) or the second prediction data (Yfinal; ŷi) using the feature vectors (hi) based on the pieces of importance (αi). According to various embodiments, each of the pieces of importance (αi) may indicate a degree that each of the feature vectors (hh,i, hf,I, hi) has an influence on generating the first prediction data or the second prediction data. According to various embodiments, the encoder410may include the first encoder610configured to detect first feature vectors (hh,i) based on input data (X; xi), the second encoder620configured to detect each of second feature vectors (hf,i) based on first prediction data (Yinitial; ŷi) when the first prediction data (Yinitial; ŷi) is received from the decoder430, and the coupling module630configured to couple each of the first feature vectors (hh,i) and each of the second feature vectors (hf,i) when all of the first feature vectors (hh,i) and the second feature vectors (hf,i) are detected. According to various embodiments, the first time interval may indicate the past time interval. The second time interval and the third time interval may indicate future time intervals. At least two of the first time interval, the second time interval or the third time interval may be the same as or different from each other. According to various embodiments, the encoder410may include a plurality of RNNs. According to various embodiments, the attention module420may transmit a multiplication result (si) of each of the feature vectors (hi) and each of the pieces of importance (αi) to the decoder430. According to various embodiments, the decoder430may include a plurality of RNNs. According to various embodiments, the electronic device100may be mounted on the vehicle300, or may be the vehicle300. The input data X may include a moving trajectory of the surrounding vehicle301. The first prediction data (Yinitial) may include the future trajectory of the surrounding vehicle301. According to various embodiments, the processor180may be configured to update the future trajectory based on the moving trajectory and future trajectory of the surrounding vehicle301using the recursive network200and to output the updated future trajectory as the second prediction data (Yfinal). 
According to various embodiments, the processor180may be configured to control the driving of the vehicle300based on the second prediction data (Yfinal). Various embodiments of this document may be implemented as a computer program including one or more instructions stored in a storage medium (e.g., the memory170) readable by a computer device (e.g., the electronic device100). For example, a processor (e.g., the processor180) of the computer device may invoke at least one of the one or more instructions stored in the storage medium, and may execute the instruction. This enables the computer device to operate to perform at least one function based on the invoked at least one instruction. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The storage medium readable by the computer device may be provided in the form of a non-transitory storage medium. In this case, the term “non-transitory” merely means that the storage medium is a tangible device and does not include a signal (e.g., electromagnetic wave). The term does not distinguish between a case where data is semi-permanently stored in the storage medium and a case where data is temporally stored in the storage medium. A computer program according to various embodiments may execute an operation of detecting input data X having a first time interval, an operation of detecting first prediction data (Yinitial) having a second time interval based on the input data X using the preset recursive network200, and an operation of detecting second prediction data (Yfinal) having a third time interval based on the input data X and the first prediction data (Yinitial) using the recursive network200. According to various embodiments, the recursive network200may include the encoder410configured to detect each of a plurality of feature vectors (hh,i, hf,I, hi) based on at least one of input data (X; xi) or first prediction data (Yinitial; ŷi), the attention module420configured to calculate each of pieces of importance (αi) of the feature vectors (hi) by calculating the importance (αi) of each feature vector (hi) between the feature vectors (hh,i, hf,I, hi), and the decoder430configured to output at least any one of the first prediction data (Yinitial; ŷi) or the second prediction data (Yfinal; ŷi) using the feature vectors (hi) based on the pieces of importance (αi). According to various embodiments, each of the pieces of importance (αi) may indicate a degree that each of the feature vectors (hh,i, hf,I, hi) has an influence on generating the first prediction data or the second prediction data. The embodiments of this document and the terms used in the embodiments are not intended to limit the technology described in this document to a specific embodiment, but should be construed as including various changes, equivalents and/or alternatives of a corresponding embodiment. In the description of the drawings, similar reference numerals may be used in similar components. An expression of the singular number may include an expression of the plural number unless clearly defined otherwise in the context. In this document, an expression, such as “A or B”, “at least one of A and/or B”, “A, B or C” or “at least one of A, B and/or C”, may include all of possible combinations of listed items together. 
Expressions, such as “a first,” “a second,” “the first” and “the second”, may modify corresponding components regardless of their sequence or importance, and are used to only distinguish one component from the other component and do not limit corresponding components. When it is described that one (e.g., first) component is “(functionally or communicatively) connected to” or “coupled with” the other (e.g., second) component, the one component may be directly connected to the other component or may be connected to the other component through another component (e.g., third component). The “module” used in this document includes a unit configured with hardware, software or firmware, and may be interchangeably used with a term, such as logic, a logical block, a part or a circuit. The module may be an integrated part, a minimum unit to perform one or more functions, or a part thereof. For example, the module may be configured with an application-specific integrated circuit (ASIC). According to various embodiments, each (e.g., module or program) of the described components may include a single entity or a plurality of entities. According to various embodiments, one or more of the aforementioned components or operations may be omitted or one or more other components or operations may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into one component. In such a case, the integrated components may perform one or more functions of each of a plurality of components identically with or similar to that performed by a corresponding one of the plurality of components before the components are integrated. According to various embodiments, other components performed by a module, an operation or another program may be executed sequentially, in parallel, repeatedly or heuristically, or one or more of the operations may be executed in different order or may be omitted, or one or more other operations may be added. According to various embodiments, the electronic device can detect final prediction data based on primary prediction data, detected based on input data, in addition to the input data using the recursive network. Accordingly, the electronic device can improve the accuracy of the final prediction data. For example, if the electronic device is related to a vehicle, the electronic device can predict the final future trajectories of surrounding vehicles based on moving trajectories of the surrounding vehicles and primary future trajectories predicted based on the moving trajectories. That is, the electronic device can more accurately predict the final future trajectories of the surrounding vehicles by considering an interaction between the surrounding vehicles. Furthermore, the electronic device can more accurately predict a driving trajectory of the vehicle based on the final future trajectories of the surrounding vehicles. Accordingly, the electronic device can secure the safety of the vehicle. | 40,548 |
11858536 | DETAILED DESCRIPTION The following describes the technology of this disclosure within the context of an autonomous vehicle for example purposes only. As described herein, the technology described herein is not limited to an autonomous vehicle and can be implemented within other autonomous platforms and other computing systems. With reference now toFIGS.1-11, example embodiments of the present disclosure will be discussed in further detail.FIG.1depicts a block diagram of an example operational scenario100according to example implementations of the present disclosure. The operational scenario100includes an autonomous platform105and an environment110. The environment110can be external to the autonomous platform105. The autonomous platform105, for example, can operate within the environment110. The environment110can include an indoor environment (e.g., within one or more facilities, etc.) or an outdoor environment. An outdoor environment, for example, can include one or more areas in the outside world such as, for example, one or more rural areas (e.g., with one or more rural travel ways, etc.), one or more urban areas (e.g., with one or more city travel ways, highways, etc.), one or more suburban areas (e.g., with one or more suburban travel ways, etc.), etc. An indoor environment, for example, can include environments enclosed by a structure such as a building (e.g., a service depot, manufacturing facility, etc.). The environment110can include one or more dynamic object(s)130(e.g., simulated objects, real-world objects, etc.). The dynamic object(s)130can include any number of moveable objects such as, for example, one or more pedestrians, animals, vehicles, etc. The dynamic object(s)130can move within the environment according to one or more trajectories135. Although trajectories135are depicted as emanating from dynamic object(s)130, it is also to be understood that relative motion within the environment110can include one or more trajectories of the autonomous platform105itself. For instance, aspects of the present disclosure relate to the generation of trajectories via a joint prediction/planning framework, and those trajectories can, in various implementations, take into account trajectories135of the dynamic object(s)130and/or one or more trajectories of the autonomous platform105itself. The autonomous platform105can include one or more sensor(s)115,120. The one or more sensors115,120can be configured to generate or store data descriptive of the environment110(e.g., one or more static or dynamic objects therein, etc.). The sensor(s)115,120can include one or more LIDAR systems, one or more Radio Detection and Ranging (RADAR) systems, one or more cameras (e.g., visible spectrum cameras or infrared cameras, etc.), one or more sonar systems, one or more motion sensors, or other types of image capture devices or sensors. The sensor(s)115,120can include multiple sensors of different types. For instance, the sensor(s)115,120can include one or more first sensor(s)115and one or more second sensor(s)120. The first sensor(s)115can include a different type of sensor than the second sensor(s)120. By way of example, the first sensor(s)115can include one or more imaging device(s) (e.g., cameras, etc.), whereas the second sensor(s)120can include one or more depth measuring device(s) (e.g., LIDAR device, etc.). The autonomous platform105can include any type of platform configured to operate within the environment110. 
For example, the autonomous platform105can include one or more different type(s) of vehicle(s) configured to perceive and operate within the environment110. The vehicles, for example, can include one or more autonomous vehicle(s) such as, for example, one or more autonomous trucks. By way of example, the autonomous platform105can include an autonomous truck, including an autonomous tractor coupled to a cargo trailer. In addition, or alternatively, the autonomous platform105can include any other type of vehicle such as one or more aerial vehicles, ground-based vehicles, water-based vehicles, space-based vehicles, etc. FIG.2depicts an example system overview200of the autonomous platform as an autonomous vehicle according to example implementations of the present disclosure. More particularly,FIG.2illustrates a vehicle205including various systems and devices configured to control the operation of the vehicle205. For example, the vehicle205can include an onboard vehicle computing system210(e.g., located on or within the autonomous vehicle, etc.) that is configured to operate the vehicle205. For example, the vehicle computing system210can represent or be an autonomous vehicle control system configured to perform the operations and functions described herein for joint prediction/planning of trajectories. Generally, the vehicle computing system210can obtain sensor data255from a sensor system235(e.g., sensor(s)115,120ofFIG.1, etc.) onboard the vehicle205, attempt to comprehend the vehicle's surrounding environment by performing various processing techniques on the sensor data255, and generate an appropriate motion plan through the vehicle's surrounding environment (e.g., environment110ofFIG.1, etc.). The vehicle205incorporating the vehicle computing system210can be various types of vehicles. For instance, the vehicle205can be an autonomous vehicle. The vehicle205can be a ground-based autonomous vehicle (e.g., car, truck, bus, etc.). The vehicle205can be an air-based autonomous vehicle (e.g., airplane, helicopter, etc.). The vehicle205can be a lightweight electric vehicle (e.g., bicycle, scooter, etc.). The vehicle205can be another type of vehicle (e.g., watercraft, etc.). The vehicle205can drive, navigate, operate, etc. with minimal or no interaction from a human operator (e.g., driver, pilot, etc.). In some implementations, a human operator can be omitted from the vehicle205(or also omitted from remote control of the vehicle205). In some implementations, a human operator can be included in the vehicle205. The vehicle205can be configured to operate in a plurality of operating modes. The vehicle205can be configured to operate in a fully autonomous (e.g., self-driving, etc.) operating mode in which the vehicle205is controllable without user input (e.g., can drive and navigate with no input from a human operator present in the vehicle205or remote from the vehicle205, etc.). The vehicle205can operate in a semi-autonomous operating mode in which the vehicle205can operate with some input from a human operator present in the vehicle205(or a human operator that is remote from the vehicle205). The vehicle205can enter into a manual operating mode in which the vehicle205is fully controllable by a human operator (e.g., human driver, pilot, etc.) and can be prohibited or disabled (e.g., temporary, permanently, etc.) from performing autonomous navigation (e.g., autonomous driving, flying, etc.). 
The vehicle205can be configured to operate in other modes such as, for example, park or sleep modes (e.g., for use between tasks/actions such as waiting to provide a vehicle service, recharging, etc.). In some implementations, the vehicle205can implement vehicle operating assistance technology (e.g., collision mitigation system, power assist steering, etc.), for example, to help assist the human operator of the vehicle205(e.g., while in a manual mode, etc.). To help maintain and switch between operating modes, the vehicle computing system210can store data indicative of the operating modes of the vehicle205in a memory onboard the vehicle205. For example, the operating modes can be defined by an operating mode data structure (e.g., rule, list, table, etc.) that indicates one or more operating parameters for the vehicle205, while in the particular operating mode. For example, an operating mode data structure can indicate that the vehicle205is to autonomously plan its motion when in the fully autonomous operating mode. The vehicle computing system210can access the memory when implementing an operating mode. The operating mode of the vehicle205can be adjusted in a variety of manners. For example, the operating mode of the vehicle205can be selected remotely, off-board the vehicle205. For example, a remote computing system (e.g., of a vehicle provider, fleet manager, or service entity associated with the vehicle205, etc.) can communicate data to the vehicle205instructing the vehicle205to enter into, exit from, maintain, etc. an operating mode. By way of example, such data can instruct the vehicle205to enter into the fully autonomous operating mode. In some implementations, the operating mode of the vehicle205can be set onboard or near the vehicle205. For example, the vehicle computing system210can automatically determine when and where the vehicle205is to enter, change, maintain, etc. a particular operating mode (e.g., without user input, etc.). Additionally, or alternatively, the operating mode of the vehicle205can be manually selected through one or more interfaces located onboard the vehicle205(e.g., key switch, button, etc.) or associated with a computing device within a certain distance to the vehicle205(e.g., a tablet operated by authorized personnel located near the vehicle205and connected by wire or within a wireless communication range, etc.). In some implementations, the operating mode of the vehicle205can be adjusted by manipulating a series of interfaces in a particular order to cause the vehicle205to enter into a particular operating mode. The operations computing system290A can include multiple components for performing various operations and functions. For example, the operations computing system290A can be configured to monitor and communicate with the vehicle205or its users. This can include overseeing the vehicle205and/or coordinating a vehicle service provided by the vehicle205(e.g., cargo delivery service, passenger transport, etc.). To do so, the operations computing system290A can communicate with the one or more remote computing system(s)290B or the vehicle205through one or more communications network(s) including the communications network(s)220. The communications network(s)220can send or receive signals (e.g., electronic signals, etc.) or data (e.g., data from a computing device, etc.) and include any combination of various wired (e.g., twisted pair cable, etc.) 
or wireless communication mechanisms (e.g., cellular, wireless, satellite, microwave, and radio frequency, etc.) or any desired network topology (or topologies). For example, the communications network220can include a local area network (e.g., intranet, etc.), wide area network (e.g., the Internet, etc.), wireless LAN network (e.g., through Wi-Fi, etc.), cellular network, a SATCOM network, VHF network, a HF network, a WiMAX based network, or any other suitable communications network (or combination thereof) for transmitting data to or from the vehicle205. Each of the one or more remote computing system(s)290B or the operations computing system290A can include one or more processors and one or more memory devices. The one or more memory devices can be used to store instructions that when executed by the one or more processors of the one or more remote computing system(s)290B or operations computing system290A cause the one or more processors to perform operations or functions including operations or functions associated with the vehicle205including sending or receiving data or signals to or from the vehicle205, monitoring the state of the vehicle205, or controlling the vehicle205. The one or more remote computing system(s)290B can communicate (e.g., exchange data or signals, etc.) with one or more devices including the operations computing system290A and the vehicle205through the communications network(s)220. The one or more remote computing system(s)290B can include one or more computing devices such as, for example, one or more devices associated with a service entity (e.g., coordinating and managing a vehicle service), one or more operator devices associated with one or more vehicle providers (e.g., providing vehicles for use by the service entity, etc.), user devices associated with one or more vehicle passengers, developer devices associated with one or more vehicle developers (e.g., a laptop/tablet computer configured to access computer software of the vehicle computing system210, etc.), or other devices. One or more of the devices can receive input instructions from a user or exchange signals or data with an item or other computing device or computing system (e.g., the operations computing system290A, etc.). Further, the one or more remote computing system(s)290B can be used to determine or modify one or more states of the vehicle205including a location (e.g., a latitude and longitude, etc.), a velocity, an acceleration, a trajectory, a heading, or a path of the vehicle205based in part on signals or data exchanged with the vehicle205. In some implementations, the operations computing system290A can include the one or more remote computing system(s)290B. The vehicle computing system210can include one or more computing devices located onboard the autonomous vehicle205. For example, the computing device(s) can be located on or within the autonomous vehicle205. The computing device(s) can include various components for performing various operations and functions. For instance, the computing device(s) can include one or more processors and one or more tangible, non-transitory, computer readable media (e.g., memory devices, etc.). The one or more tangible, non-transitory, computer readable media can store instructions that when executed by the one or more processors cause the vehicle205(e.g., its computing system, one or more processors, etc.) 
to perform operations and functions, such as those described herein for collecting and processing sensor data, performing autonomy functions, predicting object trajectories and generating vehicle motion trajectories (e.g., using a joint prediction/planning framework according to example aspects of the present disclosure), controlling the vehicle205, communicating with other computing systems, etc. The vehicle205can include a communications system215configured to allow the vehicle computing system210(and its computing device(s)) to communicate with other computing devices. The communications system215can include any suitable components for interfacing with one or more network(s)220, including, for example, transmitters, receivers, ports, controllers, antennas, or other suitable components that can help facilitate communication. In some implementations, the communications system215can include a plurality of components (e.g., antennas, transmitters, or receivers, etc.) that allow it to implement and utilize multiple-input, multiple-output (MIMO) technology and communication techniques. The vehicle computing system210can use the communications system215to communicate with one or more computing devices that are remote from the vehicle205over the communication network(s)220(e.g., through one or more wireless signal connections, etc.). As shown inFIG.2, the vehicle computing system210can include the one or more sensors235, the autonomy computing system240, the vehicle interface245, the one or more vehicle control systems250, and other systems, as described herein. One or more of these systems can be configured to communicate with one another through one or more communication channels. The communication channel(s) can include one or more data buses (e.g., controller area network (CAN), etc.), on-board diagnostics connector (e.g., OBD-II, etc.), or a combination of wired or wireless communication links. The onboard systems can send or receive data, messages, signals, etc. amongst one another through the communication channel(s). In some implementations, the sensor(s)235can include one or more LIDAR sensor(s). The sensor(s)235can be configured to generate point data descriptive of a portion of a three hundred and sixty degree view of the surrounding environment. The point data can be three-dimensional LIDAR point cloud data. In some implementations, one or more sensors235for capturing depth information can be fixed to a rotational device in order to rotate the sensor(s) about an axis. The sensor(s)235can be rotated about the axis while capturing data in interval sector packets descriptive of different portions of a three hundred and sixty degree view of a surrounding environment of the autonomous vehicle205. In some implementations, one or more sensors235for capturing depth information can be solid state. In some implementations, the sensor(s)235can include at least two different types of sensor(s). For instance, the sensor(s)235can include at least one first sensor (e.g., the first sensor(s)115, etc.) and at least one second sensor (e.g., the second sensor(s)120, etc.). The at least one first sensor can be a different type of sensor than the at least one second sensor. For example, the at least one first sensor can include one or more image capturing device(s) (e.g., one or more cameras, RGB cameras, etc.). In addition, or alternatively, the at least one second sensor can include one or more depth capturing device(s) (e.g., LIDAR sensor, etc.). 
The at least two different types of sensor(s) can obtain multi-modal sensor data indicative of one or more static or dynamic objects within an environment of the autonomous vehicle205. The sensor(s)235can be configured to acquire sensor data255. The sensor(s)235can be external sensors configured to acquire external sensor data. This can include sensor data associated with the surrounding environment of the vehicle205. The surrounding environment of the vehicle205can include/be represented in the field of view of the sensor(s)235. For instance, the sensor(s)235can acquire image or other data of the environment outside of the vehicle205and within a range or field of view of one or more of the sensor(s)235. This can include different types of sensor data acquired by the sensor(s)235such as, for example, data from one or more LIDAR systems, one or more RADAR systems, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), one or more motion sensors, one or more audio sensors (e.g., microphones, etc.), or other types of imaging capture devices or sensors. The sensor data255can include image data (e.g., 2D camera data, video data, etc.), RADAR data, LIDAR data (e.g., 3D point cloud data, etc.), audio data, or other types of data. The one or more sensors can be located on various parts of the vehicle205including a front side, rear side, left side, right side, top, or bottom of the vehicle205. The vehicle205can also include other sensors configured to acquire data associated with the vehicle205itself. For example, the vehicle205can include inertial measurement unit(s), wheel odometry devices, or other sensors. The sensor data255can be indicative of one or more objects within the surrounding environment of the vehicle205. The object(s) can include, for example, vehicles, pedestrians, bicycles, or other objects. The object(s) can be located in front of, to the rear of, to the side of, above, below the vehicle205, etc. The sensor data255can be indicative of locations associated with the object(s) within the surrounding environment of the vehicle205at one or more times. The object(s) can be static objects (e.g., not in motion, etc.) or dynamic objects, such as other objects (e.g., in motion or likely to be in motion, etc.) in the vehicle's environment, such as people, animals, machines, vehicles, etc. The sensor data255can also be indicative of the static background of the environment. The sensor(s)235can provide the sensor data255to the autonomy computing system240, the remote computing device(s)290B, or the operations computing system290A. In addition to the sensor data255, the autonomy computing system240can obtain map data260. The map data260can provide detailed information about the surrounding environment of the vehicle205or the geographic area in which the vehicle205was, is, or will be located. 
For example, the map data260can provide information regarding: the identity and location of different roadways, road segments, buildings, or other items or objects (e.g., lampposts, crosswalks or curbs, etc.); the location and directions of traffic lanes (e.g., the location and direction of a parking lane, a turning lane, a bicycle lane, or other lanes within a particular roadway or other travel way or one or more boundary markings associated therewith, etc.); traffic control data (e.g., the location and instructions of signage, traffic lights, or other traffic control devices, etc.); obstruction information (e.g., temporary or permanent blockages, etc.); event data (e.g., road closures/traffic rule alterations due to parades, concerts, sporting events, etc.); nominal vehicle path data (e.g., indicative of an ideal vehicle path such as along the center of a certain lane, etc.); or any other map data that provides information that assists the vehicle computing system210in processing, analyzing, and perceiving its surrounding environment and its relationship thereto. In some implementations, the map data260can include high definition map data. In some implementations, the map data260can include sparse map data indicative of a limited number of environmental features (e.g., lane boundaries, etc.). In some implementations, the map data can be limited to geographic area(s) or operating domains in which the vehicle205(or autonomous vehicles generally) can travel (e.g., due to legal/regulatory constraints, autonomy capabilities, or other factors, etc.). The vehicle205can include a positioning system265. The positioning system265can determine a current position of the vehicle205. This can help the vehicle205localize itself within its environment. The positioning system265can be any device or circuitry for analyzing the position of the vehicle205. For example, the positioning system265can determine position by using one or more of inertial sensors (e.g., inertial measurement unit(s), etc.), a satellite positioning system, based on IP address, by using triangulation or proximity to network access points or other network components (e.g., cellular towers, WiFi access points, etc.) or other suitable techniques. The position of the vehicle205can be used by various systems of the vehicle computing system210or provided to a remote computing system. For example, the map data260can provide the vehicle205relative positions of the elements of a surrounding environment of the vehicle205. The vehicle205can identify its position within the surrounding environment (e.g., across six axes, etc.) based at least in part on the map data260. For example, the vehicle computing system210can process the sensor data255(e.g., LIDAR data, camera data, etc.) to match it to a map of the surrounding environment to get an understanding of the vehicle's position within that environment. Data indicative of the vehicle's position can be stored, communicated to, or otherwise obtained by the autonomy computing system240. The autonomy computing system240can perform various functions for autonomously operating the vehicle205. For example, the autonomy computing system240can perform the following functions: perception270A, prediction/forecasting270B, and motion planning270C.
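As a structural sketch of how these three autonomy functions can hand results to one another, the following minimal perception, prediction, and motion planning pipeline may be illustrative; the type names, fields, and placeholder function bodies are assumptions for illustration and are not drawn from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]


@dataclass
class PerceivedObject:           # output of the perception function (270A)
    object_id: int
    position: Point
    velocity_mps: float
    object_class: str


@dataclass
class PredictedObject:           # output of the prediction/forecasting function (270B)
    object_id: int
    predicted_waypoints: List[Point]


def perceive(sensor_data: dict, map_data: dict) -> List[PerceivedObject]:
    """Placeholder perception step: identify objects and their current states."""
    return [PerceivedObject(0, (10.0, 2.0), 7.5, "vehicle")]


def predict(objects: List[PerceivedObject]) -> List[PredictedObject]:
    """Placeholder prediction step: extrapolate each object one second ahead."""
    return [PredictedObject(o.object_id,
                            [(o.position[0] + o.velocity_mps, o.position[1])])
            for o in objects]


def plan(predictions: List[PredictedObject]) -> List[Point]:
    """Placeholder motion planning step: emit waypoints avoiding predicted objects."""
    return [(5.0, 0.0), (10.0, 0.0), (15.0, 0.5)]


# The autonomy computing system chains the three functions over each sensor update.
motion_plan = plan(predict(perceive({}, {})))
print(motion_plan)
```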
For example, the autonomy computing system240can obtain the sensor data255through the sensor(s)235, process the sensor data255(or other data) to perceive its surrounding environment, predict the motion of objects within the surrounding environment, and generate an appropriate motion plan through such surrounding environment. In some implementations, these autonomy functions can be performed by one or more sub-systems such as, for example, a perception system, a prediction/forecasting system, a motion planning system, or other systems that cooperate to perceive the surrounding environment of the vehicle205and determine a motion plan for controlling the motion of the vehicle205accordingly. In some implementations, one or more of the perception, prediction, or motion planning functions270A,270B,270C can be performed by (or combined into) the same system or through shared computing resources. In some implementations, one or more of these functions can be performed through different sub-systems. As further described herein, the autonomy computing system240can communicate with the one or more vehicle control systems250to operate the vehicle205according to the motion plan (e.g., through the vehicle interface245, etc.). For example, in some implementations, the autonomy computing system240can contain an interactive planning system270for joint planning/prediction according to example aspects of the present disclosure. Interactive planning system270can be included as an addition or complement to one or more traditional planning system(s). For instance, in some implementations, the interactive planning system270can implement prediction and motion planning functions270B and270C, while optionally one or more other planning systems can implement other prediction and motion planning functions (e.g., noninteractive functions). In some implementations, prediction and motion planning functions270B and270C can be implemented jointly to provide for interactive motion planning (e.g., motion planning for vehicle205that accounts for predicted interactions of other objects130with the motion plans, etc.). In some implementations, however, interactive planning system270can be configured to provide noninteractive planning (e.g., optionally in addition to interactive planning). In some implementations, interactive planning system270can be configured with variable interactivity, such that the output(s) of interactive planning system270can be adjusted to fully interactive planning, fully noninteractive planning, and one or more configurations therebetween (e.g., interactive planning aspects in a weighted combination with noninteractive planning aspects, etc.). The vehicle computing system210(e.g., the autonomy computing system240, etc.) can identify one or more objects that are within the surrounding environment of the vehicle205based at least in part on the sensor data255or the map data260. The objects perceived within the surrounding environment can be those within the field of view of the sensor(s)235or predicted to be occluded from the sensor(s)235. This can include object(s) not in motion or not predicted to move (static objects) or object(s) in motion or predicted to be in motion (dynamic objects/actors). The vehicle computing system210(e.g., performing the perception function270A, using a perception system, etc.) can process the sensor data255, the map data260, etc. to obtain perception data275A. 
The vehicle computing system210can generate perception data275A that is indicative of one or more states (e.g., current or past state(s), etc.) of one or more objects that are within a surrounding environment of the vehicle205. For example, the perception data275A for each object can describe (e.g., for a given time, time period, etc.) an estimate of the object's: current or past location (also referred to as position); current or past speed/velocity; current or past acceleration; current or past heading; current or past orientation; size/footprint (e.g., as represented by a bounding shape, object highlighting, etc.); class (e.g., pedestrian class vs. vehicle class vs. bicycle class, etc.), the uncertainties associated therewith, or other state information. The vehicle computing system210can utilize one or more algorithms or machine-learned model(s) that are configured to identify object(s) based at least in part on the sensor data255. This can include, for example, one or more neural networks trained to identify object(s) within the surrounding environment of the vehicle205and the state data associated therewith. The perception data275A can be utilized for the prediction function270B of the autonomy computing system240. The vehicle computing system210can be configured to predict a motion of the object(s) within the surrounding environment of the vehicle205. For instance, the vehicle computing system210can generate prediction data275B associated with such object(s). The prediction data275B can be indicative of one or more predicted future locations of each respective object. For example, the prediction function270B can determine a predicted motion trajectory along which a respective object is predicted to travel over time. A predicted motion trajectory can be indicative of a path that the object is predicted to traverse and an associated timing with which the object is predicted to travel along the path. The predicted path can include or be made up of a plurality of waypoints. In some implementations, the prediction data275B can be indicative of the speed or acceleration at which the respective object is predicted to travel along its associated predicted motion trajectory. The vehicle computing system210can utilize one or more algorithms and one or more machine-learned model(s) that are configured to predict the future motion of object(s) based at least in part on the sensor data255, the perception data275A, map data260, or other data. This can include, for example, one or more neural networks trained to predict the motion of the object(s) within the surrounding environment of the vehicle205based at least in part on the past or current state(s) of those objects as well as the environment in which the objects are located (e.g., the lane boundary in which it is travelling, etc.). The prediction data275B can be utilized for the motion planning function270C of the autonomy computing system240, such as in a joint planning/prediction technique implemented by interactive planning system270. The vehicle computing system210can determine a motion plan for the vehicle205based at least in part on the perception data275A, the prediction data275B, or other data. For example, the vehicle computing system210can generate motion planning data275C indicative of a motion plan. The motion plan can include vehicle actions (e.g., speed(s), acceleration(s), other actions, etc.) 
with respect to one or more of the objects within the surrounding environment of the vehicle205as well as the objects' predicted movements. The motion plan can include one or more vehicle motion trajectories that indicate a path for the vehicle205to follow. A vehicle motion trajectory can be of a certain length or time range. A vehicle motion trajectory can be defined by one or more waypoints (with associated coordinates). The waypoint(s) can be future location(s) for the vehicle205. The planned vehicle motion trajectories can indicate the path the vehicle205is to follow as it traverses a route from one location to another. Thus, the vehicle computing system210can take into account a route/route data when performing the motion planning function270C. The vehicle computing system210can implement (e.g., via interactive planning system270) an optimization algorithm, machine-learned model, etc. that considers cost data associated with a vehicle action as well as other objectives (e.g., cost functions, such as cost functions based at least in part on dynamic objects, speed limits, traffic lights, etc.), if any, to determine optimized variables that make up the motion plan. The vehicle computing system210can determine that the vehicle205can perform a certain action (e.g., pass an object, etc.) without increasing the potential risk to the vehicle205or violating any traffic laws (e.g., speed limits, lane boundaries, signage, etc.). For instance, the vehicle computing system210can evaluate the predicted motion trajectories of one or more objects during its cost data analysis to help determine an optimized vehicle trajectory through the surrounding environment. The motion planning function270C can generate cost data associated with such trajectories. In some implementations, one or more of the predicted motion trajectories or perceived objects may not ultimately change the motion of the vehicle205(e.g., due to an overriding factor, etc.). In some implementations, the motion plan can define the vehicle's motion such that the vehicle205avoids the object(s), reduces speed to give more leeway to one or more of the object(s), proceeds cautiously, performs a stopping action, passes an object, queues behind/in front of an object, etc. The vehicle computing system210can be configured to continuously update the vehicle's motion plan and corresponding planned vehicle motion trajectories. For example, in some implementations, the vehicle computing system210can generate new motion planning data275C (e.g., motion plan(s)) for the vehicle205(e.g., multiple times per second, etc.). Each new motion plan can describe a motion of the vehicle205over the next planning period (e.g., waypoint(s)/locations(s) over the next several seconds, etc.). Moreover, a motion plan can include a planned vehicle motion trajectory. The motion trajectory can be indicative of the future planned location(s), waypoint(s), heading, velocity, acceleration, etc. In some implementations, the vehicle computing system210can continuously operate to revise or otherwise generate a short-term motion plan based on the currently available data. Once the optimization planner has identified the optimal motion plan (or some other iterative break occurs), the optimal motion plan (and the planned motion trajectory) can be selected and executed by the vehicle205. The vehicle computing system210can cause the vehicle205to initiate a motion control in accordance with at least a portion of the motion planning data275C. 
A motion control can be an operation, action, etc. that is associated with controlling the motion of the vehicle205. For instance, the motion planning data275C can be provided to the vehicle control system(s)250of the vehicle205. The vehicle control system(s)250can be associated with a vehicle interface245that is configured to implement a motion plan. The vehicle interface245can serve as an interface/conduit between the autonomy computing system240and the vehicle control systems250of the vehicle205and any electrical/mechanical controllers associated therewith. The vehicle interface245can, for example, translate a motion plan into instructions for the appropriate vehicle control component (e.g., acceleration control, brake control, steering control, etc.). By way of example, the vehicle interface245can translate a determined motion plan into instructions to adjust the steering of the vehicle205by a certain number of degrees, apply a certain magnitude of braking force, increase/decrease speed, etc. The vehicle interface245can help facilitate the responsible vehicle control (e.g., braking control system, steering control system, acceleration control system, etc.) to execute the instructions and implement a motion plan (e.g., by sending control signal(s), making the translated plan available, etc.). This can allow the vehicle205to autonomously travel within the vehicle's surrounding environment. The vehicle computing system210can store other types of data. For example, an indication, record, or other data indicative of the state of the vehicle (e.g., its location, motion trajectory, health information, etc.), the state of one or more users (e.g., passengers, operators, etc.) of the vehicle, or the state of an environment including one or more objects (e.g., the physical dimensions or appearance of the one or more objects, locations, predicted motion, etc.) can be stored locally in one or more memory devices of the vehicle205. Additionally, the vehicle205can communicate data indicative of the state of the vehicle, the state of one or more passengers of the vehicle, or the state of an environment to a computing system that is remote from the vehicle205, which can store such information in one or more memories remote from the vehicle205. Moreover, the vehicle205can provide any of the data created or stored onboard the vehicle205to another vehicle. The vehicle computing system210can include or otherwise be in communication with the one or more vehicle user devices280. For example, the vehicle computing system210can include, or otherwise be in communication with, one or more user devices with one or more display devices located onboard the vehicle205. A display device (e.g., screen of a tablet, laptop, smartphone, etc.) can be viewable by a user of the vehicle205that is located in the front of the vehicle205(e.g., driver's seat, front passenger seat, etc.). Additionally, or alternatively, a display device can be viewable by a user of the vehicle205that is located in the rear of the vehicle205(e.g., a back passenger seat, etc.). The user device(s) associated with the display devices can be any type of user device such as, for example, a tablet, mobile phone, laptop, etc. The vehicle user device(s)280can be configured to function as human-machine interfaces. For example, the vehicle user device(s)280can be configured to obtain user input, which can then be utilized by the vehicle computing system210or another computing system (e.g., a remote computing system, etc.).
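To picture the translation step described above, the sketch below converts the next waypoint of a planned trajectory into steering, throttle, and brake requests; the proportional gains, command fields, and coordinate convention are illustrative assumptions rather than the disclosure's vehicle interface.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ControlCommand:
    steering_angle_rad: float   # translated steering adjustment
    throttle: float             # 0..1 acceleration request
    brake: float                # 0..1 braking force request


def plan_to_command(
    waypoints: List[Tuple[float, float]],   # planned trajectory in the vehicle frame
    current_speed_mps: float,
    target_speed_mps: float,
    steering_gain: float = 0.5,
    speed_gain: float = 0.3,
) -> ControlCommand:
    """Translate the next waypoint of a motion plan into low-level control inputs."""
    next_x, next_y = waypoints[0]
    # Heading error toward the next waypoint (vehicle frame: x forward, y left).
    steering = steering_gain * math.atan2(next_y, max(next_x, 1e-3))
    speed_error = target_speed_mps - current_speed_mps
    throttle = max(0.0, min(1.0, speed_gain * speed_error))
    brake = max(0.0, min(1.0, -speed_gain * speed_error))
    return ControlCommand(steering, throttle, brake)


# Example: gently steer left toward (5.0, 0.4) while speeding up from 8 to 10 m/s.
print(plan_to_command([(5.0, 0.4), (10.0, 0.9)], 8.0, 10.0))
```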
For example, a user (e.g., a passenger for transportation service, a vehicle operator, etc.) of the vehicle205can provide user input to adjust a destination location of the vehicle205. The vehicle computing system210or another computing system can update the destination location of the vehicle205and the route associated therewith to reflect the change indicated by the user input. As described herein, with reference to the remaining figures, the autonomy computing system240can utilize one or more machine-learned models to perform the perception270A, prediction270B, or motion planning270C functions. The machine-learned model(s) can be previously trained through one or more machine-learned techniques. The machine-learned models can be previously trained by the one or more remote computing system(s)290B, the operations computing system290A, or any other device (e.g., remote servers, training computing systems, etc.) remote from or onboard the vehicle205. For example, the one or more machine-learned models can be learned by a training computing system over training data stored in a training database. The training data can include, for example, sequential sensor data indicative of an environment (and objects/features within) at different time steps. In some implementations, the training data can include a plurality of environments previously recorded by the autonomous vehicle with one or more objects, static object(s) or dynamic object(s). To help improve the performance of an autonomous platform, such as an autonomous vehicle ofFIG.2, the technology of the present disclosure generally provides for implementing an interactive planning system270. In particular, example aspects of the present disclosure provide for a structured deep model (e.g., a structured machine-learned model) that uses a set of learnable costs across a set of future (e.g., possible) object trajectories. In some aspects, the set of learnable costs can induce a joint probability distribution over the set of future object trajectories (e.g., a distribution of probabilities for each of the set of future object trajectories, such as a set of probabilities for each of the set of future object trajectories conditioned on the vehicle motion trajectory of the autonomous vehicle). In this manner, for example, the interactive planning system270can jointly predict object motion (e.g., using the probability information) and plan vehicle motion (e.g., according to the costs). In some implementations, an interactive planning system270can implement interactive planning or noninteractive planning, as well as combinations thereof. For example,FIG.3Aillustrates an ego-actor, such as autonomous vehicle300, traversing a lane of a roadway. It might be desired for the autonomous vehicle300to change lanes to move into the other lane302(e.g., by following one or more vehicle motion trajectories304). However, the autonomous vehicle300is sharing the roadway with objects312,314, and316(e.g., other actors). And it can be predicted (e.g., by prediction function270B) that object312will continue moving forward in lane302along object trajectory320and maintain the same distance behind vehicle314, which may not leave sufficient room for autonomous vehicle300to maneuver into lane302while meeting other constraints (e.g., buffer space constraints, etc.). Based on this prediction, for example, the autonomous vehicle300can choose one of the motion trajectories304that does not interfere with the object312on the object trajectory320(e.g., as illustrated inFIG.3B). 
In some scenarios, the other objects312,314, and316, absent an external factor, might never move in such a way as to permit the autonomous vehicle300to ever obtain sufficient space (e.g., between objects312and314) to change lanes. For instance, object312might never have any interaction with any motion of autonomous vehicle300(e.g., never cooperatively adapt to the motion of the autonomous vehicle300). But in some scenarios, the object312might interact with a motion of the autonomous vehicle300in such a way as to open up space in the lane302. FIGS.4A and4Billustrate one scenario. For instance, in various implementations, the autonomous vehicle300can consider at least the illustrated vehicle motion trajectories402,404, and406as potential vehicle motion trajectories. Using an interactive planning system270, the autonomous vehicle300can predict a first probability that the other object312might traverse trajectory412if the autonomous vehicle300traverses trajectory402. Similarly, the autonomous vehicle300can predict a second probability that the other object312might traverse trajectory414if the autonomous vehicle300traverses trajectory404, and can predict a third probability that the other object312might traverse trajectory416if the autonomous vehicle300traverses trajectory406. Based at least in part on the predicted probabilities, the autonomous vehicle300can determine that traversing vehicle motion trajectory402will be associated with the object312traversing a trajectory412that permits sufficient space for the autonomous vehicle300to change lanes into lane302. In this manner, for instance, the autonomous vehicle300can account for the object312's interaction with the autonomous vehicle300's traversal of trajectory402. By accounting for the interactions of other objects with the potential motions of the autonomous vehicle300, the autonomous vehicle300can expand its set of possible trajectories to include trajectories that “nudge” or otherwise interact with other objects to achieve a goal (e.g., changing lanes, turning through traffic, merging, etc.). FIG.5depicts a diagram of an example system500for performing joint planning/prediction according to example aspects of the present disclosure. The example system500contains a trajectory planner510configured to accept inputs520and generate outputs530. The outputs530can include data descriptive of one or more vehicle motion trajectories (e.g., motion plan data275C). To generate the outputs530, the trajectory planner510can implement one or more costs to determine a preferred output (e.g., a vehicle motion trajectory meeting desired criteria, etc.). The costs can include, for example, autonomous vehicle (AV) cost(s)511, object cost(s)512, and interaction cost(s)514. The trajectory planner510can implement one or more prediction models516to generate one or more vehicle motion trajectories (e.g., for computation of the AV cost(s)511) and to generate one or more object trajectories and probabilities associated with the one or more object trajectories (e.g., for computation of the object cost(s)512and/or the interaction cost(s)514). The trajectory planner510can also implement goal(s)518, which can be used to determine a preferred output based on the output(s) capacity to meet one or more of the goals518. In some implementations, the trajectory planner510implements a structured machine-learned framework for joint prediction/planning using AV cost(s)511, object cost(s)512, and interaction cost(s)514. 
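The lane-change reasoning of FIGS. 4A and 4B can be sketched as scoring each candidate vehicle motion trajectory by the probability that the interacting object yields enough space; the probability table, costs, and penalty below are invented for illustration and stand in for the learned conditional predictions.

```python
# Hypothetical conditional predictions: for each candidate AV trajectory,
# the probability that object 312 follows a trajectory leaving a lane-change gap.
# These numbers are illustrative only; in the disclosure they would come from the
# joint prediction/planning model, conditioned on the AV trajectory.
p_gap_given_av_trajectory = {
    "trajectory_402": 0.78,   # nudge forward; the object is likely to yield
    "trajectory_404": 0.31,
    "trajectory_406": 0.12,
}

# Illustrative per-trajectory costs for the AV itself (comfort, progress, etc.).
av_cost = {"trajectory_402": 1.4, "trajectory_404": 1.1, "trajectory_406": 0.9}

# Penalty applied when the lane-change goal is expected to fail.
GOAL_FAILURE_COST = 5.0


def expected_cost(name: str) -> float:
    p_gap = p_gap_given_av_trajectory[name]
    return av_cost[name] + (1.0 - p_gap) * GOAL_FAILURE_COST


best = min(av_cost, key=expected_cost)
print(best)  # trajectory_402: its higher own cost is offset by the likely interaction
```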
For example, a trajectory planner510can implement a structured machine-learned model for representing a value associated with a plurality of possible vehicle motion trajectories and object trajectories. For example, each of the autonomous vehicle300, object312, object314, and object316can be respectively associated with one or more trajectories. In some implementations, each of the autonomous vehicle and any objects in the environment of the autonomous vehicle is respectively associated with a plurality of trajectories (e.g., a distribution of trajectories, such as a continuous and/or discrete distribution). In various implementations, the respective plurality of trajectories can be structured according to a priori understandings of realistic trajectories (e.g., using knowledge about how the autonomous vehicle and/or various objects can or are expected to move through space to limit a search space of trajectories to physically possible and/or other nontrivial subsets of all trajectories). In some implementations, the respective plurality of trajectories can be constructed to include, for instance, a sampled set of realistic trajectories (e.g., output by a realistic trajectory sampler). For instance, the plurality of trajectories can include (optionally continuous) trajectories composed of lines, curves (e.g., circular curves), spirals (e.g., Euler spirals), etc. In this manner, for example, the plurality of trajectories can contain a distribution of more physically realistic and human-interpretable trajectories. For example, in some implementations, one or more of the prediction model(s)516can receive one or more inputs (e.g., context data, state data, etc., such as present and/or past state and/or context data measured and/or predicted for one or more objects) and generate a distribution of trajectories (e.g., object trajectories, vehicle motion trajectories, etc.). The prediction model(s)516can receive one or more inputs for each of the autonomous vehicle and a plurality of objects and output a tailored distribution of object trajectories for each of the plurality of objects and a tailored distribution of vehicle motion trajectories for the autonomous vehicle. In some implementations, the AV cost(s)511can include costs associated with any vehicle motion trajectory for the autonomous vehicle. In some implementations, a respective AV cost511can encode a score or other value for traversing a trajectory for the autonomous vehicle. In some implementations, AV cost(s)511can be computed for a distribution of trajectories for the autonomous vehicle (e.g., the cost(s) computed for each trajectory within the distribution, etc.). In some embodiments, the AV cost(s)511include a learnable cost based on context data (e.g., state data) for the autonomous vehicle. In some implementations, the object cost(s)512can include costs associated with any trajectory for any object in an environment. In some implementations, a respective object cost512can encode a score or other value for traversing a trajectory for a respective object. In some implementations, object cost(s)512can be computed for a distribution of trajectories for a given object (e.g., the cost(s) computed for each trajectory within the distribution, etc.). In some embodiments, the object cost(s)512for an object include a learnable cost based on context data (e.g., state data) for the object. In some implementations, the object cost(s)512can be or otherwise include an expected value. 
In some implementations, the object cost(s)512can be or otherwise include an expectation over a distribution of trajectories for a given object conditioned on the motion of the autonomous vehicle and/or on context data for the object and/or other objects. For instance, the expectation can correspond to a probability of an object traversing one or more object trajectories if the autonomous vehicle traverses a given potential vehicle motion trajectory. In some implementations, the expectation can correspond to a probability of an object traversing one or more object trajectories if the autonomous vehicle traverses a given potential vehicle motion trajectory and if other objects traverse a particular combination of object trajectories. In some implementations, the interaction cost(s)514can include costs associated with any set of two or more objects or with a pairing of the autonomous vehicle and a set of one or more objects. For instance, two or more objects, and/or a pairing of the autonomous vehicle and a set of one or more objects, can be associated with trajectories (e.g., object trajectories, vehicle motion trajectories, etc.) that have a potential interaction (e.g., overlap or proximity in time or space, such as contact or a near miss). In some embodiments, for instance, the interaction cost(s)514can encode a score or other value for the two or more objects, and/or a pairing of the autonomous vehicle and a set of one or more objects, respectively executing trajectories having the potential interaction. In some embodiments, the interaction cost(s)514includes a learnable cost, such as a learnable cost based on context and/or state data for the autonomous vehicle and/or the object(s). In some implementations, the interaction cost(s)514can be or otherwise include an expected value. In some implementations, the interaction cost(s)514can be or otherwise include an expectation over a distribution of interacting trajectories for a given set of objects conditioned on the motion of the autonomous vehicle. For instance, the expectation can correspond to a probability of a set of objects traversing one or more object trajectories if the autonomous vehicle traverses a given potential vehicle motion trajectory. In some implementations, the trajectory planner510implements one or more goals518. The goal(s)518can be a cost, such as a score or other value used to influence the determination of one or more trajectories (e.g., one or more vehicle motion trajectories for the autonomous vehicle). For instance, a goal can take on different forms depending on the scenario: in the case of a turn, a goal $\mathcal{G}$ can be a target position; in the case of a lane change, $\mathcal{G}$ can be a polyline representing the centerline of the lane in continuous coordinates, etc. In some implementations, the score can include a distance (e.g., an $\ell_2$ distance) to a goal waypoint (e.g., a final waypoint). In some implementations (e.g., when $\mathcal{G}$ is a polyline), the score can include a projected distance (e.g., an average projected distance) to $\mathcal{G}$. In some implementations, the trajectory planner510can determine one or more output(s)530by combining a plurality of costs (e.g., AV cost(s)511, object cost(s)512, interaction cost(s)514, goal(s)518, etc.). For instance, the trajectory planner510can linearly combine (e.g., add, subtract, etc.) a plurality of costs/scores to obtain a total cost (e.g., for determining one or more outputs530).
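A minimal sketch of the two goal scores mentioned above may help: an $\ell_2$ distance to a goal waypoint (e.g., for a turn) and an average projected distance to a lane-centerline polyline (e.g., for a lane change); the segment-projection helper is an assumed, standard way to compute the projected distance.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]


def l2_goal_score(trajectory: List[Point], goal_waypoint: Point) -> float:
    """Distance from the trajectory's final waypoint to a target position."""
    (x, y), (gx, gy) = trajectory[-1], goal_waypoint
    return math.hypot(x - gx, y - gy)


def _point_to_segment(p: Point, a: Point, b: Point) -> float:
    """Distance from point p to segment ab (used to project onto a polyline)."""
    ax, ay, bx, by, px, py = *a, *b, *p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))


def projected_goal_score(trajectory: List[Point], centerline: List[Point]) -> float:
    """Average projected distance of the trajectory to a lane-centerline polyline."""
    dists = [
        min(_point_to_segment(p, centerline[i], centerline[i + 1])
            for i in range(len(centerline) - 1))
        for p in trajectory
    ]
    return sum(dists) / len(dists)


# Example: a short trajectory scored against a target position and a centerline.
traj = [(0.0, 0.0), (2.0, 0.5), (4.0, 1.5)]
print(l2_goal_score(traj, (4.0, 2.0)))
print(projected_goal_score(traj, [(0.0, 2.0), (5.0, 2.0)]))
```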
In some implementations, the combination can be a weighted combination of cost(s), with weights corresponding to one or more probabilities (e.g., conditional probabilities). For example, in some implementations, a plurality of costs can be combined for one or more possible trajectories (e.g., potential vehicle motion trajectories for the autonomous vehicle, predicted object trajectories, etc.) to determine a total cost. In some implementations, a plurality of costs can be combined for each of a plurality of object trajectories for each of a plurality of objects (e.g., conditioned on each of a plurality of vehicle motion trajectories). In some implementations, a linear combination of the cost(s) (e.g., AV cost(s)511, object cost(s)512, interaction cost(s)514, etc.) can include variable weights applied to each. For example, a weight applied to the AV cost(s)511and/or object cost(s)512can emphasize the influence of the individual-specific cost contribution on the combination. In some implementations, a weight applied to the interaction cost(s) can emphasize the influence of the AV-object and/or object-object interactions (e.g., contact, near misses, etc.) on the combined cost(s). In some implementations, a total cost can correspond to or otherwise include a machine-learned expectation of a system energy. For instance, a system energy can be constructed for the autonomous vehicle and any objects in an environment. The system energy can include individual component(s) (e.g., AV-specific, object-specific components) descriptive of trajectories for the autonomous vehicle and/or each of the plurality of objects as well as interaction energy component(s). The interaction energy component(s) can be descriptive of interactions (e.g., projected interactions, likely interactions, etc.) between the autonomous vehicle and one or more objects of the plurality of objects for respective interacting trajectories of the autonomous vehicle and the one or more objects. The interaction energy component(s) can be descriptive of interactions (e.g., projected interactions, likely interactions, etc.) between two or more objects of the plurality of objects for respective interacting trajectories of the two or more objects. The system energy can also include goal energies for the autonomous vehicle's goals. In some implementations, the trajectory planner510can provide for joint prediction and planning by determining an expected value of the system energy. The expectation of the system energy can provide at least in part for a probability distribution (e.g., a joint probability distribution) over the future trajectories of the plurality of objects and the vehicle motion trajectories for the autonomous vehicle. The expectation can be conditional, such as conditioned on a vehicle motion trajectory for the autonomous vehicle and/or on a set of contextual data. In some implementations, an individual component of the machine-learned expectation of a system energy can correspond to the object cost(s)512. In some implementations, an interaction energy component can correspond to the interaction cost(s)514. In some implementations, the output(s)530can include a vehicle motion trajectory for the autonomous vehicle, and the vehicle motion trajectory can be determined according to an objective based on a plurality of costs. The objective can be based on a joint probability distribution for a plurality of object trajectories given some context data and one or more (e.g., a plurality of) potential vehicle motion trajectories.
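The weighted linear combination of cost terms described above might look like the following sketch, where the weights on the individual, interaction, and goal contributions are free (possibly learnable) parameters; the function name, arguments, and example values are assumptions for illustration.

```python
from typing import List


def total_cost(
    av_cost: float,                      # AV-specific cost for a candidate trajectory
    object_costs: List[float],           # expected per-object costs given that trajectory
    interaction_costs: List[float],      # expected AV-object / object-object interaction costs
    goal_cost: float,                    # goal score for the candidate trajectory
    w_individual: float = 1.0,           # emphasizes individual-specific contributions
    w_interaction: float = 1.0,          # emphasizes interaction (contact / near-miss) terms
    w_goal: float = 1.0,
) -> float:
    """Weighted linear combination of the planner's cost terms."""
    return (
        w_individual * (av_cost + sum(object_costs))
        + w_interaction * sum(interaction_costs)
        + w_goal * goal_cost
    )


# Example: a candidate trajectory with two nearby objects and one potential interaction.
print(total_cost(av_cost=1.2, object_costs=[0.4, 0.7],
                 interaction_costs=[0.9], goal_cost=2.0,
                 w_interaction=1.5))
```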
For example, in some implementations, the trajectory planner510can generate K trajectories, such as can be expressed by, for instance, $\mathcal{Y}=\{y_0, y_1, \ldots, y_N\}$ (e.g., 1 autonomous vehicle and N objects), where each $y_i$ can be considered a discrete random variable that can take on one of K options (e.g., corresponding to one of the K trajectories). Accordingly, in some implementations, the distribution over the trajectories can be expressed as

$$p(\mathcal{Y}\mid\mathcal{X};w)=\frac{1}{Z}\exp\bigl(-C(\mathcal{Y}\mid\mathcal{X};w)\bigr)\tag{1}$$

where Z is the partition function and C expresses a system energy of the future trajectories $\mathcal{Y}$ conditioned on $\mathcal{X}$ and parametrized by weights w (e.g., learnable weights w). In some implementations, contextual data can include past trajectories (e.g., for each object, for the autonomous vehicle, etc.), LiDAR sweeps, map data (e.g., high-definition map data, birds-eye view imagery, etc.), optionally in a voxelized tensor representation. The system energy can, for example, be expressed in some implementations with an individual energy component and an interaction energy component:

$$C(\mathcal{Y}\mid\mathcal{X};w)=\sum_{i=0}^{N}C_{\mathrm{traj}}(y_i,\mathcal{X};w)+\sum_{i,j}C_{\mathrm{inter}}(y_i,y_j)\tag{2}$$

where $C_{\mathrm{traj}}$ can represent an individual energy component based on the trajectories $y_i$ and $C_{\mathrm{inter}}$ can represent an interaction energy component descriptive of interactions (e.g., if any) arising from the traversal of trajectories $y_i$ and $y_j$ by the autonomous vehicle or the respective object. The summation (e.g., over 0 to N) can, in some implementations, represent a system energy which can be learned with, for example, parameters w (although not illustrated in Equation 2, $C_{\mathrm{inter}}$ can also contain a set of learnable parameters, which can be the same as or different than w). In some implementations, the components for the autonomous vehicle can correspond to a different set of parameters than w, or to its own subset of parameters in w. In some implementations, the individual energy component can receive a set of context data as an input. For instance, the individual energy component can be computed using one or more machine-learned models (e.g., neural networks, such as a convolutional or other neural network). In some implementations, the machine-learned models can generate one or more feature maps from the contextual data. For example, the contextual data can include rasterized data, such as a two-dimensional tensor grid (e.g., a two-dimensional tensor grid of overhead imagery of the environment), and one or more machine-learned models can generate a feature map (e.g., a spatial feature map). In some implementations, the spatial feature map generated from $\mathcal{X}$ can be combined with the input trajectories $y_i$ and processed through one or more other machine-learned layers or models to output object-specific energy values (e.g., values for summation). In some implementations, the interaction component(s) can be computed using one or more machine-learned models. In some implementations, the interaction component(s) can be computed using one or more policies, criteria, or algorithms. For instance, an interaction component can be constructed to include a collision energy. A collision energy can include an energy value based on whether two input trajectories (e.g., $y_i$, $y_j$) might cause the respective objects and/or autonomous vehicle traversing the trajectories to come into contact. In some implementations, a collision energy can be a continuous function based on the likelihood and/or proximity to or avoidance of contact between the autonomous vehicle and an object and/or an object and another object.
In some implementations, a collision energy can be a discrete function (e.g., a value of γ if contact, another value if not, such as 0, etc.). An interaction component can be constructed to include a buffer energy. A buffer energy can include an energy value based on whether the autonomous vehicle and/or objects respectively traversing two input trajectories (e.g., $y_i$, $y_j$) pass within a given buffer (e.g., within a given proximity threshold, etc.). In some implementations, a buffer energy can be a continuous or piecewise continuous function based on the likelihood and/or amount of a violation of the given buffer distance. For instance, a buffer energy can be expressed as the amount of distance in violation of the buffer (e.g., the distance past the set threshold, the square of the distance, etc.). In some implementations, a buffer energy can be a discrete function (e.g., a value of γ if a violation, another value if not, such as 0, etc.). The buffer distance can be evaluated, in some implementations, based on a distance from a bounding box of the autonomous vehicle and/or an object. In some implementations, the distance can be evaluated based on a distance from the center point of the autonomous vehicle or an object to the polygon of another object (e.g., minimal point-to-polygon distance). In some implementations, an expectation of the system energy can be used as an objective for determining a vehicle motion trajectory for the autonomous vehicle that minimizes the expected value of the system energy. For instance, in some implementations, an objective for the trajectory planner510can be expressed as

$$y_0^*=\operatorname*{argmin}_{y_0} f(y_0,\mathcal{X};w)\tag{3}$$

where $y_0^*$ is a vehicle motion trajectory determined for the autonomous vehicle by the trajectory planner510and

$$f(y_0,\mathcal{X};w)=\mathbb{E}_{\mathcal{Y}_r\sim p(\mathcal{Y}_r\mid y_0,\mathcal{X};w)}\bigl[C(\mathcal{Y}\mid\mathcal{X};w)\bigr]\tag{4}$$

where $p(\mathcal{Y}_r\mid y_0,\mathcal{X};w)$ describes the future distribution of the objects conditioned on the one or more potential trajectories $y_0$ for the autonomous vehicle (e.g., $\mathcal{Y}_r$ indicating the trajectories for the objects). The expectation of Equation 4 can be expressed in component form as

$$C_{\mathrm{traj}}(y_0,\mathcal{X};w)+\mathbb{E}_{\mathcal{Y}_r\sim p(\mathcal{Y}_r\mid y_0,\mathcal{X};w)}\Bigl[\sum_{i=1}^{N}C_{\mathrm{inter}}(y_0,y_i)+\sum_{i=1}^{N}C_{\mathrm{traj}}(y_i,\mathcal{X};w)+\sum_{i=1,\,j=1}^{N,\,N}C_{\mathrm{inter}}(y_i,y_j)\Bigr]\tag{5}$$

where the autonomous vehicle individual energy component is expressed outside the expectation (e.g., because the trajectory planner510determines the vehicle motion trajectory, and thus the actual executed energy value, for the autonomous vehicle, but does not control the plurality of objects). The output(s)530can include a selected trajectory for traversal by the autonomous vehicle. In this manner, for instance, a planning objective implemented by the trajectory planner510can jointly provide for interactive planning and prediction, by planning a trajectory for the autonomous vehicle (e.g., $y_0^*$) based on a system energy that accounts for the expected interactions of objects with the selected trajectory $y_0^*$. In some implementations, the selected trajectory can be further processed (e.g., by a vehicle interface245, etc.) for implementation by the autonomous vehicle as a motion plan. In some implementations, the selected trajectory can be accessed by the trainer540for training one or more machine-learned components of the trajectory planner510. For instance, one or more parameters (e.g., parameters w) can be updated by the trainer540based at least in part on the selected trajectory of the output(s)530(e.g., in comparison with a reference, such as a ground truth reference).
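To make Equations (1) through (5) concrete at toy scale, the sketch below enumerates a few sampled trajectories, forms the conditional distribution of Equation (1) from a system energy in the spirit of Equation (2) (with a buffer-plus-collision interaction energy), and evaluates the expected cost of Equations (4) and (5) for each candidate vehicle motion trajectory before taking the argmin of Equation (3); the trajectories, energies, buffer threshold, and penalty are fabricated for illustration.

```python
import math
from typing import List, Tuple

Traj = List[Tuple[float, float]]

BUFFER_M = 2.0      # desired buffer distance (illustrative)
COLLISION_M = 0.5   # contact threshold (illustrative)
GAMMA = 10.0        # discrete collision penalty (illustrative)


def interaction_energy(a: Traj, b: Traj) -> float:
    """C_inter: buffer plus collision energy over time-aligned waypoints."""
    e = 0.0
    for (ax, ay), (bx, by) in zip(a, b):
        d = math.hypot(ax - bx, ay - by)
        if d < COLLISION_M:
            e += GAMMA                          # discrete collision energy
        elif d < BUFFER_M:
            e += (BUFFER_M - d) ** 2            # continuous buffer violation
    return e


# Toy sampled trajectories: two candidates for the AV (y0) and two for one object (y1).
av_trajs: List[Traj] = [[(0, 0), (1, 0), (2, 0)],       # keep lane
                        [(0, 0), (1, 1), (2, 2)]]       # nudge toward the object's lane
obj_trajs: List[Traj] = [[(0, 2), (1, 2), (2, 2)],      # object holds its lane
                         [(0, 2), (1, 3), (2, 4)]]      # object yields outward
c_traj_av = [0.5, 0.8]        # individual energies C_traj(y0, X; w) (illustrative)
c_traj_obj = [0.4, 0.9]       # individual energies C_traj(y1, X; w) (illustrative)


def expected_cost(i_av: int) -> float:
    """f(y0) from Eq. (4)/(5): AV energy plus expectation over p(y1 | y0) from Eq. (1)."""
    energies = [c_traj_obj[j] + interaction_energy(av_trajs[i_av], obj_trajs[j])
                for j in range(len(obj_trajs))]
    z = sum(math.exp(-e) for e in energies)                 # partition function Z
    probs = [math.exp(-e) / z for e in energies]            # Eq. (1), conditioned on y0
    return c_traj_av[i_av] + sum(p * e for p, e in zip(probs, energies))


best = min(range(len(av_trajs)), key=expected_cost)         # Eq. (3): argmin over y0
print(best, [round(expected_cost(i), 3) for i in range(len(av_trajs))])
```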
In some implementations, the trainer540can determine one or more losses for updating the trajectory planner510. The losses can include, for example, a comparison between one or more reference datasets descriptive of the motions through an environment of an autonomous vehicle and objects in an environment, and an output530of the trajectory planner510(e.g., an output selected trajectory for the autonomous vehicle, one or more probabilities of the object trajectories, etc.). In some implementations, the trainer540is configured to update the trajectory planner510(e.g., one or more machine-learned components of the planner510, such as the AV cost(s)511, the object costs512, the interaction costs514, the energies associated therewith, etc.) to induce both a desired vehicle motion trajectory for the autonomous vehicle and desired probabilities for the object behaviors. Since the machine-learned expectation of the system energy can induce a probability distribution over the trajectories for the plurality of objects, in some implementations, a loss over a predictive distribution and the reference trajectories can provide for learning a set of costs for joint interactive planning/prediction. In some implementations, a cross-entropy loss can be used. In some implementations, the loss can include an individual loss component and an interaction loss component. For instance, in some implementations, the loss can include, for each component, a value based on the probability induced by the expectation (e.g., as in Equation 1) for a respective set of trajectories. For example, the loss can include a log loss $\log p(y_i\mid\mathcal{X};w)$. In some implementations, the loss is counted only for those trajectories $y_i$ that diverge from reference trajectories $y_{\mathrm{g.t.}}$. In some implementations, the loss is counted only for those trajectories $y_i$ that diverge from reference trajectories $y_{\mathrm{g.t.}}$ by a specified amount. For instance, the loss can be determined over a subset of the plurality of trajectories for a respective object (or, e.g., for the autonomous vehicle), where the subset is configured to exclude one or more of the predicted trajectories for that respective object (or the autonomous vehicle) that are within a tolerance distance of a corresponding reference trajectory. In some implementations, for instance, trajectories within the tolerance distance can be considered a reference equivalent (e.g., close enough, such as still within the same lane or other course of travel as the reference, such as within an inconsequential variation from a reference path along a travel way, etc.). In this manner, for example, such reference equivalents might not be penalized by the trainer540. FIG.6Adepicts example system arrangements for some implementations of the trajectory planner510. Inputs520can include, for example, sensor data (e.g., sensor data255), map data (e.g., map data260), and historical data601(e.g., data descriptive of one or more past states of any or all of a plurality of objects in an environment). In some implementations, historical data601can include a trajectory history of the objects, including their bounding box widths/heights and headings (e.g., optionally transformed into coordinates in the autonomous vehicle's reference frame). The input data can form contextual data for an autonomous vehicle in an environment. Using inputs520, the trajectory planner510can implement an individual trajectory evaluator610, an interaction trajectory evaluator620, and one or more prediction models516to determine one or more objective(s)630.
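One way to read the loss description above is as a negative log-likelihood over the induced distribution in which samples within the tolerance distance of the reference are treated as reference equivalents and excluded; the sketch below assumes plain per-trajectory energies and displacement errors, with all values invented.

```python
import math
from typing import List


def nll_loss(
    energies: List[float],             # C(y_k | X; w) for the K sampled trajectories of one actor
    errors_to_reference: List[float],  # displacement of each sampled trajectory from y_g.t.
    reference_index: int,              # index of the sample closest to the reference trajectory
    tolerance_m: float = 0.5,          # samples within this distance count as reference equivalents
) -> float:
    """Negative log-probability of the reference trajectory under the induced distribution,
    with reference-equivalent samples excluded from the normalization (so they are not penalized)."""
    kept = [
        e for i, (e, err) in enumerate(zip(energies, errors_to_reference))
        if err > tolerance_m or i == reference_index
    ]
    z = sum(math.exp(-e) for e in kept)                    # partition over kept samples
    return energies[reference_index] + math.log(z)         # -log p(y_ref)


# Example: three sampled trajectories; the first is within tolerance of the reference
# (a reference equivalent) and is therefore excluded from the normalization.
print(round(nll_loss([1.2, 0.3, 2.0], [0.3, 0.1, 3.5], reference_index=1), 3))
```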
The individual trajectory evaluator610can obtain a plurality of potential trajectories611for the autonomous vehicle (e.g., a plurality of candidate vehicle motion trajectories for the autonomous vehicle) and/or for objects in an environment, etc. The plurality of trajectories611can be obtained, for example, from a trajectory generator616configured to generate one or more trajectories for the autonomous vehicle and/or each of a plurality of objects. The trajectory generator616can, for example, generate a continuous distribution of trajectories611, and/or sample trajectories611to obtain a discrete selection of individually continuous trajectories611(e.g., trajectories structured according to a priori understandings of realistic trajectories for the autonomous vehicle and/or respective objects). For example, in some implementations, the trajectory generator616can include a discrete trajectory sampler. In some implementations, the sampler estimates the initial speed and/or heading of an object given a provided past trajectory. From these values, the sampler can, in some implementations, sample from various trajectory modes corresponding to a priori understandings of how various objects are known or otherwise expected to travel through an environment. For instance, in some implementations, trajectory modes can include a straight line, a circular trajectory, or a spiral trajectory. In some implementations, each mode can correspond to a different probability. Within each mode, control parameters such as radius, acceleration can be uniformly sampled within a range to generate a sampled trajectory. The trajectories611can be input into a spatial model612(e.g., with the input data as contextual data). In some implementations, the spatial model612is a machine-learned model (e.g., a neural network, such as a convolutional neural network). In some implementations, the input data can be downsampled (e.g., rasterized input data can be decreased in resolution). In some implementations, the spatial model612includes a plurality of sub-blocks each having a plurality of convolutional layers (e.g., optionally increasing in count with subsequent sub-blocks, and optionally followed by normalizing and/or nonlinear activation layer(s)) and each having a plurality of output channels (e.g., optionally increasing in count with subsequent sub-blocks). In some implementations, the sub-blocks can include one or more pooling layers dispersed therebetween (e.g., max-pooling layers, etc.). In some implementations, an output of each of one or more sub-blocks is input into a final sub-block for generation of a feature map. An output of the spatial model612(e.g., an intermediate spatial encoding, such as a feature map) can be passed to a scorer613for producing a score. In some implementations, the score is an energy value (e.g., an individual energy component), and the scorer613generates the energy value for the plurality of trajectories611. In some implementations, the scorer can generate a score for each of the trajectories611(e.g., an energy value associated with a respective object's traversal of each of the object trajectories for that respective object and an energy value associated with the autonomous vehicle's traversal of each of the vehicle motion trajectories for the autonomous vehicle). In some implementations, the scorer613includes one or more learnable parameters. In some implementations, the scorer613contains a machine-learned model, such as a neural network (e.g., a multi-layer perceptron). 
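As a concrete illustration of the discrete trajectory sampler described above, the sketch below estimates initial speed and heading from a provided past trajectory and samples candidate futures from simple straight and turning (circular) modes. The mode probabilities, control-parameter ranges, timestep, and constant-turn-rate rollout are illustrative assumptions rather than the disclosed sampler; a spiral mode could be added analogously.

```python
import numpy as np

def rollout(x, y, speed, heading, turn_rate, accel, horizon, dt=0.5):
    """Roll a simple kinematic state forward under constant turn rate and
    acceleration, returning a (horizon, 2) array of x/y waypoints."""
    pts = []
    for _ in range(horizon):
        x += speed * np.cos(heading) * dt
        y += speed * np.sin(heading) * dt
        heading += turn_rate * dt
        speed = max(speed + accel * dt, 0.0)
        pts.append((x, y))
    return np.array(pts)

def sample_trajectories(past_traj, num_samples=16, horizon=10, rng=None):
    """Sample candidate future trajectories from a few a-priori modes
    (straight, circular/turning), estimating initial speed and heading from
    the last two waypoints of the past trajectory."""
    if rng is None:
        rng = np.random.default_rng(0)
    delta = past_traj[-1] - past_traj[-2]
    speed = np.linalg.norm(delta) / 0.5          # assumes 0.5 s between waypoints
    heading = np.arctan2(delta[1], delta[0])
    x0, y0 = past_traj[-1]

    samples = []
    for _ in range(num_samples):
        mode = rng.choice(["straight", "circular"], p=[0.5, 0.5])
        accel = rng.uniform(-2.0, 2.0)           # uniformly sampled control parameter
        turn_rate = 0.0 if mode == "straight" else rng.uniform(-0.3, 0.3)
        samples.append(rollout(x0, y0, speed, heading, turn_rate, accel, horizon))
    return samples

if __name__ == "__main__":
    past = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2]])
    trajs = sample_trajectories(past)
    print(len(trajs), trajs[0].shape)
```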
In some implementations, the scorer613can obtain data from a feature map generated by the spatial model612using a region of interest. For example, a region of interest can be defined over a region of a map around the autonomous vehicle (e.g., centered around the autonomous vehicle, optionally rotated with the autonomous vehicle's heading). The region of interest can be processed by the scorer613(e.g., a neural network within the scorer613, such as a multilayer perceptron containing a plurality of convolutional layers) to obtain a region of interest encoding for each object of a plurality of objects. In some implementations, the scorer613can extract positional embeddings for each trajectory of a plurality of trajectories611(e.g., obtained as described above). In some implementations, the positional embeddings can be obtained at one or more timesteps by indexing the feature map (e.g., with interpolative methods, such as bilinear interpolation). In some implementations, the positional embeddings include a tensor containing a total horizon of the trajectory (e.g., for a plurality of past and future timesteps). In some implementations, the scorer613can encode a plurality of trajectory embeddings for each timestep. In some implementations, the trajectory embeddings can include spatial location information (e.g., position information, such as relative position information from previous timestep(s)). In some implementations, the trajectory embeddings can include decomposed displacements, including distance magnitude(s) in the coordinate frame heading directions, etc. The scorer613can input the various generated features and embeddings (e.g., any one or more of the region of interest encoding, the positional embeddings, or the trajectory embeddings) into a machine-learned model to output a score. The machine-learned model can include a neural network (e.g., a multilayer perceptron). FIG.6Billustrates an example data flow for obtaining AV cost(s)511and object cost(s)512. Contextual data (e.g., sensor data255and map data260) can be input to the spatial model612with one or more trajectories from the trajectories611. The spatial model612can output to the scorer613. For instance, the spatial model612can output intermediate features generated from the contextual data and the trajectory or trajectories for scoring by the scorer613. With reference again toFIG.6A, the interaction trajectory evaluator620can obtain a plurality of trajectories611(e.g., as described above) and determine one or more scores based on interactions between objects or interaction between the autonomous vehicle and the object(s) for the plurality of trajectories611. For instance, a collision score622and a buffer score623can be determined between two objects for a set of two trajectories respectively associated with the objects and/or determined between the autonomous vehicle and an object for a vehicle motion trajectory and an object trajectory respectively associated with the autonomous vehicle and the object. In some implementations, the collision score622and the buffer score623can include a collision energy and a buffer energy, respectively. FIG.6Billustrates example implementations of the collision score622and the buffer score623. As illustrated inFIG.6B, for instance, a collision score can include a value based on whether two paired object trajectories might cause the respective objects traversing the trajectories to come into contact (e.g., optionally determined at a threshold distance). 
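The positional-embedding extraction described above, in which the feature map is indexed at trajectory waypoints with bilinear interpolation, can be illustrated as follows. The feature-map resolution, the meters-per-pixel scale, and the coordinate convention are illustrative assumptions.

```python
import numpy as np

def bilinear_sample(feature_map: np.ndarray, x: float, y: float) -> np.ndarray:
    """Bilinearly interpolate a (H, W, C) feature map at continuous
    coordinates (x, y) given in feature-map pixel units."""
    h, w, _ = feature_map.shape
    x = np.clip(x, 0.0, w - 1.0)
    y = np.clip(y, 0.0, h - 1.0)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    wx, wy = x - x0, y - y0
    top = (1 - wx) * feature_map[y0, x0] + wx * feature_map[y0, x1]
    bottom = (1 - wx) * feature_map[y1, x0] + wx * feature_map[y1, x1]
    return (1 - wy) * top + wy * bottom

def positional_embeddings(feature_map: np.ndarray, waypoints: np.ndarray,
                          meters_per_pixel: float = 0.5) -> np.ndarray:
    """Index the spatial feature map at each trajectory waypoint to build a
    (T, C) tensor of positional embeddings over the trajectory horizon."""
    embeddings = []
    for x_m, y_m in waypoints:
        # Convert map coordinates (meters) to feature-map pixels (assumption:
        # the map origin coincides with the feature-map origin).
        embeddings.append(bilinear_sample(feature_map,
                                          x_m / meters_per_pixel,
                                          y_m / meters_per_pixel))
    return np.stack(embeddings)

if __name__ == "__main__":
    fmap = np.random.default_rng(0).normal(size=(64, 64, 8))  # toy feature map
    traj = np.array([[1.0, 1.0], [2.0, 1.5], [3.0, 2.0]])     # waypoints in meters
    print(positional_embeddings(fmap, traj).shape)            # (3, 8)
```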
A collision score can include a value based on whether a vehicle motion trajectory paired with an object trajectory might cause the autonomous vehicle and the respective object traversing the trajectories to come into contact (e.g., optionally determined at a threshold distance). InFIG.6B, for example, the collision score is plotted as a discrete function (e.g., a first value if in contact, another value if not, etc.). As illustrated inFIG.6B, for instance, a buffer score can include a value based on whether objects respectively traversing two paired object trajectories pass within a given buffer distance (e.g., within a given proximity threshold, 3 ft., 5 ft., 7 ft., etc.). InFIG.6B, for example, the buffer score623is plotted as a continuous, smooth function of the distance. The buffer distance can be evaluated, in some implementations, based on a distance from a bounding box (e.g., bounding box699). With reference again toFIG.6A, the objective(s)630can include the AV cost(s)511, object cost(s)512, interaction cost(s)514, and goal(s)518(if any). The objective(s)630can be based at least in part on the score(s) from the individual trajectory evaluator610and the interaction trajectory evaluator620. The objective(s)630can also be based at least in part on the probability evaluator617. The probability evaluator617can provide one or more probabilities for each of the trajectories611. For example, the probability evaluator617can provide one or more conditional probabilities for a plurality of object trajectories of the trajectories611for the objects conditioned on each of a set of potential vehicle motion trajectories for the autonomous vehicle. For example, the probability evaluator617can provide for the expectation of a system energy conditioned on each of a set of potential vehicle motion trajectories for the autonomous vehicle in some implementations. In some implementations, the probability evaluator617can provide the marginal and pairwise marginal probabilities between all object trajectories over the set of potential vehicle motion trajectories. In some implementations, the objective(s)630can be determined by combining the probabilities output by the probability evaluator617and the scores output by the individual trajectory evaluator610and the interaction trajectory evaluator620. For instance, the object cost(s)512can include a combination of (e.g., the product of) a marginal probability of an object trajectory given a potential vehicle motion trajectory of the autonomous vehicle and the corresponding individual score (e.g., energy value) for that object trajectory. The interaction cost(s)514can include a combination of (e.g., the product of) a marginal probability of the pairing of two object trajectories given the potential vehicle motion trajectory of the autonomous vehicle and the corresponding interaction score for that pairing. The interaction cost(s)514can include a combination of (e.g., the product of) a marginal probability of the pairing of a vehicle motion trajectory and an object trajectory given the potential vehicle motion trajectory of the autonomous vehicle and the corresponding interaction score for that pairing. In this manner, for instance, example implementations according to aspects of the present disclosure can provide for interactive joint planning/prediction by determining a planning objective for an autonomous vehicle trajectory based on objects' interactions with a planned vehicle motion trajectory for the autonomous vehicle. For example, in some implementations, the objective(s)630can include an objective that can be expressed as in Equation 4.
In some implementations, for a given potential vehicle motion trajectory for the autonomous vehicle, the expectation of Equation 4 can be expressed as

f = C_{traj}(y_0) + \sum_{\mathcal{Y}_r} p(\mathcal{Y}_r \mid y_0) \left[ \sum_{i=1}^{N} C_{inter}(y_0, y_i) + \sum_{i=1}^{N} C_{traj}(y_i) + \sum_{i=1, j=1}^{N, N} C_{inter}(y_i, y_j) \right]    (6)

where p(\mathcal{Y}_r \mid y_0) is shorthand for p(\mathcal{Y}_r \mid y_0, \mathcal{X}; w), and C_{traj}(y_i) is shorthand for C_{traj}(y_i, \mathcal{X}; w) (same for pairwise). In some implementations, the joint probabilities factorize over the individual energy components and the interaction energy components, so the objective can be expressed in terms of the marginal and pairwise marginal probabilities between all objects' trajectories as

f = C_{traj}(y_0) + \sum_{i, y_i} p(y_i \mid y_0) C_{inter}(y_0, y_i) + \sum_{i, y_i} p(y_i \mid y_0) C_{traj}(y_i) + \sum_{i, j, y_i, y_j}^{N, N} p(y_i, y_j \mid y_0) C_{inter}(y_i, y_j)    (7)

where p(y_i \mid y_0) represents the marginal probability of the object trajectory y_i conditioned on the potential vehicle motion trajectory y_0 for the autonomous vehicle, and p(y_i, y_j \mid y_0) represents the corresponding pairwise marginal probability of the paired object trajectories. In some implementations, the C_{inter}(y_i, y_j) terms can be omitted. In some implementations, the C_{inter}(y_i, y_j) terms can be included only for objects nearby the autonomous vehicle (e.g., within a given distance, such as a given radius defined in space or time). In some implementations, the marginal probabilities can be generated as tensors having dimensions according to the number of objects, the number of trajectories for each object and the number of potential vehicle motion trajectories (e.g., candidate trajectories). For example, with N−1 objects having K trajectories each, and including K potential trajectories for the autonomous vehicle, the marginal probabilities can, in some implementations, be expressed in a tensor having dimensions N by K by K. In some implementations, the probability evaluator617can include a machine-learned model (e.g., a neural network, such as a recurrent neural network). In some implementations, the probability evaluator617can implement a message-passing algorithm such as loopy belief propagation. Loopy belief propagation is an example of a differentiable iterative message passing procedure. For instance, the marginal probabilities can all be efficiently approximated by exploiting a message-passing algorithm such as loopy belief propagation. In some implementations, the probability evaluator617can provide for efficient batch evaluation. For instance, for every trajectory of every object, the system can evaluate the conditional marginal probability times the corresponding energy term. The output(s)530can include a selected trajectory631(e.g., a target trajectory, such as a target vehicle motion trajectory). The selected trajectory631can be determined using the objective(s)630. For instance, the selected trajectory631can correspond to a preferred value of the objective(s)630(e.g., a low value, such as the lowest value, which can include a local and/or global minimum). In some embodiments, the trainer540can use the selected trajectory631to compute a training loss. In some embodiments, however, the trainer540can use the probabilities632(e.g., determined by the probability evaluator617) to generate one or more training losses.
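As an illustration of the batch evaluation of Equation 7 described above, the sketch below evaluates the expected cost for each candidate vehicle motion trajectory from a tensor of conditional marginal probabilities. For brevity the pairwise object-object interaction terms are omitted, and the array shapes and function names are illustrative assumptions.

```python
import numpy as np

def expected_costs(c_traj_av, c_traj_obj, c_inter_av_obj, marginals):
    """Evaluate the expected system cost for each candidate AV trajectory.

    c_traj_av:      (K0,)       individual cost of each candidate AV trajectory
    c_traj_obj:     (N, K)      individual cost of each object trajectory
    c_inter_av_obj: (K0, N, K)  interaction cost between an AV candidate and an object trajectory
    marginals:      (K0, N, K)  p(y_i | y_0), marginal probability of each object
                                trajectory conditioned on each AV candidate

    Returns a (K0,) vector; the pairwise object-object interaction terms of
    Equation 7 are omitted here for brevity.
    """
    # Expectation of object individual costs under the conditional marginals.
    exp_obj = np.einsum("knj,nj->k", marginals, c_traj_obj)
    # Expectation of AV-object interaction costs under the same marginals.
    exp_inter = np.einsum("knj,knj->k", marginals, c_inter_av_obj)
    return c_traj_av + exp_obj + exp_inter

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K0, N, K = 5, 3, 4
    m = rng.random((K0, N, K))
    m /= m.sum(axis=-1, keepdims=True)          # normalize per object and candidate
    f = expected_costs(rng.random(K0), rng.random((N, K)),
                       rng.random((K0, N, K)), m)
    print("best candidate:", int(np.argmin(f)))  # candidate with lowest expected cost
```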
For example, a cross-entropy loss used by the trainer540can be expressed as

\mathcal{L} = \sum_{i} \mathcal{L}_i + \sum_{i, j} \mathcal{L}_{i, j}    (8)

where the individual loss641for the i-th object can be expressed in some implementations as

\mathcal{L}_i = \frac{1}{K} \sum_{y_i \notin \Delta(y_i^*)} p_{g.t.}(y_i) \log p(y_i, \mathcal{X}; w)    (9)

and where the interaction loss642between the i-th and the j-th object can be expressed in some implementations as

\mathcal{L}_{i, j} = \frac{1}{K^2} \sum_{y_i \notin \Delta(y_i^*),\ y_j \notin \Delta(y_j^*)} p_{g.t.}(y_i, y_j) \log p(y_i, y_j, \mathcal{X}; w)    (10)

where \Delta(y_i^*) is used to indicate a set of predicted/obtained trajectories611that are within a tolerance distance of the reference (e.g., a reference y_i^* for the i-th object). In some implementations, p_{g.t.} can be used as an indicator that is one value (e.g., zero) unless the input (e.g., y_i, or the pair y_i, y_j) is equal to the reference (e.g., ground truth). In some implementations, the strength of interactivity of the trajectory planning can be modulated. For instance, with reference again toFIGS.4A and4B, it might be desired to modulate the assertiveness with which autonomous vehicle300injects itself into the target lane302to induce an interaction from the object312. In some implementations, the level of interactivity can be modulated by varying the size of the conditioning set in the prediction model of the interactive objective. For example, the interactivity can be modulated by constructing the conditional probability for each object trajectory y_i to be conditioned on a variable set of vehicle potential trajectories S_{y_0} with k (1 ≤ k ≤ K) elements, which are the top-k potential vehicle motion trajectories closest to a given potential trajectory y_0 (e.g., by a distance, such as by an L2 distance). For example, in some implementations, the probability can be expressible as

p(y_i \mid S_{y_0}, \mathcal{X}; w) = \frac{1}{Z} \sum_{\bar{y}_0 \in S_{y_0}} p(y_i, \bar{y}_0, \mathcal{X}; w)    (11)

where Z is a normalizing constant, and the interactivity of the planning objective can be seen as decreasing as S_{y_0} increases in members. For example, when S_{y_0} contains K members, it can intuitively be understood as adding up the conditional probabilities on all possible vehicle motion trajectories, which can effectively remove the conditionality of the overall probability on any one vehicle motion trajectory. As S_{y_0} contains some number of members less than K, it can provide for a modulated level of interactivity. In some implementations, decreasing the size of S_{y_0} can provide for increasing success rates for maneuvers (e.g., by providing for more interactive planning, enabling “nudging” behavior). In this manner, for example, maneuver success rates can be balanced with other constraints (e.g., contact constraints, buffer constraints, etc.) by manipulating the size of S_{y_0}. FIG.7depicts a flowchart of a method700for joint interactive prediction/planning (e.g., as discussed above with respect toFIGS.5and6) according to aspects of the present disclosure. One or more portion(s) of the method700can be implemented by a computing system that includes one or more computing devices such as, for example, the computing systems described with reference to the other figures (e.g., autonomous platform105, vehicle computing system210, operations computing system(s)290A, remote computing system(s)290B, system500, a system ofFIG.11, etc.). Each respective portion of the method700can be performed by any (or any combination) of one or more computing devices.
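The conditioning-set modulation of Equation 11 above can be illustrated with the following sketch. It assumes the joint probabilities are available as a precomputed array and uses an L2 distance over waypoints to select the top-k closest candidate vehicle motion trajectories; the tensor shapes and function names are illustrative assumptions.

```python
import numpy as np

def modulated_conditional(joint, av_trajs, y0_index, k):
    """Approximate p(y_i | S_{y0}) in the spirit of Equation 11 by summing joint
    probabilities over the k candidate AV trajectories closest (L2 over
    waypoints) to the given candidate y0, then normalizing.

    joint:    (K0, K) joint probability of AV candidate and object trajectory
    av_trajs: (K0, T, 2) candidate AV trajectories
    y0_index: index of the candidate trajectory y0 being evaluated
    k:        size of the conditioning set S_{y0}, 1 <= k <= K0
    """
    # Distance of every candidate AV trajectory to the chosen y0.
    diffs = av_trajs - av_trajs[y0_index]
    dists = np.sqrt((diffs ** 2).sum(axis=(1, 2)))
    conditioning_set = np.argsort(dists)[:k]        # top-k closest candidates

    # Sum the joint probabilities over the conditioning set and normalize.
    summed = joint[conditioning_set].sum(axis=0)
    return summed / summed.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K0, K, T = 6, 4, 10
    joint = rng.random((K0, K))
    joint /= joint.sum()
    av = rng.normal(size=(K0, T, 2))
    # k = 1 is maximally interactive; k = K0 effectively removes the
    # conditionality on any single candidate vehicle motion trajectory.
    print(modulated_conditional(joint, av, y0_index=0, k=1))
    print(modulated_conditional(joint, av, y0_index=0, k=K0))
```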
Moreover, one or more portion(s) of the method700can be implemented on the hardware components of the device(s) described herein (e.g., as inFIGS.1,2,5,6A,6B,11, etc.), for example, to perform joint interactive prediction/planning.FIG.7depicts elements performed in a particular order for purposes of illustration and discussion. Those of ordinary skill in the art, using the disclosures provided herein, will understand that the elements of any of the methods discussed herein can be adapted, rearranged, expanded, omitted, combined, or modified in various ways without deviating from the scope of the present disclosure.FIG.7is described with reference to elements/terms described with respect to other systems and figures for exemplary illustrated purposes and is not meant to be limiting. One or more portions of method700can be performed additionally, or alternatively, by other systems. At710, example method700includes obtaining contextual data (e.g., sensor data) associated with an environment of an autonomous vehicle (e.g., descriptive of the environment). In some implementations, the contextual data can include sensor data including a representation of an object within the environment. The object(s) can include any number of moveable or moving objects such as, for example, one or more pedestrians, animals, vehicles, etc. At720, example method700includes determining, using a machine-learned model framework, a plurality of scores respectively for each of a plurality of predicted object trajectories (for example, as discussed above with respect toFIGS.5and6). In some implementations, for instance, the score for one or more of the predicted object trajectories can include an energy value. At730, example method700includes determining, using the machine-learned model framework, a plurality of probabilities respectively for each of the plurality of predicted object trajectories (for example, as discussed above with respect toFIGS.5and6). In some implementations, for instance, the plurality of probabilities can be determined conditioned on the vehicle motion trajectory. For example, each respective probability of the plurality of probabilities can encode a likelihood (e.g., an estimated likelihood) that a respective object might traverse a respective predicted object trajectory. At740, example method700includes determining, using the machine-learned model framework, a vehicle motion trajectory for the autonomous vehicle. In some implementations of example method700, the vehicle motion trajectory can be determined based at least in part on the plurality of scores and the plurality of probabilities. For example, in some implementations, the plurality of scores (e.g., energies) and the plurality of probabilities can be combined (e.g., linearly combined). For instance, in some implementations, a total system score or energy can include a plurality of energies linearly combined according to their respective probabilities. In this manner, for example, a minimization can be performed (e.g., by comparing candidate vehicle motion trajectories) to obtain a desired system score or energy (e.g., optimized, such as a minimized system energy), such that a vehicle motion trajectory can be determined (e.g., a target vehicle motion trajectory) jointly with the plurality of predicted object trajectories and accounting for the interaction of the predicted object trajectories with the vehicle motion trajectories. 
For example, in some implementations, the example method700can further include determining, using the machine-learned model framework, a plurality of candidate vehicle motion trajectories for the autonomous vehicle (e.g., a plurality of potential vehicle motion trajectories), and selecting a target vehicle motion trajectory from among the plurality of candidate vehicle motion trajectories based on a minimization of the plurality of costs. In some implementations, each respective predicted object trajectory of the plurality of predicted object trajectories is associated with a probability of the respective predicted object trajectory conditioned on each of the plurality of candidate vehicle motion trajectories. In some implementations, the machine-learned model framework can include structured components. For instance, the plurality of predicted object trajectories can include trajectories sampled from a distribution (e.g., a discrete distribution) of potentially realistic trajectories for an object. For example, the sampled trajectories can, in some implementations, provide for an interpretable prediction component within the machine-learned model framework. FIG.8depicts a flowchart of a method800for joint interactive prediction/planning (e.g., as discussed above with respect toFIGS.5and6) according to aspects of the present disclosure. One or more portion(s) of the method800can be implemented by a computing system that includes one or more computing devices such as, for example, the computing systems described with reference to the other figures (e.g., autonomous platform105, vehicle computing system210, operations computing system(s)290A, remote computing system(s)290B, system500, a system ofFIG.11, etc.). Each respective portion of the method800can be performed by any (or any combination) of one or more computing devices. Moreover, one or more portion(s) of the method800can be implemented on the hardware components of the device(s) described herein (e.g., as inFIGS.1,2,5,6A,6B,11, etc.), for example, to perform joint interactive prediction/planning.FIG.8depicts elements performed in a particular order for purposes of illustration and discussion. Those of ordinary skill in the art, using the disclosures provided herein, will understand that the elements of any of the methods discussed herein can be adapted, rearranged, expanded, omitted, combined, or modified in various ways without deviating from the scope of the present disclosure.FIG.8is described with reference to elements/terms described with respect to other systems and figures for exemplary illustrated purposes and is not meant to be limiting. One or more portions of method800can be performed additionally, or alternatively, by other systems. At810, example method800includes obtaining sensor data descriptive of an environment of an autonomous vehicle. In some implementations of example method800, the sensor data includes a representation of an object within the environment. The object(s) can include any number of moveable or moving objects such as, for example, one or more pedestrians, animals, vehicles, etc. At820, example method800includes determining, using a machine-learned model framework comprising one or more machine-learned models, a joint probability distribution over a plurality of predicted object trajectories of the object based on the sensor data. 
In some implementations of example method800, the plurality of predicted object trajectories are conditioned on a plurality of potential vehicle motion trajectories of the autonomous vehicle. In some implementations, the plurality of predicted object trajectories and/or the plurality of potential vehicle motion trajectories can be sampled from a distribution of potential trajectories (e.g., potentially realistic trajectories). At830, example method800includes determining, using the machine-learned model framework and from among the plurality of potential vehicle motion trajectories, a target vehicle motion trajectory for the autonomous vehicle based at least in part on the joint probability distribution and a plurality of costs. In some implementations of example method800, the plurality of costs include a cost associated with the target vehicle motion trajectory (e.g., a cost for the autonomous vehicle), a cost associated with a respective predicted object trajectory of the plurality of predicted object trajectories (e.g., a cost for the respective object corresponding thereto of the object(s) in the environment), and a cost associated with a potential interaction between the object and the autonomous vehicle for the respective predicted object trajectory and the target vehicle motion trajectory. Costs (i) and (ii) can, in some implementations, correspond to individual costs (e.g., AV cost(s)511, object cost(s)512) that can encode a score or other value for traversing a trajectory for a respective object or the autonomous vehicle. In some implementations, cost (ii) includes an expectation, such as an expectation over a probability distribution conditioned on the vehicle motion trajectory. In some implementations, cost (iii) includes interaction costs (e.g., interaction cost(s)514) that can encode a score or other value for the two or more objects for the respective predicted object trajectories having the potential interaction, or a score or other value for a pairing of the autonomous vehicle and an object, for the vehicle motion trajectory and the respective predicted object trajectory for the object. FIG.9depicts a flowchart of a method900for joint interactive prediction/planning (e.g., as discussed above with respect toFIGS.5and6) according to aspects of the present disclosure. One or more portion(s) of the method900can be implemented by a computing system that includes one or more computing devices such as, for example, the computing systems described with reference to the other figures (e.g., autonomous platform105, vehicle computing system210, operations computing system(s)290A, remote computing system(s)290B, system500, a system ofFIG.11, etc.). Each respective portion of the method900can be performed by any (or any combination) of one or more computing devices. Moreover, one or more portion(s) of the method900can be implemented on the hardware components of the device(s) described herein (e.g., as inFIGS.1,2,5,6A,6B,11, etc.), for example, to perform joint interactive prediction/planning.FIG.9depicts elements performed in a particular order for purposes of illustration and discussion. 
Those of ordinary skill in the art, using the disclosures provided herein, will understand that the elements of any of the methods discussed herein can be adapted, rearranged, expanded, omitted, combined, or modified in various ways without deviating from the scope of the present disclosure.FIG.9is described with reference to elements/terms described with respect to other systems and figures for exemplary illustrated purposes and is not meant to be limiting. One or more portions of method900can be performed additionally, or alternatively, by other systems. As depicted inFIG.9, example method900includes at least one implementation of portion810from example method800. It is to be understood that any one or more portions of example method800(e.g.,820,830) can also be combined or otherwise incorporated into example method900. At920, example method900includes determining, using a machine-learned model framework comprising one or more machine-learned models, a plurality of predicted object trajectories of the object based on the sensor data. In some implementations, the plurality of predicted object trajectories can be sampled from a distribution of potential trajectories (e.g., potentially realistic trajectories). At930, example method900includes determining, using the machine-learned model framework, a target vehicle motion trajectory for the autonomous vehicle based on the plurality of predicted object trajectories and a predicted object interaction with the target vehicle motion trajectory. For example, in some implementations of example method900, the machine-learned model framework can be configured to determine the target vehicle motion trajectory based at least in part on a plurality of costs. In some implementations of example method900, the plurality of costs include a cost associated with the target vehicle motion trajectory (e.g., a cost for the autonomous vehicle), a cost associated with a respective predicted object trajectory of the plurality of predicted object trajectories (e.g., a cost for the respective object corresponding thereto), and a cost associated with a potential interaction between the object and the autonomous vehicle for the respective predicted object trajectory and the target vehicle motion trajectory. Costs (i) and (ii) can, in some implementations, correspond to individual costs (e.g., AV cost(s)511, object cost(s)512) that can encode a score or other value for traversing a trajectory for a respective object or the autonomous vehicle. In some implementations, cost (ii) includes an expectation, such as an expectation over a probability distribution conditioned on the vehicle motion trajectory. In some implementations, cost (iii) includes interaction costs (e.g., interaction cost(s)514) that can encode a score or other value for the two or more objects for the respective predicted object trajectories having the potential interaction, or a score or other value for a pairing of the autonomous vehicle and an object, for the vehicle motion trajectory and the respective predicted object trajectory for the object. At940, portion930of method900includes determining, using the machine-learned model framework, a joint probability distribution over the plurality of predicted object trajectories. In some implementations of the example method900, the joint probability distribution is indicative of probabilities for the plurality of predicted object trajectories. 
In some implementations, each respective predicted object trajectory of the plurality of predicted object trajectories is associated with a probability of the respective predicted object trajectory conditioned on the target vehicle motion trajectory. For example, in some implementations, the example method900can further include determining, using the machine-learned model framework, a plurality of candidate vehicle motion trajectories for the autonomous vehicle (e.g., a plurality of potential vehicle motion trajectories), and selecting the target vehicle motion trajectory from among the plurality of candidate vehicle motion trajectories based on a minimization of the plurality of costs. In some implementations, each respective predicted object trajectory of the plurality of predicted object trajectories is associated with a probability of the respective predicted object trajectory conditioned on each of the plurality of candidate vehicle motion trajectories. FIG.10depicts a flowchart of a method1000for training one or more example machine-learned models (e.g., as discussed above with respect toFIGS.5and6) according to aspects of the present disclosure. One or more portion(s) of the method1000can be implemented by a computing system that includes one or more computing devices such as, for example, the computing systems described with reference to the other figures (e.g., autonomous platform105, vehicle computing system210, operations computing system(s)290A, remote computing system(s)290B, system500, a system ofFIG.11, etc.). Each respective portion of the method1000can be performed by any (or any combination) of one or more computing devices. Moreover, one or more portion(s) of the method1000can be implemented on the hardware components of the device(s) described herein (e.g., as inFIGS.1,2,5,6A,6B,11, etc.), for example, to train machine-learned models.FIG.10depicts elements performed in a particular order for purposes of illustration and discussion. Those of ordinary skill in the art, using the disclosures provided herein, will understand that the elements of any of the methods discussed herein can be adapted, rearranged, expanded, omitted, combined, or modified in various ways without deviating from the scope of the present disclosure.FIG.10is described with reference to elements/terms described with respect to other systems and figures for exemplary illustrated purposes and is not meant to be limiting. One or more portions of method1000can be performed additionally, or alternatively, by other systems. At1010, the method1000can include generating training data for training a machine-learned trajectory planner model (e.g., a model containing or otherwise implementing one or more portions of example system500, such as a trajectory planner510). For example, a computing system (e.g., autonomous platform105, vehicle computing system210, operations computing system(s)290A, remote computing system(s)290B, system500, system ofFIG.10, etc.) can generate the training data for training the machine-learned trajectory planner model. The training data can include a plurality of training instances, such as pre-recorded inputs (e.g., perception data, map data, etc.) corresponding to ground truth trajectories (e.g., recorded trajectories for one or more moving objects and/or the autonomous vehicle). The training data can be collected using one or more autonomous platforms (e.g., autonomous platform105) or the sensors thereof as the autonomous platform is within its environment. 
By way of example, the training data can be collected using one or more autonomous vehicle(s) (e.g., autonomous platform105, autonomous vehicle205, etc.) or sensors thereof as the vehicle(s) operates along one or more travel ways. The training data can include a plurality of training sequences divided between multiple datasets (e.g., a training dataset, a validation dataset, or testing dataset). Each training sequence can include a plurality of map data, context information, pre-recorded perception data, etc. In some implementations, each sequence can include LiDAR point clouds (e.g., collected using LiDAR sensors of an autonomous platform) or high definition map information (e.g., structured lane topology data). For instance, in some implementations, a plurality of images can be scaled for training and evaluation. At1020, the method1000can include selecting a training instance based at least in part on the training data. For example, a computing system can select the training instance based at least in part on the training data. At1030, the method1000can include inputting the training instance into the machine-learned trajectory planner model. For example, a computing system can input the training instance into the machine-learned trajectory planner model. At1040, the method1000can include generating loss metric(s) for the machine-learned trajectory planner model based on output(s) of at least a portion of the machine-learned trajectory planner model in response to inputting the training instance (e.g., at1030). For example, a computing system can generate the loss metric(s) for the machine-learned trajectory planner model based on the output(s) of at least the portion of the machine-learned trajectory planner model in response to the training instance. The loss metric(s), for example, can include a loss as described herein based at least in part on a probability determined for one or more object trajectories. For instance, in some implementations, the loss metric(s) can include a cross-entropy loss. In some implementations, the loss can be counted only for those trajectories that diverge from the ground truth trajectories. In some implementations, the loss can be counted only for those trajectories that diverge from reference trajectories by a specified amount (e.g., a tolerance). For instance, the loss can be determined over a subset of the plurality of trajectories for a respective object, where the subset is configured to exclude one or more of the predicted trajectories for that respective object that are within a tolerance distance of a corresponding reference trajectory for that respective object. For instance, the loss can be determined over a subset of the plurality of vehicle motion trajectories for the autonomous vehicle, where the subset is configured to exclude one or more of the predicted vehicle motion trajectories that are within a tolerance distance of a corresponding reference vehicle motion trajectory. In some implementations, for instance, trajectories within the tolerance distance can be considered a reference equivalent (e.g., close enough, such as still within the same lane or other course of travel as the reference, such as within an inconsequential variation from a reference path along a travel way, etc.). At1050, the method1000can include modifying at least the portion of the machine-learned trajectory planner model based at least in part on at least one of the loss metric(s). 
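As a deliberately simplified illustration of generating a loss metric and modifying model parameters (e.g., at1040and1050), the sketch below scores candidate trajectories with a linear energy over stand-in features, converts the energies to probabilities, computes a cross-entropy loss against the recorded trajectory, and applies gradient steps. The linear energy, feature construction, and learning rate are illustrative assumptions and not the disclosed training procedure.

```python
import numpy as np

def energies(features, w):
    """Linear energy for each of K candidate trajectories; features is (K, D)."""
    return features @ w

def loss_and_grad(features, w, gt_index):
    """Cross-entropy loss of the ground-truth trajectory under the softmax of
    negated energies, with its analytic gradient with respect to w."""
    logits = -energies(features, w)
    logits -= logits.max()
    p = np.exp(logits) / np.exp(logits).sum()
    loss = -np.log(p[gt_index] + 1e-12)
    # d(loss)/d(logits) = p - onehot(gt); since logits = -features @ w,
    # the chain rule gives grad_w = -features^T (p - onehot(gt)).
    dlogits = p.copy()
    dlogits[gt_index] -= 1.0
    grad_w = -(features.T @ dlogits)
    return loss, grad_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, D = 6, 5
    features = rng.normal(size=(K, D))   # stand-in per-trajectory features
    w = np.zeros(D)                      # parameters of the learned cost
    gt = 2                               # index of the recorded (reference) trajectory
    lr = 0.1
    for _ in range(50):                  # a few gradient steps on one training instance
        loss, grad = loss_and_grad(features, w, gt)
        w -= lr * grad
    print("final loss:", loss)
```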
For example, a computing system can modify at least the portion of the machine-learned trajectory planner model based, at least in part, on at least one of the loss metric(s). In some implementations, the machine-learned model framework can be trained in an end-to-end manner. For example, in some implementations, the machine-learned model framework can be fully differentiable. FIG.11is a block diagram of an example computing system1100, according to some embodiments of the present disclosure. The example system1100includes a computing system1200and a machine-learning computing system1300that are communicatively coupled over one or more networks1400. In some implementations, the computing system1200can perform one or more observation tasks such as, for example, by obtaining sensor data (e.g., two-dimensional, three-dimensional, etc.). In some implementations, the computing system1200can be included in an autonomous platform. For example, the computing system1200can be on-board an autonomous vehicle. In other implementations, the computing system1200is not located on-board an autonomous platform. The computing system1200can include one or more distinct physical computing devices1205. The computing system1200(or one or more computing device(s)1205thereof) can include one or more processors1210and a memory1215. The one or more processors1210can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory1215can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, one or more memory devices, flash memory devices, etc., and combinations thereof. The memory1215can store information that can be accessed by the one or more processors1210. For instance, the memory1215(e.g., one or more non-transitory computer-readable storage mediums, memory devices) can store data1220that can be obtained, received, accessed, written, manipulated, created, or stored. The data1220can include, for instance, sensor data, two-dimensional data, three-dimensional, image data, LiDAR data, model parameters, simulation data, trajectory data, contextual data, potential trajectories, sampled trajectories, probability data, or any other data or information described herein. In some implementations, the computing system1200can obtain data from one or more memory device(s) that are remote from the computing system1200. The memory1215can also store computer-readable instructions1225that can be executed by the one or more processors1210. The instructions1225can be software written in any suitable programming language or can be implemented in hardware. Additionally, or alternatively, the instructions1225can be executed in logically or virtually separate threads on processor(s)1210. For example, the memory1215can store instructions1225that when executed by the one or more processors1210cause the one or more processors1210(the computing system1200) to perform any of the operations, functions, or methods/processes described herein, including, for example, planning trajectories, such as by implementing a trajectory planner510, etc. According to an aspect of the present disclosure, the computing system1200can store or include one or more machine-learned models1235. 
As examples, the machine-learned models1235can be or can otherwise include various machine-learned models such as, for example, regression networks, generative adversarial networks, neural networks (e.g., deep neural networks), support vector machines, decision trees, ensemble models, k-nearest neighbors models, Bayesian networks, or other types of models including linear models or non-linear models. Example neural networks include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks, or other forms of neural networks. For example, the computing system1200can include one or more models of a trajectory planner510, such as are discussed above with respect toFIGS.5and6. In some implementations, the computing system1200can receive the one or more machine-learned models1235from the machine-learning computing system1300over network(s)1400and can store the one or more machine-learned models1235in the memory1215. The computing system1200can then use or otherwise implement the one or more machine-learned models1235(e.g., by processor(s)1210). In particular, the computing system1200can implement the machine-learned model(s)1235to plan trajectories, etc. The machine learning computing system1300can include one or more computing devices1305. The machine learning computing system1300can include one or more processors1310and a memory1315. The one or more processors1310can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory1315can include one or more non-transitory computer-readable storage media, such as RAM, ROM, EEPROM, EPROM, one or more memory devices, flash memory devices, etc., and combinations thereof. The memory1315can store information that can be accessed by the one or more processors1310. For instance, the memory1315(e.g., one or more non-transitory computer-readable storage mediums, memory devices) can store data1320that can be obtained, received, accessed, written, manipulated, created, or stored. The data1320can include, for instance, sensor data, two-dimensional data, three-dimensional, image data, LiDAR data, model parameters, simulation data, data associated with models, trajectory data, data associated with graphs and graph nodes, acceleration profiles, algorithms, cost data, goal data, probability data, or any other data or information described herein. In some implementations, the machine learning computing system1300can obtain data from one or more memory device(s) that are remote from the machine learning computing system1300. The memory1315can also store computer-readable instructions1325that can be executed by the one or more processors1310. The instructions1325can be software written in any suitable programming language or can be implemented in hardware. Additionally, or alternatively, the instructions1325can be executed in logically or virtually separate threads on processor(s)1310. For example, the memory1315can store instructions1325that when executed by the one or more processors1310cause the one or more processors1310(the computing system) to perform any of the operations or functions described herein, including, for example, training a machine-learned trajectory planner model, planning vehicle motion trajectories, etc. 
In some implementations, the machine learning computing system1300includes one or more server computing devices. If the machine learning computing system1300includes multiple server computing devices, such server computing devices can operate according to various computing architectures, including, for example, sequential computing architectures, parallel computing architectures, or some combination thereof. In addition, or alternatively to the model(s)1235at the computing system1200, the machine learning computing system1300can include one or more machine-learned models1335. As examples, the machine-learned models1335can be or can otherwise include various machine-learned models such as, for example, regression networks, generative adversarial networks, neural networks (e.g., deep neural networks), support vector machines, decision trees, ensemble models, k-nearest neighbors models, Bayesian networks, or other types of models including linear models or non-linear models. Example neural networks include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks, or other forms of neural networks. For example, the computing system1200can include one or more models of a trajectory planner510, such as are discussed above with respect toFIGS.5and6. In some implementations, the machine learning computing system1300or the computing system1200can train the machine-learned models1235or1335through use of a model trainer1340. The model trainer1340can train the machine-learned models1235or1335using one or more training or learning algorithms. One example training technique is backwards propagation of errors. In some implementations, the model trainer1340can perform supervised training techniques using a set of labeled training data. In other implementations, the model trainer1340can perform unsupervised training techniques using a set of unlabeled training data. By way of example, the model trainer1340can train the machine-learned trajectory generation model through unsupervised energy minimization training techniques using an objective function (e.g., as described herein). The model trainer1340can perform a number of generalization techniques to improve the generalization capability of the models being trained. Generalization techniques include weight decays, dropouts, or other techniques. The computing system1200and the machine learning computing system1300can each include a communication interface1230and1350, respectively. The communication interfaces1230/1350can be used to communicate with one or more systems or devices, including systems or devices that are remotely located from the computing system1200and the machine learning computing system1300. A communication interface1230/1350can include any circuits, components, software, etc. for communicating with one or more networks (e.g.,1400). In some implementations, a communication interface1230/1350can include, for example, one or more of a communications controller, receiver, transceiver, transmitter, port, conductors, software or hardware for communicating data. The network(s)1400can be any type of network or combination of networks that allows for communication between devices. In some embodiments, the network(s) can include one or more of a local area network, wide area network, the Internet, secure network, cellular network, mesh network, peer-to-peer communication link or some combination thereof and can include any number of wired or wireless links. 
Communication over the network(s)1400can be accomplished, for instance, through a network interface using any type of protocol, protection scheme, encoding, format, packaging, etc. FIG.11illustrates one example system1100that can be used to implement the present disclosure. Other systems can be used as well. For example, in some implementations, the computing system1200can include the model trainer1340and the training data1345. In such implementations, the machine-learned models1335can be both trained and used locally at the computing system1200. As another example, in some implementations, the computing system1200is not connected to other computing systems. In addition, components illustrated or discussed as being included in one of the computing systems1200or1300can instead be included in another of the computing systems1200or1300. Such configurations can be implemented without deviating from the scope of the present disclosure. The use of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components. Computer-implemented operations can be performed on a single component or across multiple components. Computer-implemented tasks or operations can be performed sequentially or in parallel. Data and instructions can be stored in a single memory device or across multiple memory devices. While the present subject matter has been described in detail with respect to specific example embodiments and methods thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing can readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the scope of the present disclosure is by way of example rather than by way of limitation, and the subject disclosure does not preclude inclusion of such modifications, variations or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art. Moreover, terms are described herein using lists of example elements joined by conjunctions such as “and,” “or,” “but,” etc. It should be understood that such conjunctions are provided for explanatory purposes only. Lists joined by a particular conjunction such as “or,” for example, can refer to “at least one of” or “any combination of” example elements listed therein. Also, terms such as “based on” should be understood as “based at least in part on”. | 115,574 |
11858537 | The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way. DETAILED DESCRIPTION OF THE DISCLOSURE Specific structural and functional descriptions of the present disclosure disclosed herein are only for illustrative purposes. The present disclosure may be embodied in many different forms without departing from the spirit and significant characteristics of the present disclosure. Therefore, various forms of the present disclosure are disclosed only for illustrative purposes and should not be construed as limiting the present disclosure. Reference will now be made in detail to various forms of the present disclosure, specific examples of which are illustrated in the accompanying drawings and described below, since the present disclosure can be variously modified in many different forms. While the present disclosure will be described in conjunction with exemplary forms thereof, it is to be understood that the present description is not intended to limit the present disclosure to those exemplary forms. On the contrary, the present disclosure is intended to cover not only the exemplary forms, but also various alternatives, modifications, equivalents and other forms that may be included within the spirit and scope of the present disclosure as defined by the appended claims. It will be understood that, although the terms “first”, “second”, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For instance, a first element discussed below could be termed a second element without departing from the teachings of the present disclosure. Similarly, the second element could also be termed the first element. It will be understood that when an element is referred to as being “coupled” or “connected” to another element, it can be directly coupled or connected to the other element or intervening elements may be present therebetween. In contrast, it should be understood that when an element is referred to as being “directly coupled” or “directly connected” to another element, there are no intervening elements present. Other expressions that explain the relationship between elements, such as “between”, “directly between”, “adjacent to”, or “directly adjacent to”, should be construed in the same way. The terminology used herein is for the purpose of describing particular forms only and is not intended to be limiting. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise”, “include”, “have”, etc. when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components, and/or combinations thereof but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof. Unless otherwise defined, all terms including technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. The controller according to exemplary forms of the present disclosure may be implemented using a non-volatile memory (not shown) configured to store data regarding an algorithm for controlling the operations of a variety of components of a vehicle or software instructions for reproducing the algorithm and a processor (not shown) configured to execute the following operations using the data stored in the memory. The memory and the processor may be implemented as separate chips, respectively. As an alternative, the memory and the processor may be implemented as a single integrated chip. The processor may be one or more processors. Hereinafter, a method of operating a foldable accelerator pedal device in manual driving mode of an autonomous driving vehicle according to exemplary forms of the present disclosure will be described in detail with reference to the accompanying drawings. In an autonomous driving vehicle, a driver may select one mode from manual driving mode in which the driver manually drives a vehicle and autonomous driving mode in which the driver does not drive the vehicle but the vehicle drives to a destination by itself. In addition, as illustrated inFIG.1, the autonomous driving vehicle may be provided with a foldable accelerator pedal device10. The foldable accelerator pedal device10includes a pedal housing11and a pedal pad12rotatably connected to the pedal housing11. The pedal housing11may be disposed to be located in the lower space of the driver's seat inside the cabin, and the pedal pad12is operated by a foot of the driver. In the autonomous driving vehicle provided with the foldable accelerator pedal device10, in the manual driving mode, the driver operates the pedal pad12with his or her foot. In this regard, the pedal pad12must be in a pop-up state, in which the pedal pad12protrudes from the pedal housing11, so as to be exposed toward the driver. In addition, in the autonomous driving mode, the pedal pad12must be in a hiding state in which the pedal pad12is inserted into the pedal housing11without protruding toward the driver in order to provide the driver with a comfortable rest, inhibit erroneous operations, and ensure safety. In this state, the pedal pad12must not protrude toward the driver. In order to realize the pop-up state and the hiding state of the pedal pad12, the foldable accelerator pedal device10may further include an actuator13of the foldable accelerator pedal device. The actuator13serves to forcibly rotate the pedal pad12, and may be a motor. The foldable accelerator pedal device10may also be provided with an accelerator pedal controller14. The operation of the actuator13is controlled by the accelerator pedal controller14and the pedal pad12rotates about the pedal housing11in response to the operation of the actuator13, so that the pedal pad12may be converted to the pop-up state or the hiding state. The accelerator pedal controller14is configured to be able to transmit signals to a vehicle controller30and receive signals therefrom.
The vehicle controller30has information regarding the driving speed limit of a road input by a navigation device and a camera41, driving speed information input by a vehicle speed sensor42, and signal information input by an accelerator position sensor (APS)43. In addition, the vehicle controller30may control the operation of a warning device20. The warning device20may be a warning light, a warning sound generator, a display, or the like. A warning generated by the warning device20may include at least one of a visual warning or an audible warning. One form according to the present disclosure is characterized in that, in a situation in which an autonomous driving vehicle provided with the foldable accelerator pedal device10is being driven in the manual driving mode, when the speed of the autonomous driving vehicle exceeds the driving speed limit of a road, a safety mode may be activated using the actuator13of the foldable accelerator pedal device10provided for the folding function of the pedal pad12, and the driver may be notified of an overspeeding situation by the activation of the safety mode, so that an accident due to the driving at an overspeed may be inhibited. That is, as illustrated inFIGS.1to3, a control method according to a first form of the present disclosure includes: an overspeeding determination step of determining whether or not the driving speed of the autonomous driving vehicle provided with the foldable accelerator pedal device10exceeds a road speed limit in a situation in which the autonomous driving vehicle is in the manual driving mode; and an accident prevention step of, when it is determined in the overspeeding determination step that the driving speed exceeds the road speed limit, operating, by the foldable accelerator pedal device10, the pedal pad12using the actuator13actuated for a pop-up operation and a hiding operation of the pedal pad12and activating the safety mode for sending a warning (or alert) to the driver by the operation of the pedal pad12. The safety mode activated in the accident prevention step includes a haptic mode, a pedal pad protrusion reducing mode in which the protrusion of the pedal pad12is reduced, and a pedal pad hiding mode in which the pedal pad12is hidden. The driving mode of the autonomous driving vehicle may be configured so as to be changed between the autonomous driving mode and the manual driving mode, for example, by the operation of a mode switch. In addition, the change between the autonomous driving mode and the manual driving mode may be forcibly performed under the control of the vehicle controller30in order to inhibit an accident in an emergency or during driving. The haptic mode activated in the accident prevention step may be one of a vibration mode and a tick mode of the pedal pad12. The haptic mode may be activated by repeatedly applying rapid signal having opposite phases to the actuator13under the control of the accelerator pedal controller14so that the pedal pad12repeatedly rotates in a pop-up direction and a hiding direction. When the haptic mode is activated in this manner, the initial progressing direction of the pedal pad12may be determined so that the initial operation of the pedal pad12is in the hiding direction with respect to the popped-up state of the pedal pad12. Consequently, the driver may rapidly sense a haptic warning without a time delay. 
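A minimal sketch of the alternating-phase actuation described above is shown below, with the initial command in the hiding direction so that the driver senses the warning without delay from the popped-up position. The controller interface, signal amplitude, timing, and cycle count are illustrative assumptions rather than the disclosed accelerator pedal controller14.

```python
import time

class PedalActuator:
    """Stand-in for the motor that rotates the pedal pad (assumption: a signed
    command rotates toward hiding for negative values and pop-up for positive)."""
    def command(self, value: float) -> None:
        print(f"actuator command: {value:+.2f}")

def haptic_warning(actuator: PedalActuator, cycles: int = 5,
                   amplitude: float = 0.2, period_s: float = 0.1) -> None:
    """Generate a haptic (vibration-style) warning by repeatedly applying
    rapid commands of opposite phase, starting in the hiding direction."""
    for _ in range(cycles):
        actuator.command(-amplitude)    # initial motion toward hiding
        time.sleep(period_s / 2)
        actuator.command(+amplitude)    # return toward pop-up
        time.sleep(period_s / 2)
    actuator.command(0.0)               # restore the neutral popped-up position

if __name__ == "__main__":
    haptic_warning(PedalActuator(), cycles=2)
```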
When the initial progressing direction of the pedal pad12is determined so that the initial operation of the pedal pad12is performed in the pop-up direction on the basis of the popped-up state of the pedal pad12, the driver may not sense the haptic warning directly through the pedal pad12and sense the haptic warning when the pedal pad12moves in the hiding direction thereafter, thereby leading to a time delay. In addition, in the protrusion reducing mode of the pedal pad12activated in the accident prevention step, the actuator13is actuated under the control of the accelerator pedal controller14so that the pedal pad12is forced to hide at a predetermined angle in the pedal housing11. When the protrusion of the pedal pad12in a normal state is 100% with respect to the popped-up state of the pedal pad12, the protrusion of the pedal pad12in the protrusion reducing mode of the pedal pad12is about 80%. When the protrusion of the pedal pad12is forcibly reduced by a predetermined angle in the pop-up of the pedal pad12, even though the driver fully operates the pedal pad12, the full stroke of the pedal pad12may be adjusted to be about 80% of that of the normal state, thereby forcibly reducing the acceleration and inhibiting an accident that would otherwise be caused by the overspeed. In addition, the hiding mode of the pedal pad12activated in the accident prevention step is a process of fundamentally inhibiting the driver's operation of the pedal pad12. The hiding mode of the pedal pad12may be activated in an emergency in which the possibility of an accident is highest. In the overspeeding determination step, comparing the driving speed and the road speed limit may include a first checking step, a second checking step, and a third checking step continuously performed at time differences on a road having the same speed limit. In the accident prevention step, different types of the safety mode may be activated through the first checking step, the second checking step, and the third checking step. Hereinafter, the control method according to the first form of the present disclosure will be described with reference toFIG.3. In a situation in which an autonomous driving vehicle provided with the foldable accelerator pedal device10is started (step S1), when the autonomous driving vehicle drives in the manual driving mode (step S2), the vehicle controller30checks, a first time, the current driving speed of the vehicle using information regarding the road speed limit obtained by the navigation device and the camera41and information obtained by the vehicle speed sensor42(step S3) and determines whether or not the vehicle is currently in an overspeeding state, i.e. whether or not the current driving speed of the vehicle exceeds the road speed limit (step S4). When it is determined in the step S4that the driving speed of the vehicle exceeds the road speed limit, the vehicle controller30determines whether or not an APS signal is generated by the APS43(step S5), and when it is determined that no APS signal is generated, sends a warning to the driver by controlling only the warning device20to operate (step S6). The situation in which no APS signal is generated is a situation in which the driver does not operate the pedal pad12. That is, even in the case that the driving speed exceeds the road speed limit, when the driver does not operate the pedal pad12, the driving speed of the vehicle gradually decreases to be finally less than the road speed limit. 
In addition, the situation in which no APS signal is generated may be determined to be a situation in which the driver does not operate the pedal pad12in order to reduce the driving speed by recognizing the overspeeding state. Thus, when no APS signal is generated in a situation in which the driving speed exceeds the road speed limit, the controlling process is performed to send a warning to the driver by only operating the warning device20. The warning generated by the warning device20may include at least one of a visual warning using a display and a warning light or an audible warning using sound. However, when it is determined in step S5that the APS signal is generated, the haptic mode is performed (step S7). In the haptic mode, the vehicle controller30sends information regarding this situation to the accelerator pedal controller14, the actuator13is actuated under the control of the accelerator pedal controller14, and the pedal pad12is operated by the actuation of the actuator13. The haptic mode includes the vibration mode and the tick mode of the pedal pad12. The haptic mode may include the vibration mode and the tick mode of the pedal pad12, and may be configured such that one of the vibration mode and the tick mode is activated or the vibration mode and the tick mode are repeatedly activated in a periodic manner. In addition, when the haptic mode is activated, the warning device20may be controlled to operate simultaneously with the haptic mode as required, so that a warning may be sent to the driver in a more reliable manner. When the first checking step is completed, the second checking step of re-checking the driving speed at a time difference is performed (step S8). By the second checking, it is determined again that whether or not the vehicle is in an overspeeding state, i.e. whether or not the driving speed exceeds the road speed limit (step S9). When it is determined in the step S9that the driving speed does not exceed the road speed limit, it is determined that the vehicle is not in the overspeeding state, the actuation of the actuator13is stopped under the control of the accelerator pedal controller14, and the activation of the haptic mode activated after the first checking step is stopped in response to the stopped actuation of the actuator13(step S10). However, when it is determined in the step S9that the driving speed exceeds the road speed limit, it is determined that the vehicle is in a dangerous situation, i.e. the vehicle has been continuously overspeeding after the first checking step, and the protrusion reducing mode of the pedal pad12is activated by actuating the actuator13(step S11). When the protrusion reducing mode of the pedal pad12is activated, the maximum protrusion of the pedal pad12is about 80%, which is reduced by about 20% from the normal state, with respect to the popped-up state of the pedal pad12. When the protrusion of the pedal pad12is forcibly reduced by a predetermined angle in the pop-up of the pedal pad12in this manner, even in the case that the driver fully operates the pedal pad12, the full stroke of the pedal pad12is about 80% of that of the normal state. Consequently, the acceleration of the vehicle may be forcibly reduced, thereby inhibiting an accident that would otherwise be caused by the overspeed. In addition, when the protrusion reducing mode of the pedal pad12is activated, the haptic mode may be simultaneously activated in order to more reliably send a warning to the driver. 
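The effect of the protrusion reducing mode on the available pedal stroke can be sketched as a simple clamping of the driver's input. This is a minimal illustration under stated assumptions: the normalization of the stroke to the range 0 to 1 and the function names are hypothetical; the figure of about 80% of the normal full stroke is taken from the description above.

```python
# Minimal sketch of the protrusion reducing mode: the pedal pad's effective
# stroke is capped at about 80% of the normal full stroke, so even a fully
# depressed pedal requests less acceleration. The clamping approach and the
# names are assumptions; only the ~80% figure comes from the text.

NORMAL_MAX_STROKE = 1.0      # normalized full stroke in the normal state
REDUCED_MAX_STROKE = 0.8     # about 80% of normal in protrusion reducing mode


def effective_stroke(aps_input: float, protrusion_reduced: bool) -> float:
    """Map the driver's pedal input (0..1) to the effective stroke."""
    limit = REDUCED_MAX_STROKE if protrusion_reduced else NORMAL_MAX_STROKE
    return min(max(aps_input, 0.0), limit)


if __name__ == "__main__":
    print(effective_stroke(1.0, protrusion_reduced=False))  # 1.0
    print(effective_stroke(1.0, protrusion_reduced=True))   # 0.8
```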
The protrusion reducing mode of the pedal pad12activated in the second checking step is a process performed in order to more actively inhibit an accident than the haptic mode activated in the first checking step. After the second checking step is completed, the third checking step of re-checking the driving speed at a time difference is performed (step S12). By the third checking, it is determined again that whether or not the driving speed exceeds the road speed limit (step S13). When it is determined in the step S13that the driving speed does not exceed the road speed limit, it is determined that the vehicle is not in the overspeeding state, the actuation of the actuator13is stopped under the control of the accelerator pedal controller14, the activation of the haptic mode activated after the second checking step is stopped in response to the stopped actuation of the actuator13, and the protrusion of the pedal pad12returns to the pop-up position of the normal state in response to the rotation of the pedal pad12caused by the operation of the actuator13(step S14). However, when it is determined in the step S13that the driving speed exceeds the road speed limit, it is determined that the vehicle is in a dangerous situation, i.e. the vehicle has been continuously overspeeding after the second checking step, and the hiding mode of the pedal pad12is forcibly activated by the actuation of the actuator13(step S15). When the hiding mode of the pedal pad12is activated, the pedal pad12is completely retracted into the pedal housing11, thereby fundamentally inhibiting the driver from operating the pedal pad12. Consequently, an accident prevention process stronger than that performed in the protrusion reducing mode of the pedal pad12activated in the second checking step is performed. When the hiding mode of the pedal pad12is performed, the controlling process may be performed to simultaneously perform a visual warning using a display and a warning light and an audible warning using sound in order to more reliably send a warning to the driver. In addition, according to the first form of the present disclosure, after the hiding mode of the pedal pad12is activated by the third checking step, the vehicle may be safely driven to a destination by changing the driving mode from the manual driving mode to the autonomous driving mode under the forced control of the vehicle controller30(step S16). In addition, as illustrated inFIGS.1,2, and4, a control method according to a second form of the present disclosure includes: an overspeeding determination step of determining whether or not the driving speed of the autonomous driving vehicle provided with the foldable accelerator pedal device10exceeds a road speed limit in a situation in which the autonomous driving vehicle is in the manual driving mode; and an accident prevention step of, when it is determined in the overspeeding determination step that the driving speed exceeds the road speed limit, operating, by the foldable accelerator pedal device10, the pedal pad12using the actuator13actuated for a pop-up operation and a hiding operation of the pedal pad12and activating the safety mode for sending a warning to the driver by the operation of the pedal pad12. The safety mode activated in the accident prevention step includes haptic mode and pedal pad hiding mode in which the pedal pad12is hidden. 
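Before the second form is described in more detail, the escalation logic of the first form (FIG. 3, steps S3 to S16) can be condensed into a short control sketch. This is an interpretation for illustration only: the function names, the return values, and the single-function structure are assumptions; the order of decisions (overspeed check, APS check, haptic mode after the first check, protrusion reducing mode after the second, and hiding mode followed by forced autonomous driving after the third) follows the description above.

```python
# Hedged sketch of the first-form escalation logic (FIG. 3). Names are
# hypothetical; the decision order follows steps S3-S16 described above.

def first_form_cycle(speed: float, limit: float, aps_active: bool,
                     check_stage: int) -> tuple[str, int]:
    """Return (action, next_stage) for one checking step.

    check_stage: 1, 2 or 3 for the first, second and third checking steps.
    """
    if speed <= limit:
        # Not overspeeding: stop any active safety mode (steps S10/S14).
        return "cancel_safety_mode", 1
    if check_stage == 1:
        if not aps_active:
            return "warning_device_only", 1      # step S6
        return "haptic_mode", 2                  # step S7
    if check_stage == 2:
        return "protrusion_reducing_mode", 3     # step S11
    # Third consecutive overspeed: hide the pedal pad and hand over control.
    return "hiding_mode_then_autonomous", 3      # steps S15-S16


if __name__ == "__main__":
    stage = 1
    for speed in (130.0, 130.0, 130.0):          # persistent overspeed, limit 100
        action, stage = first_form_cycle(speed, 100.0, aps_active=True,
                                         check_stage=stage)
        print(action)
    # haptic_mode -> protrusion_reducing_mode -> hiding_mode_then_autonomous
```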
The driving mode of the autonomous driving vehicle may be configured so as to be changed between the autonomous driving mode and the manual driving mode, for example, by the operation of a mode switch. In addition, the change between the autonomous driving mode and the manual driving mode may be forcibly performed under the control of the vehicle controller30in order to inhibit an accident in an emergency or during driving. The haptic mode activated in the accident prevention step may be one of a vibration mode and a tick mode of the pedal pad12. The haptic mode may be activated by repeatedly applying rapid signal having opposite phases to the actuator13under the control of the accelerator pedal controller14so that the pedal pad12repeatedly rotates in a pop-up direction and a hiding direction. When the haptic mode is activated in this manner, the initial progressing direction of the pedal pad12may be determined so that the initial operation of the pedal pad12is in the hiding direction with respect to the popped-up state of the pedal pad12. Consequently, the driver may rapidly sense a haptic warning without a time delay. When the initial progressing direction of the pedal pad12is determined so that the initial operation of the pedal pad12is performed in the pop-up direction on the basis of the popped-up state of the pedal pad12, the driver may not sense the haptic warning directly through the pedal pad12and sense the haptic warning when the pedal pad12moves in the hiding direction thereafter, thereby leading to a time delay. In addition, the hiding mode of the pedal pad12activated in the accident prevention step is a process of fundamentally inhibiting the driver's operation of the pedal pad12. The hiding mode of the pedal pad12may be activated in an emergency in which the possibility of an accident is the highest. In the overspeeding determination step, comparing the driving speed and the road speed limit may include a first checking step and a second checking step continuously performed on a road having the same speed limit at a time difference. In the accident prevention step, different types of the safety mode may be activated through the first checking step and the second checking step. Hereinafter, the control method according to the second form of the present disclosure will be described with reference toFIG.4. In a situation in which an autonomous driving vehicle provided with the foldable accelerator pedal device10is started (step S101), when the autonomous driving vehicle drives in the manual driving mode (step S102), the vehicle controller30checks, a first time, the current driving speed of the vehicle using information regarding the road speed limit obtained by the navigation device and the camera41and information obtained by the vehicle speed sensor42(step S103) and determines whether or not the vehicle is currently in an overspeeding state, i.e. whether or not the current driving speed of the vehicle exceeds the road speed limit (step S104). When it is determined in the step S104that the driving speed of the vehicle exceeds the road speed limit, the vehicle controller30determines whether or not an APS signal is generated by the APS43(step S105), and when it is determined that no APS signal is generated, sends a warning to the driver by controlling only the warning device20to operate (step S106). The situation in which no APS signal is generated is a situation in which the driver does not operate the pedal pad12. 
That is, even in the case that the driving speed exceeds the road speed limit, when the driver does not operate the pedal pad12, the driving speed of the vehicle gradually decreases to be finally less than the road speed limit. In addition, the situation in which no APS signal is generated may be determined to be a situation in which the driver does not operate the pedal pad12in order to reduce the driving speed by recognizing the overspeeding state. Thus, when no APS signal is generated in a situation in which the driving speed exceeds the road speed limit, the controlling process is performed to send a warning to the driver by only operating the warning device20. The warning generated by the warning device20may include at least one of a visual warning using a display and a warning light, or an audible warning using sound. However, when it is determined in step S105that the APS signal is generated, the haptic mode is performed (step S107). In the haptic mode, the vehicle controller30sends information regarding this situation to the accelerator pedal controller14, the actuator13is actuated under the control of the accelerator pedal controller14, and the pedal pad12is operated by the actuation of the actuator13. The haptic mode includes the vibration mode and the tick mode of the pedal pad12. The haptic mode may include the vibration mode and the tick mode of the pedal pad12, and may be configured such that one of the vibration mode and the tick mode is activated or the vibration mode and the tick mode are repeatedly activated in a periodic manner. In addition, when the haptic mode is activated, the warning device20may be controlled to operate simultaneously with the haptic mode as required, so that a warning may be sent to the driver in a more reliable manner. When the first checking step is completed, the second checking step of re-checking the driving speed at a time difference is performed (step S108). By the second checking, it is determined again that whether or not the vehicle is in an overspeeding state, i.e. whether or not the driving speed exceeds the road speed limit (step S109). When it is determined in the step S109that the driving speed does not exceed the road speed limit, it is determined that the vehicle is not in the overspeeding state, the actuation of the actuator13is stopped under the control of the accelerator pedal controller14, and the activation of the haptic mode activated after the first checking step is stopped in response to the stopped actuation of the actuator13(step S110). However, when it is determined in the step S109that the driving speed exceeds the road speed limit, it is determined that the vehicle is in a dangerous situation, i.e. the vehicle has been continuously overspeeding after the first checking step, and the hiding mode of the pedal pad12is forcibly activated by the actuation of the actuator13(step S111). When the hiding mode of the pedal pad12is activated, the pedal pad12is completely retracted into the pedal housing11, thereby fundamentally inhibiting the driver from operating the pedal pad12. Consequently, an accident prevention process stronger than that performed in the haptic mode activated in the first checking step is performed. When the hiding mode of the pedal pad12is performed, the controlling process may be performed to simultaneously perform a visual warning using a display and a warning light and an audible warning using sound in order to more reliably send a warning to the driver. 
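The corresponding sketch for the second form (FIG. 4) differs from the first form only in that the second consecutive overspeed determination leads directly to the hiding mode of the pedal pad 12. As before, the names and structure are assumptions made for illustration.

```python
# Hedged sketch of the second-form logic (FIG. 4): as in the first form,
# but the second consecutive overspeed leads directly to the hiding mode
# (step S111). Function and action names are assumptions.

def second_form_cycle(speed: float, limit: float, aps_active: bool,
                      check_stage: int) -> tuple[str, int]:
    if speed <= limit:
        return "cancel_safety_mode", 1           # step S110
    if check_stage == 1:
        if not aps_active:
            return "warning_device_only", 1      # step S106
        return "haptic_mode", 2                  # step S107
    return "hiding_mode", 2                      # step S111
```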
In addition, according to the second form of the present disclosure, after the hiding mode of the pedal pad 12 is activated by the second checking step, the vehicle may be safely driven to a destination by changing the driving mode from the manual driving mode to the autonomous driving mode under the forced control of the vehicle controller 30 (step S112). As set forth above, in the method of controlling the operation of a foldable accelerator pedal device in the manual driving mode of an autonomous driving vehicle according to the present disclosure, in a situation in which an autonomous driving vehicle provided with the foldable accelerator pedal device 10 is being driven in the manual driving mode, when the speed of the autonomous driving vehicle exceeds the driving speed limit of a road, a safety mode is activated using the actuator 13 provided for the folding function of the pedal pad 12 in order to warn the driver of this situation and thus inhibit an accident. In addition, in the method of controlling the operation of a foldable accelerator pedal device in the manual driving mode of an autonomous driving vehicle according to the present disclosure, the safety mode may be activated using the actuator 13 of the foldable accelerator pedal device 10 provided for the folding function of the pedal pad 12, without having to include an additional actuator such as a coin motor, so that the volume, weight, and cost of the foldable accelerator pedal device may be reduced. In addition, in the method of controlling the operation of a foldable accelerator pedal device in the manual driving mode of an autonomous driving vehicle according to the present disclosure, the safety mode includes the haptic mode, the pedal pad protrusion reducing mode (i.e., a mode in which the protrusion of the pedal pad is reduced), and the pedal pad hiding mode, activated by operating the actuator 13 in the situation in which the speed of the autonomous driving vehicle exceeds the driving speed limit of a road, thereby diversifying the safety function and improving the product value of the foldable accelerator pedal device. Although the specific forms of the present disclosure have been described for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the disclosure as disclosed in the accompanying claims. | 29,892
11858538 | DETAILED DESCRIPTION FIG. 1 shows a station 2 of a cable car 1 and a part of the transport route of the cable car 1 with a cable car tower 3 before the station entrance. Vehicles 4 of the cable car 1 are transported by means of a hoisting cable 5, which is turned around in the stations 2 via cable pulleys. One cable pulley 6 is driven by a cable car drive 7, wherein the cable car drive 7 is controlled by a cable car control unit 8. At the station entrance 9, an outer guide rail trumpet 10 may be arranged, into which the detachable grip 11 of the vehicle 4 is to enter in order to guide the vehicle 4 into the station 2. For the present teaching, it is irrelevant whether the vehicles 4 are permanently coupled to the hoisting cable 5, or whether the vehicles 4 can be coupled to the hoisting cable 5 (for example by means of well-known detachable grips). Likewise, it is irrelevant for the present teaching whether persons and/or material are transported by the cable car 1. A number of support cables between the stations, on which the vehicles 4 are moved, may also be provided. On a vehicle 4, a sensor 12 for detecting the deflection of the vehicle 4 from a vertical is arranged. Above all, the deflection α in the direction y transverse to the transport direction x (FIG. 2) is of interest. However, the largest deflection α, which does not necessarily occur in the transverse direction y, could also be detected. Any suitable sensor may be used for this purpose, for example a position sensor or an acceleration sensor. In the case of an acceleration sensor, the values supplied by the sensor 12 at a specific sampling rate are written to a memory on the vehicle 4, for example. With these values, it is then always possible to deduce a current deflection α. With the sensor 12, the deflection α is generally detected and transmitted to the cable car control unit 8. This is preferably done with a wireless communication connection, such as radio. For this purpose, a transmitting device 13 can be provided on the vehicle 4, which transmits the detected deflection α of the vehicle 4 to a receiver 14 in the station 2 or in the area near the station 2. The receiver 14 is connected to the cable car control unit 8 and forwards the received signal, or the information transmitted therein, to the cable car control unit 8. In addition, the occurrence of wind gusts B is detected before the station entrance 9. For this purpose, a wind sensor 15 may be provided before the station entrance 9, for example at the last cable car tower 3 before the station 2. The wind sensor 15 transmits the detected values to the cable car control unit 8 via a suitable communication connection. For this purpose, a wired or wireless communication connection can be provided. For example, in the case of a wireless communication connection, the wind sensor 15 could send its values via radio to the receiver 14 in the station 2. The wind sensor 15 measures either the wind speed vw or wind gusts B directly. A wind gust B is understood to be the temporal change of the wind speed vw. If the wind speed vw is detected, a value for the wind gust B can be obtained by the time derivative B = dvw/dt. This can also be done in the cable car control unit 8. In principle, a wind sensor 15 for detecting the wind speed vw or a value for a wind gust could also be provided on the vehicle 4, in which case subtracting the airstream of the vehicle 4 would be advantageous. In this case, the wind speed vw or the value for the wind gust could also be sent with the transmitting device 13 to the receiver 14 of the station 2 and thus to the cable car control unit 8.
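The relation B = dvw/dt stated above can be illustrated with a short numerical sketch. The finite-difference approximation, the sampling interval, and the function name are assumptions; only the relation between the wind speed vw and the gust value B is taken from the description.

```python
# Sketch of deriving a gust value B from sampled wind speed vw, following
# the relation B = dvw/dt given above. The finite-difference approximation
# and the sampling interval are assumptions.

def gust_from_wind_speed(vw_samples: list[float], dt: float) -> list[float]:
    """Approximate B = dvw/dt from successive wind-speed samples."""
    gusts = []
    for prev, curr in zip(vw_samples, vw_samples[1:]):
        gusts.append((curr - prev) / dt)
    return gusts


if __name__ == "__main__":
    vw = [3.0, 3.2, 6.8, 7.0]                  # m/s, sampled every 0.5 s
    print(gust_from_wind_speed(vw, dt=0.5))    # [0.4, 7.2, 0.4] m/s^2
```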
It can also be provided to detect the wind direction with the wind sensor15. Thus, not only the occurrence of wind gusts B can be detected, but also from which direction R the wind gust B occurs. The direction R of a wind gust B can considerably influence the swinging motion of the vehicle4. For example, a wind gust B, which acts on the vehicle4in the transport direction x along the route, from the front or from behind, may be significantly worse than a wind gust B in the transverse direction y. If a wind gust B hits the vehicle4laterally, the deflection in the transverse direction y is directly given, but the contact surface of, for example, a chair as vehicle4is very small. However, if a wind gust B hits a chair with an open bubble frontally, the contact surface is much larger, which can also lead to a massive deflection in the transport direction x and transverse direction y. The cable car control unit8can now combine the current deflection α and the occurrence of wind gusts B and can control the cable car drive7accordingly. To this end, the direction R of the wind gust B can also be taken into account. It has been found in the operation of a cable car1in practice that in particular the combination of deflection α, for example, due to a one-sided loading of the vehicle4, and the occurrence of wind gusts B in the area of the station entrance9, possibly as a function of the direction R of the wind gust B, is especially dangerous. In this case, the vehicle4does not even have to approach the station entrance9while swinging. However, if the vehicle4swings on the hoisting cable5, then the greatest deflection of the swinging movement could be used as a deflection α. At a certain deflection α and when certain wind gusts B occur, large swinging motions of the vehicle4in the transverse direction y may occur, which may result in the vehicle4touching a stationary component of the station2at the station entrance9or even missing the outer guide rail trumpet10. Both can lead to severe accidents and damage to the cable car1and/or the vehicle4. By the inventive combination of the deflection α with the detection of the occurrence of wind gusts B the latter can be effectively prevented. It makes sense to specify an allowable deflection αmaxand an allowable maximum wind gust Bmax(FIG.2). If both permissible values are exceeded before the station entrance9, then for example the cable car drive7can be controlled by the cable car control unit8in order to reduce the driving speed or to stop the cable car1. Of course, several thresholds could be defined for the deflection α and/or for the wind gust B. Thus, the current state of the vehicle4and the wind at the station entrance9can be classified from being less critical to critical. In the case of being less critical, for example, the driving speed is reduced (also possible in several stages) and, in the case of critical conditions, the cable car1is stopped. The cable car control unit8could of course weight the deflection α and the wind gust B differently, for example in order to take account of special conditions or the design of a cable car1. The latter, or fixed thresholds, could also be changed in the operation of the cable car1, in order to consider information acquired during the operation of the cable car. In addition, the direction R of the wind gust B can also be detected and taken into account in the cable car control unit8during the control of the cable car drive7. 
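One possible, purely illustrative way of combining the deflection α, the wind gust B, and the direction R in the cable car control unit 8 is sketched below. The numerical thresholds, the direction weighting, and the two-stage classification into a less critical state (reduced driving speed) and a critical state (stopping the cable car) are placeholders chosen for the sketch, not values disclosed here.

```python
import math

# Hedged sketch of the decision logic described above: the control unit 8
# combines the current deflection alpha, the gust B and optionally its
# direction R against permissible values alpha_max and B_max. All numeric
# values and the weighting function are placeholders.

ALPHA_MAX = 8.0          # deg, permissible deflection (placeholder)
B_MAX = 4.0              # m/s^2, permissible gust (placeholder)


def direction_weight(direction_deg: float) -> float:
    """Weight gusts along the travel direction x more heavily (assumption)."""
    # 1.0 for a purely lateral gust, up to 2.0 for a head-on or tail gust.
    return 1.0 + abs(math.cos(math.radians(direction_deg)))


def station_entry_decision(alpha: float, gust: float,
                           direction_deg: float) -> str:
    weighted_gust = gust * direction_weight(direction_deg)
    if alpha > ALPHA_MAX and weighted_gust > B_MAX:
        return "stop_cable_car"          # critical: both limits exceeded
    if alpha > 0.7 * ALPHA_MAX or weighted_gust > 0.7 * B_MAX:
        return "reduce_driving_speed"    # less critical
    return "continue"


if __name__ == "__main__":
    print(station_entry_decision(alpha=9.0, gust=5.0, direction_deg=0.0))
    # stop_cable_car
```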
For example, for different directions R, or ranges of directions R, different thresholds for the deflection α and/or the wind gust B could be deposited. But it could also be provided to take into account only wind gusts B from a certain direction R, or from a range of directions R. For example, only wind gusts B in the direction of travel x or only wind gusts B in a range of directions RB around the direction of travel x could be taken into account, as shown inFIG.3. In the cable car control unit8the wind gust B could also be weighted differently depending on the direction R, so that critical directions R of wind gusts B are more critical than others. Which wind gusts B with which direction R are taken into account in which manner in the cable car control unit8can, of course, be defined and can depend on the cable car type, on the surroundings of the cable car1, on the operating parameters of the cable car, etc. Of course, this can also be changed during operation of the cable car1. The detection of the wind speed vwor the wind gust B, and optionally the direction R, and the deflection a preferably takes place in such a distance before the station entrance9, that the vehicle4may still be safely braked before the station2. On the other hand, the detection should not take place at too large a distance before the station entrance9, because in this case the detected values would no longer have any relevance for the situation of the station entrance9. Which distance is appropriate, of course, depends on the respective cable car1. In most cases, the detection will have to be aimed for in a range smaller than 80 m before the station entrance9. The values of the deflection α and the detection of the wind gust B, and optionally the direction R, are therefore preferably detected at least by the braking distance BW of the vehicle4before the station2(FIG.1). The braking distance BW of the vehicle4is usually known. In conventional cable cars1with maximum travel speeds of typically 7 m/s, the braking distance BW is approximately 25-40 m in the event of an emergency stop. Often there is a cable car tower3in this area before the station2. Thus, the detection of the wind speed vwor the wind gust B, and optionally the direction R, could occur at a cable car tower3before the station2. The receiver14is therefore preferably arranged such that the transmission range of the transmitting unit13is sufficient to be able to receive the deflection α from a sufficiently large distance. Preferably, the receiver14is arranged inside the station2, but could also be arranged in the area of the station2before the station entrance9. For example, the receiver14could also be arranged on a cable car tower3before the station2and be connected to the cable car control unit8via a corresponding communication line. Especially advantageous for the transmission of information from the vehicle4to the station2is the use of radio transponders RF as a transmitting device13on the vehicle4, such as RFID (Radio Frequency Identification) transponder (often called RFID tag), as is explained by means ofFIG.4. A memory unit33is provided in the radio transponder RF on the vehicle4, in which, for example, values for the deflection α and optionally also values for the wind speed vwor for wind gusts B, and optionally the direction R, can be stored. 
The sensor12for detecting the deflection α could store its values, for example, in the memory unit33of the radio transponder, and a wind sensor15could do this as well, if the latter is provided on the vehicle4. Such a radio transponder RF can have a very small size and can therefore be used very flexibly. In the effective range of a transmitting antenna31, which transmits a polling signal34, the radio transponder RF responds with a response signal35, which comprises the deflection α, and optionally also a value of the wind gusts B and possibly the direction R. The response signal35is received by the transmitting antenna31and forwarded to a reader30which decodes the required values from the response signal35. The reader30is connected to the cable car control unit8and can send the obtained values to the cable car control unit8. A plurality of transmitting antennas31can be connected to a reader30, as indicated inFIG.4. The receiver14in the station2could therefore be designed as a reader30with a transmitting antenna31. The transmitting antenna31would have to be designed in such a way that the polling signal34is transmitted as far from the station2to the route that information from the vehicle4can be obtained as early as possible. The supply of a vehicle4with electrical energy is cumbersome in practice, because usually an energy storage device must be provided on the vehicle4and the energy storage must be charged, for example, during the travel through the station. Therefore, it is often desirable in a cable car1not to use an electric power supply on the vehicles4. This contradicts of course the requirement to detect the deflection α of the vehicle4and to transmit the latter to the cable car control unit8. In a particularly advantageous embodiment, therefore, a passive radio transponder is used on the vehicle4, for example a passive RFID transponder, because no power supply of the radio transponder RF on the vehicle4is necessary in this case. A passive radio transponder is active only in the effective range of a transmitting antenna31of a reader30spanning an electromagnetic field, since the passive radio transponder RF acquires the electrical energy to operate from the electromagnetic signal emitted by the transmitting antenna31, which is received with a receiving antenna32in the radio transponder RF. Thus, the sensor12, and possibly also a wind sensor15, on the vehicle4could receive the required electrical energy from the passive radio transponder RF. When the vehicle4approaches the station2, the passive radio transponder RF at the vehicle4reaches the effective range of the transmitting antenna31, whereby the power supply is enabled. Then, the sensor12, and possibly also a wind sensor15, are read and the detected value of the deflection α, and possibly the occurrence of a wind gust B and a direction R, are sent with the response signal35to the reader30. There are also radio transponders RF with a sensor input, so that a sensor12, and possibly also a wind sensor15, can also be connected directly to the radio transponder RF in order to be read out directly via the radio transponder RF. Of course, other information could also be stored in the memory unit33of the radio transponder RF. For example, a unique vehicle identifier FID could be stored in each vehicle4in the storage unit33, which could also be transmitted to the cable car control unit8. 
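Purely as an illustration of the information carried in the response signal 35, the following sketch encodes and decodes a payload containing a vehicle identifier FID, a deflection value α, and a gust value. The byte layout is entirely hypothetical; the description only states that such values can be stored in the memory unit 33 and transmitted to the reader 30.

```python
import struct

# Purely illustrative payload handling for a response signal 35. The byte
# layout (FID as 4 bytes, deflection and gust as 32-bit floats) is an
# assumption made for this sketch.

RESPONSE_FORMAT = ">4s f f"      # FID, deflection alpha [deg], gust B [m/s^2]


def encode_response(fid: bytes, alpha: float, gust: float) -> bytes:
    return struct.pack(RESPONSE_FORMAT, fid, alpha, gust)


def decode_response(payload: bytes) -> dict:
    fid, alpha, gust = struct.unpack(RESPONSE_FORMAT, payload)
    return {"fid": fid.decode(), "alpha": alpha, "gust": gust}


if __name__ == "__main__":
    payload = encode_response(b"V042", alpha=6.5, gust=2.1)
    print(decode_response(payload))
```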
If the effective range of the transmitting antenna31does not reach far enough to poll the information required by the vehicle with a radio transponder RF before the braking distance BW, it could also be provided to arrange the reader30with a transmitting antenna31outside the station2, for example, at the last cable car tower3before the station2. The reader30may be connected to the cable car control unit8or the receiver14in the station2(wireless or wired) to transmit the values of the deflection α, and possibly also the wind gust B and a direction R. In addition, values of the deflection a along the route between the stations2could also be collected with a radio transponder RF. If a power supply on the vehicle4were present, for example, the sensor12could be read out at a predetermined sampling rate and stored in the memory unit33. In the area of the station2, the memory unit33can then be read out and the stored values can be analyzed by the cable car control unit8. From this, the cable car control unit8can obtain important information about the conditions present along the route, which can also be used to control the cable car drive7. When using a passive radio transponder RF, a reader30could be arranged at least on certain cable car towers along the route, whereby the sensor12, and preferably also the vehicle identifier FID, can be read in the area of the cable car tower. The thus detected sensor value can be stored in the memory unit33and/or can be transmitted from the cable car tower to the cable car control unit8. In the station2, the storage unit33could then be read out with a reader30. In this case, an electric power supply is required on the cable car tower and possibly also a data connection to the cable car control unit8. The communication path between the vehicle4and the cable car control8, i.e. for example, the cable car control unit8, the reader30, the transmitting antenna31, the radio transponder RF, can of course also be designed to be functionally failsafe, for example, according to a required safety integrity level (SIL) in order to ensure safe communication in the sense of functional safety (i.e., in the sense that an error is detected immediately and the system then preferably switches to a safe state). For this purpose, well-known mechanisms, such as a multi-channel hardware, redundancy in the data, error detection and error correction methods in the data transmission, etc., can be provided. For example, a timestamp could be added to each signal34,35or to the data carried therein. If the time bases of the reader30and the cable car control unit8are synchronized, a deviation of the time stamp to the synchronized control time can be detected and could, e.g., result in a shutdown of the cable car1. It may further be provided that the memory unit33of the radio transponder RF must be read several times within a predetermined period of time in order to verify the transmitted data. The data transmitted in the response signal35could be protected by redundant data, for example, by a CRC (cyclic redundancy code). Of course, further measures to ensure the functional safety are also conceivable. | 16,921 |
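The functional-safety measures mentioned above (a timestamp deviation check against the synchronized control time, redundant data such as a CRC, and repeated reads within a predetermined period) can be sketched as follows. The tolerance values, the use of CRC-32, and the function names are assumptions made for illustration.

```python
import zlib

# Hedged sketch of the functional-safety checks mentioned above: a timestamp
# deviation check against the synchronized control time, a redundancy check
# (CRC-32 via zlib as a stand-in for the CRC mentioned in the text), and a
# requirement for several consistent reads within a time window.

MAX_TIME_DEVIATION_S = 0.5
REQUIRED_CONSISTENT_READS = 2


def crc_ok(data: bytes, received_crc: int) -> bool:
    return zlib.crc32(data) == received_crc


def timestamp_ok(msg_time: float, control_time: float) -> bool:
    return abs(msg_time - control_time) <= MAX_TIME_DEVIATION_S


def reads_consistent(reads: list[bytes]) -> bool:
    return len(reads) >= REQUIRED_CONSISTENT_READS and len(set(reads)) == 1


def accept_transponder_data(data: bytes, received_crc: int,
                            msg_time: float, control_time: float,
                            reads: list[bytes]) -> bool:
    """Return True only if all checks pass; otherwise the cable car control
    unit would switch to a safe state (e.g. shut down the cable car)."""
    return (crc_ok(data, received_crc)
            and timestamp_ok(msg_time, control_time)
            and reads_consistent(reads))


if __name__ == "__main__":
    data = b"alpha=6.5;B=2.1;FID=V042"
    crc = zlib.crc32(data)
    print(accept_transponder_data(data, crc, msg_time=100.1,
                                  control_time=100.0, reads=[data, data]))
    # True
```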
11858539 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS With reference to the accompanying drawings, the following describes preferred embodiments of the present invention in detail. In the description of the drawings, identical elements and features will be denoted by identical reference signs and redundant explanations will be omitted. An overhead transport vehicle (transport vehicle)1illustrated inFIG.1travels along a traveling rail2provided at a position higher than the floor surface such as the ceiling and the like of a cleanroom. The overhead transport vehicle1transports a FOUP (front-opening unified pod) (transported object)90as an article, between a storage facility and a predetermined load port, for example. The FOUP90has a box-shaped housing91having an opening, and a lid93covering the opening. The lid93is detachably provided with respect to the housing91. In the FOUP90, a plurality of semiconductor wafers and the like are accommodated, for example. The FOUP90includes a flange95that is held by the overhead transport vehicle1. In the following description, for the sake of convenience of description, the left-and-right direction (X-axis direction) inFIG.1is defined as the front-and-rear direction (traveling direction) of the overhead transport vehicle1. The up-and-down direction inFIG.1is defined as the vertical direction (Z-axis direction) of the overhead transport vehicle1. The depth direction inFIG.1is defined as the width direction (Y-axis direction) of the overhead transport vehicle1. The X-axis, Y-axis, and Z-axis are orthogonal to one another. As illustrated inFIG.1, the overhead transport vehicle1includes a traveling drive unit3, a horizontal drive unit5, a rotation drive unit6, an elevating drive unit7, an elevating device10, a holding device11, first fall prevention portions20, second fall prevention portions (third link portions)40, swing suppression units70, rotation mechanisms50of the first fall prevention portions20(seeFIG.2andFIG.3), moving mechanisms of the second fall prevention portions40and the swing suppression units70(seeFIG.4andFIG.5), and buffer mechanisms80. The overhead transport vehicle1is provided with a pair of frames8and8so as to cover the horizontal drive unit5, the rotation drive unit6, the elevating drive unit7, the elevating device10, and the holding device11from front and rear in the traveling direction. The pair of frames8and8define a space in which the FOUP90is accommodated below the holding device11in a state where the elevating device10is raised to an elevating end. The traveling drive unit3moves the overhead transport vehicle1along the traveling rail2. The traveling drive unit3is arranged in the traveling rail2. The traveling drive unit3drives a roller (not depicted) that travels on the traveling rail2. In the lower portion of the traveling drive unit3, the horizontal drive unit5is coupled to via a shaft3A. The horizontal drive unit5moves, in the horizontal plane, the rotation drive unit6, the elevating drive unit7, and the elevating device in the direction (width direction) that is orthogonal to an extending direction of the traveling rail2. The rotation drive unit6rotates, in the horizontal plane, the elevating drive unit7and the elevating device10. The elevating drive unit7raises and lowers the elevating device10by winding and unwinding four belts9. The traveling drive unit3may be configured by a linear motor and the like that generates propulsion on the overhead transport vehicle1. 
The belts9in the elevating drive unit7may use hanging members such as wires, ropes, and the like, as appropriate. The elevating device10in the present preferred embodiment is capable of being raised and lowered by the elevating drive unit7, and defines and functions as an elevating platform in the overhead transport vehicle1. The holding device11holds the FOUP90. The holding device11includes a pair of L-shaped arms12and12, hands13and13fixed to each of the arms12and12, and an opening and closing mechanism15that opens and closes the pair of arms12and12. The pair of arms12and12is coupled to the opening and closing mechanism15. The opening and closing mechanism15moves the pair of arms12and12in a direction to approach each other and a direction to separate from each other. By the operation of the opening and closing mechanism15, the pair of arms12and12advances and retracts along the X-axis direction. As a result, the pair of hands13and13fixed to the arms12and12opens and closes. In the present preferred embodiment, the height position of the holding device11(elevating device10) is adjusted so that, when the pair of hands13and13is in an opened state, the holding surfaces of the hands13are lower than the height of the lower surface of the flange95. Then, as the pair of hands13and13are in a closed state in this situation, the holding surfaces of the hands13and13advance below the lower surface of the flange95, and by raising the elevating device10in this situation, the flange95is held by the pair of hands13and13and the FOUP90is supported. As illustrated inFIG.2andFIG.3, as the first fall prevention portion20is arranged in front of the lid93, the lid93is prevented from falling from the FOUP90held by the holding device11(seeFIG.1). The first fall prevention portion20has a rotation axis in the Z-axis direction and is rotatable along a front surface90aand a side surface90bof the FOUP90between an advanced position P1 arranged in front of the lid93and a retracted position P2 arranged at a position retracted from the front of the lid93. The first fall prevention portion20is a plate-shaped member extending to a distal end portion20bwith a rotation axis20aas a base point. The first fall prevention portion20is made of a material such as stainless steel. A portion of the first fall prevention portion20is parallel to the lid93when the first fall prevention portion20is located in the advanced position P1. The rotation mechanism50of the first fall prevention portion20is accommodated in each of the pair of frames8and8. The rotation mechanism50includes a drive unit51, a drive shaft52, a first gear portion53, a second gear portion54, a link portion55, a link portion56, a third gear portion57, and an arm portion58. The rotation mechanism50converts the rotational motion of the drive unit51into a linear motion of the first fall prevention portion20(reciprocating motion between the advanced position and the retracted position). The second fall prevention portion40is arranged on the lower surface of the FOUP90, so that the FOUP90itself held by the holding device11(seeFIG.1) is prevented from falling and the lid93is prevented from falling from the FOUP90. The second fall prevention portion40is a plate-shaped member extending in the Y-axis direction and is made of a material such as stainless steel. 
As illustrated inFIG.2toFIG.5, the second fall prevention portion40is movable between an advanced position P3 arranged below the lid93, and a retracted position P4 arranged in a position retracted from below the lid93, that is, a position accommodated in the area of the frame8when viewed from above in the Z-axis direction. The movement of the second fall prevention portion40to the advanced position P3 and the retracted position P4 interlocks with the movement of the first fall prevention portion20to the advanced position P1 and the retracted position P2. That is, if the first fall prevention portion20advances to the advanced position P1, the second fall prevention portion40also advances to the advanced position P3, and if the first fall prevention portion20retracts to the retracted position P2, the second fall prevention portion40also retracts to the retracted position P4. The swing suppression unit70contacts and supports the side surface90bof the FOUP90, and suppresses the swing of the FOUP90held by the holding device11in the X-axis direction of the overhead transport vehicle1during traveling. As illustrated inFIG.4andFIG.5, the swing suppression unit70includes two rollers70A and70A contacting and supporting the FOUP90(seeFIG.1). The swing suppression unit70is movable between an advanced position (first position) P5 contacting with the side surface90bof the FOUP90and a retracted position (second position) P6 at a position away from the side surface90bof the FOUP90. The movement of the swing suppression unit70to the advanced position P5 and the retracted position P6 interlocks with the movement of the first fall prevention portion20to the advanced position P1 and the retracted position P2. That is, if the first fall prevention portion20advances to the advanced position P1, the swing suppression unit70also advances to the advanced position P5, and if the first fall prevention portion20retracts to the retracted position P2, the swing suppression unit70also retracts to the retracted position P6. Furthermore, the movement of the swing suppression unit70to the advanced position P5 and the retracted position P6 interlocks with the movement of the second fall prevention portion40to the advanced position P3 and the retracted position P4. That is, if the second fall prevention portion40advances to the advanced position P3, the swing suppression unit70also advances to the advanced position P5, and if the second fall prevention portion40retracts to the retracted position P4, the swing suppression unit70also retracts to the retracted position P6. The following describes the second fall prevention portion40, and the interlocking operation between the swing suppression unit70and the second fall prevention portion40. As illustrated inFIG.2toFIG.5, the moving mechanism of the second fall prevention portion40and the swing suppression unit70is accommodated in each of the pair of frames8and8(seeFIG.1), and includes a rotation shaft (drive shaft)59, a crank portion61, a supporting portion63, a connecting portion65, a first link portion62, and a second link portion66. The rotation shaft59integrally rotates with the third gear portion57that rotates in a series of advancing operations in the first fall prevention portion20(seeFIG.2andFIG.4). That is, the rotation shaft59rotates by the drive of the drive unit51. The drive unit51includes a stepping motor, for example. The crank portion61is fixed to the rotation shaft59and rotates bidirectionally by the rotation of the rotation shaft59. 
The supporting portion63is rotatable bidirectionally around a rotation shaft63aand supports the swing suppression unit70. The connecting portion65is coupled at one end to the crank portion61so as to be rotatable bidirectionally at a position (coupling portion65a) that is eccentric to the rotation shaft59and is coupled at the other end to a position (coupling portion65b) that is eccentric to the rotation shaft63aof the supporting portion63so as to be rotatable bidirectionally. The first link portion62is fixed to the rotation shaft59and rotates bidirectionally by the rotation of the rotation shaft59. The second link portion66is rotatable bidirectionally around the rotation shaft63aas a center. The second link portion66rotates bidirectionally around the rotation shaft63aas a center, by the movement of the connecting portion65that operates simultaneously with the rotation of the rotation shaft59. The second fall prevention portion40is fixed to both of the first link portion62and the second link portion66so as to be rotatable bidirectionally. As illustrated inFIG.4toFIG.6A, in the connecting portion65, a buffer mechanism80is provided. The buffer mechanism80allows the swing suppression unit70, which has been moved to the advanced position P5 by the drive of the drive unit51, to move to the X-axis direction after stopping the drive unit51. The buffer mechanism80includes a first coupling portion (operation portion)81, a rod-shaped portion (operation portion)82, a second coupling portion83, a first bias portion84, and a second bias portion85. The first coupling portion81and the rod-shaped portion82perform a movement of an operation amount in accordance with the amount of movement of the swing suppression unit70to the retracted position P6 direction. Specifically, the first coupling portion81and the rod-shaped portion82move (operate) to the second coupling portion83side along the extending direction of the rod-shaped portion82in accordance with the amount of movement of the swing suppression unit70to the retracted position P6 direction. In more detail, the first coupling portion81and the rod-shaped portion82move farther along the extending direction of the rod-shaped portion82to the second coupling portion83side, as the amount of movement of the swing suppression unit70to the retracted position P6 direction increases. The first coupling portion81is a rod-shaped member rotatively coupled to the supporting portion63. The rod-shaped portion82extends in one direction D. One end82aof the rod-shaped portion82is coupled to the first coupling portion81, and on the other end82bof the rod-shaped portion82, the second bias portion85is attached. The second coupling portion83includes a first body portion83athat is coupled to the crank portion61, a second body portion83bon which an insertion hole83cthrough which the rod-shaped portion82is inserted is provided and that is spaced at an interval from the first body portion83a, and a coupling portion83dthat couples the first body portion83aand the second body portion83b. The first bias portion84is able to come in contact with both of the first coupling portion81and the second body portion83bof the second coupling portion83. The first bias portion84biases the first coupling portion81so that the swing suppression unit70presses the FOUP90. In more detail, the first bias portion biases the first coupling portion81so that the swing suppression unit70moves in the direction of pressing the FOUP90. The first bias portion84is a spring. 
The second bias portion85, when the operation amount of the first coupling portion81and the rod-shaped portion82(the amount of movement of the first coupling portion81and the rod-shaped portion82to the second coupling portion83side in the extending direction of the rod-shaped portion82) is greater than or equal to a predetermined amount, contacts (acts on) the rod-shaped portion82, and biases the first coupling portion81and the rod-shaped portion82so that the swing suppression unit70moves in the direction of pressing the FOUP90. In more detail, the second bias portion85contacts the rod-shaped portion82and biases the first coupling portion81and the rod-shaped portion82so that the swing suppression unit70presses the FOUP90, only when the operation amount of the first coupling portion81and the rod-shaped portion82is greater than or equal to the predetermined amount. The second bias portion85has viscoelasticity and has an elastic modulus greater than that of the first bias portion84. The second bias portion85is a rubber body made of urethane rubber, for example. The second bias portion85is fixed to the other end82bof the rod-shaped portion82, and the second bias portion85moves to the first body portion83adirection between the first body portion83aand the second body portion83bsimultaneously with the movement of the first coupling portion81and the rod-shaped portion82to the second coupling portion83side. The rod-shaped portion82and the first body portion83aare able to come in contact with each other via the second bias portion85when the operation amount of the rod-shaped portion82reaches the predetermined amount (seeFIGS.6B and6C). Then, the second bias portion85is compressed by the rod-shaped portion82and the first body portion83a, when the operation amount of the rod-shaped portion82is greater than the predetermined amount. That is, the second bias portion85acts on the rod-shaped portion82when the operation amount of the rod-shaped portion82is greater than the predetermined amount, and biases the rod-shaped portion82so that the swing suppression unit70presses the FOUP90. Next, an advancing-and-retracting operation in the second fall prevention portion40and the swing suppression unit70will be described. When the third gear portion57(seeFIG.2andFIG.3) rotates as the drive unit51(seeFIG.2andFIG.3) is driven, as illustrated inFIG.4andFIG.5, the crank portion rotates in an arrow direction all in conjunction with this rotation. When the crank portion61rotates, due to the action of the connecting portion65, the supporting portion63rotates in an arrow direction a12 with the rotation shaft63aas a base point. As a result, the swing suppression unit70advances to the advanced position P5. When the swing suppression unit70is located in the advanced position P5, the rotation shaft59, the coupling portion65abetween the crank portion61and the connecting portion65, and the coupling portion65bbetween the connecting portion65and the supporting portion63are located on one straight line in the one direction (an extending direction of the connecting portion65) D (the crank portion61and the connecting portion65are located in a state of what is called a dead center). 
When the crank portion61and the connecting portion65are located in a state of what is called a dead center, as illustrated inFIG.3, a rotation shaft54aof the second gear portion54, a coupling portion55athat is located at a position eccentric to the rotation shaft54aand is between the second gear portion54and the link portion55, and a coupling portion55bbetween the link portion55and the link portion56are located on one straight line in one direction (an extending direction of the link portion55). At the same time the above-described crank portion61rotates, the first link portion62rotates in an arrow direction a21 with the rotation shaft59as a base point. Similarly, at the same time the supporting portion63rotates, the second link portion66rotates in an arrow direction a22 with the rotation shaft63aas a base point. As a result, the second fall prevention portion40coupled to the first link portion62and the second link portion66advances to the advanced position P3. By the above-described series of operations, the second fall prevention portion40rotatively coupled to the first link portion62and the second link portion66advances from the retracted position P4 to the advanced position P3, and the swing suppression unit70rotatively coupled to the supporting portion63advances from the retracted position P6 to the advanced position P5. The movement of the second fall prevention portion40from the advanced position P3 to the retracted position P4 and the movement of the swing suppression unit70from the advanced position P5 to the retracted position P6 are performed by the operations in the reverse direction of the above-described series of operations. Next, an operation of the buffer mechanism80will be described. As illustrated inFIG.7A, when the second fall prevention portion40and the swing suppression unit70are located in the retracted position P4 and the retracted position P6, respectively, the second bias portion85is, as illustrated inFIG.6A, not in contact with the first body portion83aand is spaced away by a distance G0. Then, as illustrated inFIG.7B, from the time point when the swing suppression unit70comes into contact with the FOUP90, the compression of the first bias portion84is started. As illustrated inFIG.8A, when the second fall prevention portion40and the swing suppression unit70advance to the advanced position P3 and the advanced position P5, respectively, the second bias portion85is, as illustrated inFIG.6B, not in contact with the first body portion83aand is spaced away by a distance G1. Under this situation, the swing suppression units70come into contact with the side surfaces90bof the FOUP90from front and rear in the X-axis direction and suppress the swing of the FOUP90to the X-axis direction. At this time, the first bias portion84biases the first coupling portion81so that the swing suppression unit70presses the FOUP90and the second bias portion85does not bias the rod-shaped portion82. After moving to the advanced position P5 by the drive of the drive unit51and stopping the drive of the drive unit51, as illustrated inFIG.8B, when the FOUP90swings in the X-axis direction by the swing amount M1 due to the traveling and the like of the overhead transport vehicle1, as illustrated inFIG.6C, the second bias portion85comes into contact with the first body portion83a. 
At this time also, similarly to the situation ofFIG.6B, the first bias portion84biases the first coupling portion81so that the swing suppression unit70presses the FOUP90and the second bias portion85does not bias the rod-shaped portion82. Furthermore, when the FOUP90swings in the X-axis direction by the swing amount M2 (M2>M1), as illustrated inFIG.6D, the second bias portion85is compressed by the rod-shaped portion82and the first body portion83a. This causes the second bias portion85to act on the rod-shaped portion82and to bias the rod-shaped portion82so that the swing suppression unit70presses the FOUP90. At this time also, the first bias portion84biases the first coupling portion81so that the swing suppression unit70presses the FOUP90. That is, in the buffer mechanism80of the above-described preferred embodiment, when a swing greater than the predetermined swing amount M1 occurs on the FOUP90and the swing suppression unit70is pushed down from the advanced position P5 to the retracted position P6 direction, the rod-shaped portion82is pushed down to the first body portion83aside in the one direction D in accordance with the pushed-down amount. Then, the second bias portion85compressed by the rod-shaped portion82and the first body portion83aacts on the rod-shaped portion82, and biases the rod-shaped portion82so that the swing suppression unit70presses the FOUP90. In other words, the second bias portion85has an elastic force that resists the force of the rod-shaped portion82trying to move to the first body portion83a. In the overhead transport vehicle1of the above-described preferred embodiment, as illustrated inFIG.9, after stopping the drive unit51, when a swing occurs on the FOUP90and the swing suppression unit70moves in the X-axis direction, if the swing amount of the FOUP90is within the predetermined range (swing amount M1), only the first bias portion84biases the swing suppression unit70at the force F1 via the first coupling portion81so as to press the FOUP90. Meanwhile, as illustrated inFIG.10, if the swing of the FOUP90exceeds the predetermined range (swing amount M1), the second bias portion85is compressed and acts on the rod-shaped portion82, and biases the swing suppression unit70with the combined force of the force F1 and the force F2 via the rod-shaped portion82so as to press the FOUP90. As a result, as soon as the swing of the FOUP90exceeds the predetermined range, the biasing force acting on the FOUP90is instantaneously switched from the force F1 to the combined force of the force F1 and the force F2. When the swing amount exceeds the predetermined range, one swing suppression unit70that is pushed out by the FOUP90is biased with relatively large biasing force, and the other swing suppression unit70is biased with relatively small biasing force. As a result, a situation in which the swing of the FOUP90continues due to both swing suppression units70and70pushing each other with the same biasing force is able to be suppressed. That is, as compared with the case of being supported with the same magnitude of biasing force from front and rear in the X-axis direction, the swing of the FOUP90in the X-axis direction is able to be effectively suppressed. Furthermore, in the present preferred embodiment, when suppressing the FOUP90from front and rear in the X-axis direction by the swing suppression units, as compared with the case of simply increasing the biasing force (elastic force), there are following advantages. 
That is, when the biasing force is increased, because the force at the time of contact with the FOUP90increases and also the force received from the swinging FOUP90increases, there is a need to increase the capacity of the drive unit so as to resist the force thereof. When the capacity of the drive unit increases, the weight of the overhead transport vehicle1increases, and also the size increases. In the overhead transport vehicle1of the present preferred embodiment, because the force at the time of contact with the FOUP90is relatively small and also the force received from the FOUP90is small when the swing amount is within the range of M1, an increase in weight of the overhead transport vehicle1and an increase in size is able to be reduced or prevented. As illustrated inFIG.4andFIG.5, in the overhead transport vehicle1of the above-described preferred embodiment, because the buffer mechanism80is provided in the connecting portion65, the buffer mechanism80can be compactly mounted. Moreover, as illustrated inFIG.5, in the overhead transport vehicle1of the above-described preferred embodiment, when the swing suppression unit70is located in the advanced position P5, the crank portion61and the connecting portion65are located in what is called a dead center (the rotation shaft59, the coupling portion65abetween the crank portion61and the connecting portion65, and the coupling portion65bbetween the connecting portion65and the supporting portion63are located on one straight line in the one direction (the extending direction of the connecting portion65) D), and an external force exerted on the swing suppression unit70after stopping the drive unit51is difficult to be transmitted to the drive unit51. As a result, without using the drive unit51with large output, it is possible to resist the force transmitted to the swing suppression unit70. In the overhead transport vehicle1of the above-described preferred embodiment, because the second fall prevention portion40that prevents the fall of the FOUP90can be driven by utilizing the power of the drive unit51that is a drive source of the swing suppression unit70, as compared with providing a drive unit for each drive mechanism, an overall size of the device is able to be made compact. In the overhead transport vehicle1of the above-described preferred embodiment, because the elastic modulus of the second bias portion85is greater than the elastic modulus of the first bias portion84, the swing of the FOUP90in the X-axis direction is able to be effectively eliminated. Moreover, in the overhead transport vehicle1of the above-described preferred embodiment, because the second bias portion85is made of a rubber material having viscoelasticity, the swing of the FOUP90can be attenuated and the swing of the FOUP90in the X-axis direction can be eliminated more effectively. Preferred embodiments of the present invention have been described above. However, the present invention is not limited to the above-described preferred embodiments, and various modifications are possible within a scope not departing from the spirit of the present invention. In the above-described preferred embodiments, an example in which the first fall prevention portion20, the second fall prevention portion40, and the swing suppression unit70interlock with one another has been described, but each may operate individually, or any two elements may interlock. 
In the above-described preferred embodiments and modifications, an example in which the second bias portion85is provided in the connecting portion65has been described, but the preferred embodiments are not limited thereto. As illustrated inFIG.11, the second bias portion85may contact a portion of the supporting portion63when, for example, a swing greater than the predetermined swing amount M1 occurs on the FOUP90and when the swing suppression unit70is pushed down from the advanced position P5 to the retracted position P6 direction. In this case, the second bias portion85can be provided in a movable biasing-portion supporting member86and may be advanced to the position capable of contacting a portion of the supporting portion63after moving the swing suppression unit70to the advanced position P5 by the drive of the drive unit51. According to this configuration, the second bias portion85can be arranged, even on a moving path of the swing suppression unit70from the retracted position P6 to the advanced position P5. In the above-described preferred embodiment and modification, an example in which the second bias portion85is, as illustrated inFIG.6B, not in contact with the first body portion83aand is spaced away by the distance G1 when, as illustrated inFIG.8A, the second fall prevention portion40and the swing suppression unit70advance to the advanced position P3 and the advanced position P5, respectively, has been described, but the preferred embodiments are not limited thereto. For example, in a situation of the second fall prevention portion40and the swing suppression unit70illustrated inFIG.8A, the second bias portion85may be in contact with the first body portion83aas illustrated inFIG.6C, the second bias portion85may be in a state of being compressed already as illustrated inFIG.6D, or the second bias portion85and the first body portion83amay be away from each other so that the distance therebetween is longer than the distance G1 illustrated inFIG.7B. By separating by the distance G1 as in the above-described preferred embodiment, the vibration that occurs on the overhead transport vehicle1is able to be prevented from being transmitted to the FOUP90and, as in the above-described modification, the influence of the above-described vibration can be further reduced if the distance is made longer. Furthermore, in a state of the swing suppression unit70illustrated inFIG.8A, if the second bias portion85is configured to be in a state of being compressed already, the FOUP90is able to be suppressed with a stronger force. In the above-described preferred embodiments and modifications, an example in which the first bias portion84is a spring and in which the second bias portion85is a rubber body has been described. However, the rubber body and the spring may be selected from a gel elastic body and the like made of, for example, a silicone resin or the like as appropriate. Furthermore, as for the elastic moduli of the first bias portion84and the second bias portion85, both may be the same, or the first bias portion84may have a greater elastic modulus than the second bias portion85. In the above-described preferred embodiments and modifications, as an example of a transported object that is transported by the overhead transport vehicle1, the FOUP has been described, but it may be a cassette, a magazine, a tray, a container, or the like. 
In the above-described preferred embodiments and modifications, an example in which the swing suppression unit70includes the two rollers70A and70A has been described, but it may include a block-shaped member made of resin material, for example. In the above-described preferred embodiments and modifications, an example in which the transported object is transported in a suspended state has been described, but it may be of a configuration in which the transported object placed on a placement portion is supported from front and rear in the X-axis direction. In the above-described preferred embodiments and modifications, an example in which one aspect of a preferred embodiment of the present invention is applied to the overhead transport vehicle1that travels on the traveling rail2laid on the ceiling has been described, but it may be applied to a transport vehicle that travels on a rail installed on the floor or may be applied to a trackless transport vehicle. Furthermore, it can also be applied to a stacker crane capable of moving a transfer unit in the horizontal direction along the rail on the floor surface and capable of moving the transfer unit in the vertical direction (raising and lowering) along a mast, a transport device capable of moving in only one of the horizontal direction or the vertical direction, or the like. While preferred embodiments of the present invention have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present invention. The scope of the present invention, therefore, is to be determined solely by the following claims. | 32,395 |
11858540 | DETAILED DESCRIPTION OF THE INVENTION A locomotive LOC carries all the components of a main system MS which is, for example, a diesel drive. The locomotive LOC is driven with the aid of driving motors DM which are likewise integrated into the locomotive LOC. The locomotive LOC furthermore carries components which are required for its autonomous operation as a diesel locomotive. These are, for example, a train safety system TSS, a control system CS, and drive equipment DE. A railroad car RC coupled to the locomotive LOC carries components of the four ancillary systems. These are:A first ancillary system HS1which takes the form of an AC energy-provision system at a first frequency,A second ancillary system HS2which takes the form of a DC energy-provision system,A third ancillary system FC which provides energy using a fuel cell, andA fourth ancillary system BAT which provides energy using a battery. The first ancillary system HS1takes power from a rail power network with the aid of a current collector CC1. The current collector CC1is, for example, a current collector or pantograph which can be extended and retracted and is arranged on the roof of the railroad car RC. The second ancillary system HS2takes power from a rail power network with the aid of a current collector CC2. The current collector CC2, four of which are present in this case, is, for example, a lateral current collector which can be folded out and withdrawn. The two ancillary systems HS1and HS2jointly use a transformer or reactor TRANS which likewise forms part of the railroad car RC. A controller HBR is moreover provided for the ancillary systems HS1and HS2which is likewise shared and forms a further part of the railroad car RC. The controller HBR here takes the form of an H-bridge for the first ancillary system HS1. The controller HBR takes the form of a boost/buck converter for the second ancillary system HS2. Traction energy or power generated by the four ancillary systems HS1to HS4passes from the railroad car RC to the locomotive LOC via a DC link cable. A further CONTROL cable communicates control signals reciprocally between the railroad car RC and the locomotive LOC. The railroad car RC is connected detachably or permanently coupled to the locomotive LOC. The locomotive LOC can be supplied with DC voltage from the railroad car RC with the aid of the DC link cable. For this purpose, required components such as, for example, a current converter, a transformer, smoothing equipment, etc are arranged on the railroad car RC in order to generate the energy required for the driving. They are alternatively attached to the locomotive LOC. The energy required can also be used to supply an onboard network of the locomotive LOC. | 2,751 |
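The shared controller HBR in the description above operates differently depending on which ancillary system feeds the DC link: as an H-bridge for the AC system HS1 and as a boost/buck converter for the DC system HS2, while the fuel cell FC and battery BAT are described only as further energy sources. The sketch below merely illustrates that mode selection; the enumeration names, the `select_mode` routine, and the assumption that FC and BAT bypass the shared HBR stage are illustrative choices, not part of the described system.

```python
# Illustrative-only sketch of choosing the operating mode of the shared
# controller HBR based on the active ancillary system. Names and the
# decision routine are hypothetical; they are not taken from the patent text.
from enum import Enum, auto


class AncillarySystem(Enum):
    HS1_AC_CATENARY = auto()   # AC energy provision via current collector CC1
    HS2_DC_CATENARY = auto()   # DC energy provision via current collectors CC2
    FC_FUEL_CELL = auto()      # fuel-cell energy provision
    BAT_BATTERY = auto()       # battery energy provision


class ControllerMode(Enum):
    H_BRIDGE = auto()          # HBR configured as an H-bridge (AC input)
    BOOST_BUCK = auto()        # HBR configured as a boost/buck converter (DC input)
    DIRECT_DC_LINK = auto()    # source assumed to feed the DC link without the shared HBR stage


def select_mode(source: AncillarySystem) -> ControllerMode:
    """Map the active ancillary system to the mode of the shared controller HBR."""
    if source is AncillarySystem.HS1_AC_CATENARY:
        return ControllerMode.H_BRIDGE
    if source is AncillarySystem.HS2_DC_CATENARY:
        return ControllerMode.BOOST_BUCK
    return ControllerMode.DIRECT_DC_LINK


if __name__ == "__main__":
    for src in AncillarySystem:
        print(f"{src.name:16s} -> {select_mode(src).name}")
```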
11858541 | DETAILED DESCRIPTION The subject matter of select exemplary embodiments is described with specificity herein to meet statutory requirements. But the description itself is not intended to necessarily limit the scope of claims. Rather, the claimed subject matter might be embodied in other ways to include different components, steps, or combinations thereof similar to the ones described in this document, in conjunction with other present or future technologies. Terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. The terms “about” or “approximately” or “substantially” as used herein denote deviations from the exact value by +/− 10%, preferably by +/− 5% and/or deviations in the form of changes that are insignificant to the function. With reference now toFIGS.1-4, an articulated rail-transport car10is described in accordance with an exemplary embodiment. The car10may be coupled to one or more similarly configured cars10among a variety of other rail cars and may be moved or propelled along a railway by an independent or separate propulsion unit. The car10is comprised of a plurality of segments that are pivotably coupled end-to-end. The segments may include a leading-end segment12, a trailing-end segment14, a central segment16, and a plurality of interchangeable or intermediate segments18. The leading-end and trailing-end segments12,14are referred to as such for sake of convenience and not to denote any requirement on their orientation or a direction of travel of the car10. As depicted inFIGS.1-3, each of the segments12,14,16,18include a body20formed from an I-beam- or box-beam-styled member in a manner similar to what may be referred to as a spine car or a skeleton car. The body20of each of the segments12,14,16,18may be uniquely configured and/or dimensioned for each of the respective segments12,14,16,18. In another embodiment, the bodies20may be provided in other forms similar to that of a flat-car, box-car, gondola car, or the like. The spine car-type configuration is preferable in some embodiments due to the reduction in weight that such a configuration provides. The bodies20of the leading-end and trailing-end segments12,14are each provided with a coupler22disposed at their respective free ends, i.e. at opposite ends of the car10. The couplers22comprise standard couplers employed in the rail industry for coupling cars, rolling stock, locomotives, or the like such as Janney couplers, Association of American Railroads (AAR) couplers, or the like. The free ends of the leading-end and trailing-end segments12,14are supported on dedicated trucks23or bogies. Opposite ends of the leading-end and trailing-end segments12,14and each end of the central and intermediate segments16,18are each provided with a male or a female adaptor24,26configuration that is adapted to couple to and be supported on a shared truck28or bogie. The dedicated and shared trucks23,28may be configured similarly to a Jacobs bogie in which each of the trucks23,28includes two pairs of wheels30mounted on longitudinally spaced apart axles. The trucks23,28may include braking and suspension means among other components available in the art. 
The shared trucks28may provide a common pivot assembly32to which adjacent segments12,14,16,18are connected which allows both segments12,14,16,18to pivot laterally relative to one another and relative to the shared truck28as the car10traverses curves in the railway. The pivot assembly32may also allow the adjacent segments12,14,16,18to pivot at least partially side-to-side and fore and aft relative to the shared truck28. The pivot assembly32however provides a slackless coupling, i.e. one that substantially maintains a spacing between adjacent segments12,14,16,18such that a longitudinal distance between the segments12,14,16,18is maintained or does not substantially change as the car10is placed under longitudinal compressive or tension forces, e.g. when the car10is pulled or pushed. The overall length of the car10thus remains substantially constant during operation. In contrast, known rail-transport systems employ standard couplings which can have up to six inches or more of coupler slack between each of the cars. Such slack is compounded by the large number of cars and can result in several feet of longitudinal movement of ends of the ribbon rails relative to rail stands at the ends of the rail-transport train. As shown inFIG.3A, the male and female adaptors24,26are each provided with a forked configuration with a pair of longitudinally extending and transversely spaced apart arms34. The arms34fof the female adaptor26are spaced transversely apart a greater distance than the arms34mof the male adaptor24such that when coupled to the shared truck28, the arms34mof the male adaptor24are at least partially disposed between the arms34fof the female adaptor26. Although a particular configuration of the male and female adaptors24,26and their coupling with the shared truck28is shown and described herein, such is not intended to limit exemplary embodiments. Other adaptor configurations and couplings with the shared truck28may be employed without departing from the scope of embodiments described herein. A plurality of rail stands36are disposed on the car10spaced longitudinally apart along the length thereof. The stands36may take a variety of configurations to accommodate a particular number, gage, weight, or style of ribbon rails to be carried thereon, however each of the rail stands36is preferably configured to support each ribbon rail disposed on the car10. In one embodiment, depicted inFIG.4, each stand36includes a pair of upright members or posts38spaced transversely apart with a plurality of vertically stacked shelves40or tiers extending therebetween. Each shelf40provides a number of rollers42rotatably mounted end-to-end across the length of the shelf40and configured to rotate about an axis extending parallel to the length of the shelf40and transversely relative to the car10. Each roller42is sized to receive a base flange or foot of a respective ribbon rail and may include flanges projecting radially outward from ends of the roller42to hold the respective ribbon rail in alignment with the roller42. Each roller42thus forms a pocket in which the ribbon rail may be disposed. In other embodiments, more than one roller42may be employed to support each ribbon rail and flanges may be provided on the shelf40instead of or in addition to flanges on the roller42among a variety of other configurations. In the embodiment shown inFIG.4, each rail stand36includes five shelves40with ten rollers42disposed thereon to support up to fifty ribbon rails at a time. 
However, it is to be understood that other numbers of shelves40and/or rollers42thereon may be employed without departing from the scope of embodiments described herein. The longitudinal spacing between the rail stands36is sufficient to enable adequate flexure and bending of the ribbon rails as the car10navigates curves in the railway while also preventing excessive droop in a leading end of the ribbon rail as it is loaded onto the rail stands36. Generally, the spacing between the rail stands36is preferably not less than about 75 feet and is preferably about 27-29 feet or around about 28 feet. Spacing greater than about 75 feet or greater than about 30 feet may allow the ribbon rail to bow outwardly and flex as the segments12,14,16,18of the car10pivot relative to one another when on a curve. Spacing less than about 75 feet may overly restrict such bending or bowing which may cause the ribbon rails to leave their respective pockets, damage the rail stands36, and/or apply unwanted forces on the car10. A maximum spacing between the rail stands36is preferably not greater than about 30 feet. As the ribbon rail is loaded onto the car10, a leading end thereof is extended unsupported from one rail stand36to the next. Too great a spacing between the rail stands36may allow the leading end to droop or sag vertically downward too great a distance causing the ribbon rail to collide with the rail stand36or shelves40thereof or to miss a desired shelf40entirely rather than landing on the desired roller42. Accordingly, in a preferred embodiment, the rail stands36are spaced apart between about 75 feet and about 30 feet or more preferably between about 28 feet and about 30 feet. It is to be understood, that different gages and/or types of rail may have different bending properties or characteristics and that spacing between the rail stands36may be tailored according to such characteristics without departing from the scope of embodiments described herein. As depicted inFIG.5, the rail stands36located nearest the leading end and the trailing end of the car10(rail stand36A and rail stand36I) may also be spaced apart from the respective ends of the car10to maintain desired minimum and maximum spacing between the rail stands36when the car10is coupled to another similarly configured car10or to another car, such as a tie-down car44or a tunnel car46,48, among others, that also includes rail stands36or other means for supporting a ribbon rail that extends between the respective cars. With continued reference toFIGS.1-3, each of the segments12,14,16,18of the car10are provided with a unique configuration. In the embodiment shown inFIGS.1-3, each of the segments12,14,16,18include a different longitudinal length, and distribution of the rail stands36thereon. Also as described previously, adjacent ends of the segments12,14,16,18are each provided with either a male or female configuration24,26. For example, the leading-end segment12is the longest segment, followed by the trailing-end segment14, the intermediate segments18, and then the central segment16. Further, the leading-end segment12includes a pair of rail stands36. The rail stand36A nearest the leading end of the segment12is disposed to directly overlie the dedicated truck23A while a second rail stand36B is disposed along the length of the segment12between the dedicated truck23A and the shared truck28A. 
The trailing-end segment14is similarly configured with one rail stand36I nearest the trailing end of the segment14overlying the dedicated truck23B and a second rail stand36H disposed along the length of the segment14between the dedicated truck23B and the respective shared truck28D. Both the leading-end and the trailing-end segments12,14are provided with a male adaptor24for coupling with their respective shared trucks28A and28D, respectively. Two intermediate segments18A and18B are depicted in the car10however any number of intermediate segments18may be employed in exemplary embodiments. The intermediate segments18A and18B each include two rail stands36that are shifted longitudinally toward one end or asymmetrically disposed along the length of the respective intermediate segments18A and18B between the respective shared trucks28(segment18A includes rail stands36C and36D disposed between shared trucks28A and28B and segment18B includes rail stands36F and36G disposed between shared trucks28C and28D). The central segment16is generally symmetrically configured with a single rail stand36E centered along the longitudinal length between the shared trucks28B and28C supporting each end thereof. Opposing ends of the central segment16are each provided with a female configuration26for coupling to the respective shared trucks28B,28C. As such, the intermediate segments18A and18B are oppositely oriented on each side of the central section16so as to couple to the shared trucks28B and28C via their ends having the male adaptors24. Ends of the segments18A and18B having the female adaptors26are thus provided for coupling to the shared trucks28A and28D along with the male adaptors24of the leading-end segment12and the trailing-end segment14. It is to be understood, that the male and female adaptors24,26of any of the segments12,14,16,18may be reversed without departing from the scope of embodiments described herein. The ability of the intermediate segments18to be disposable to either side of the central segment16by simply reversing the orientation of the intermediate segment18reduces manufacturing and maintenance complexities. Additionally, this configuration increases the adaptability of the car10to varied applications by enabling additional intermediate segments18to be easily and simply disposed between one or both of the intermediate segments18A,18B and the respective leading-end segment12or trailing-end segment14to increase or decrease the length of the car10. The length of the car10may be further adapted or decreased by removing one or both of the intermediated segments18and directly coupling the central segment18with one or both of the leading-end segment12or the trailing-end segment14via the shared trucks28. The location and distribution of the rail stands36along the longitudinal length of the car10and between the couplers22is independent of the location of the shared and dedicated trucks28,23and/or is asymmetrical relative thereto. Further, the spacing between adjacent ones of the rail stands36may vary but preferably remains within the desired minimum and maximum described previously. For example, spacing between the rail stands36A and36I at the ends of the car10and the respective next adjacent rail stands36B and36H may be about 29 feet while spacing between each of the other rail stands36B-36H may be about 28.583 feet. The number of rail stands36on the car10is greater than the total number of trucks (dedicated trucks23and shared trucks28), i.e. 
the ratio of the number of rail stands36to the number of trucks23,28is greater than 1:1. In one embodiment, a ratio of the number of rail stands36to the total number of trucks23,28is equal to or greater than 3:2. For example as depicted inFIGS.1-3, the car10includes nine rail stands36and six trucks23,28disposed between the couplers22at each end of the car10. In other embodiments, the ratio of rail stands36to trucks23,28may be 2:1, 3:1, 4:1, 4:3, 5:1, 5:2, 5:3, 6:5, 7:2, 7:3, 7:4, 7:5, 7:6, 8:3, 8:5, 8:7, or another ratio greater than 1:1. The distribution of the rail stands36relative to the shared and dedicated trucks28,23may provide an uneven distribution of the weight of the ribbon rails on the trucks28,23. In some embodiments, the shared trucks28supporting the central segment16carry a greater weight than the dedicated trucks23and the shared trucks28supporting the leading-end segment12and the trailing-end segment14. For example, the dedicated trucks23might carry about 124,000 pounds each when fully loaded, while the shared trucks28A and28D might carry about 135,000 pounds, and the shared trucks28B and28C might carry about 142,400 pounds. With reference now toFIG.5, a plurality of the articulated rail-transport cars10may be incorporated into a rail-transport train50to transport ribbon rails having a length greater than the longitudinal length each of the cars10individually. The rail train50may also include a tie-down car44, a tunnel car46at the leading end thereof, and a tunnel car48at the trailing end thereof. As depicted inFIG.5, the rail train50includes six rail-transport cars10. The tie-down car44is positioned in the middle of the six rail-transport cars10, i.e. between the third and fourth of the rail-transport cars10. In one embodiment, the orientation of the rail-transport cars10is reversed on each side of the tie-down car44as depicted inFIG.5, however other configurations may be employed. Each of the tunnel cars46,48, the rail-transport cars10, and the tie-down car44are coupled together via couplers like the couplers22and may be coupled at either end of the train50to another train50, a power unit or other propulsion means, and/or to one or more other rail-based cars. The tie-down car44may employ known configurations and includes a plurality of clamping units, at least one for each ribbon rail carried by the train50. In one embodiment, the tie-down car44is an automated tie-down car or includes automated clamping units that are controllable by an operator at the tie-down car44, at an operator's station elsewhere on the rail-transport train, or remotely. The clamping units fix the ribbon rail against longitudinal movement relative to the tie-down car44to retain the ribbon rail in position during transport. The tunnel cars46,48may also employ known configurations and, as such, may include means for aiding loading and unloading the ribbon rails onto the train50and for preventing the ribbon rails from inadvertently traveling longitudinally along the train50if the associated clamping units fail or are damaged. As depicted inFIG.5, ribbon rails having a length of, for example up to 7,600 feet may be transported by the train50. The train50may be otherwise configured with greater or fewer numbers of rail-transport cars10as needed to accommodate longer or shorter lengths of ribbon rails. To load the ribbon rail, the ribbon rail may be fed into a pocket of a first rail stand36on a first of the rail-transport cars10. 
The ribbon rail is driven by means carried on one of the tunnel cars46,48or on another loading apparatus to extend onto a next adjacent rail stand36and then onto each subsequent rail stand36on the first rail-transport car10. The ribbon rail is further driven to extend to the rail stands36on each of the subsequent rail-transport cars10until fully loaded onto the train50. A respective clamping unit on the tie-down car44is actuated to fix the ribbon rail into position. Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the scope of the claims below. Embodiments of the technology have been described with the intent to be illustrative rather than restrictive. Alternative embodiments will become apparent to readers of this disclosure after and because of reading it. Alternative means of implementing the aforementioned can be completed without departing from the scope of the claims below. Identification of structures as being configured to perform a particular function in this disclosure and in the claims below is intended to be inclusive of structures and arrangements or designs thereof that are within the scope of this disclosure and readily identifiable by one of skill in the art and that can perform the particular function in a similar way. Certain features and sub-combinations are of utility and may be employed without reference to other features and sub-combinations and are contemplated within the scope of the claims. | 18,691 |
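The layout constraints discussed above (rail-stand spacing capped at about 30 feet, a stand-to-truck ratio greater than 1:1, and an uneven split of the load across the trucks) can be sanity-checked with a short script. The spacing figures and truck loads below are the illustrative values quoted in the description; the checking helper itself is only a sketch and not part of the disclosed system.

```python
# Sketch that checks a candidate rail-stand layout against the constraints
# discussed above. The figures mirror the illustrative values in the text;
# the helper itself is hypothetical.
from fractions import Fraction

MAX_SPACING_FT = 30.0            # stands spaced no more than about 30 feet apart

def check_layout(spacings_ft, num_stands, num_trucks):
    """Return (stand-to-truck ratio, spacing violations) for a proposed car layout."""
    violations = [s for s in spacings_ft if s > MAX_SPACING_FT]
    ratio = Fraction(num_stands, num_trucks)   # must exceed 1:1 per the description
    return ratio, violations

# Example: nine rail stands 36A-36I on six trucks (two dedicated, four shared),
# end spacings of about 29 ft and interior spacings of about 28.583 ft.
spacings = [29.0] + [28.583] * 6 + [29.0]
ratio, violations = check_layout(spacings, num_stands=9, num_trucks=6)
print(f"stand-to-truck ratio: {ratio} (>1 required: {ratio > 1})")
print(f"spacings over {MAX_SPACING_FT} ft: {violations or 'none'}")

# Fully loaded truck loads quoted above (pounds): dedicated trucks 23,
# outer shared trucks 28A/28D, and central shared trucks 28B/28C.
loads = {"23A": 124_000, "23B": 124_000,
         "28A": 135_000, "28D": 135_000,
         "28B": 142_400, "28C": 142_400}
print(f"total load across trucks: {sum(loads.values()):,} lb")
```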
11858542 | DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS In the following description, like numbers refer to like elements. FIGS.1and2each depict a representative example of an embodiment of an EOT unit100attached to a train car102. EOT unit100, which is an integrated assembly that is configured to be attached to a rear-facing end104of a train's last car102. In the illustrated example, car102is a boxcar. However, a boxcar is a non-limiting, representative example of a train car to which an EOT unit may be attached. The car102is supported on a track106by an undercarriage108. The track is comprised of at least two rails, rail106aand rail106b. The EOT unit100comprises at least two primary subassemblies. One primary subassembly is comprised of the unit's electronic and electrical components and sensors, which are housed in a protective enclosure110. Representative examples of such components are a power supply, which includes a power storage device such as one or more rechargeable batteries, and any one or more of the following: sensors for monitoring conditions of subsystems for a train, such as its braking system; a global positioning satellite (GPS) receiver for determining the location of the end of the train and the EOT unit; lights built into the enclosure and controlled by internal circuits for visually indicating the end of the train; one or more two-way radios for communication with the HOT and, optionally, over a wireless train control network such as the ITCnet® network operated by Meteorcomm, LLC; and hardware for performing control, communication, and data processing processes, such as field programmable gate arrays (FPGA), microcontrollers, and/or general purpose processors or computers that executed stored instructions or software. Each EOT unit100further comprises an electric generation subassembly. Two embodiments of the electric generation subassembly are shown, one onFIG.1and one inFIG.2. Each is a non-limiting, representative example of an electric generation subassembly. Electric generation subassembly112inFIG.1will be referred to as a “balanced” electric generation subassembly because it rides on both rails106aand106b. Electric generation subassembly114inFIG.2engages either rail106aor106b, but not both. Each electric generation subassembly112and114is comprised of a support arm116that removably couples, directly or indirectly, the subassembly with the car102. Each electric generation subassembly further comprises at least one rotational member118in contact with one of the rails106aor106b. At least one rotational member118is coupled with at least one electrical generator120through a drive system (not shown). The electrical generator120could, instead, be an alternator. The term “generator” is intended to refer to any device capable of generating from a rotational input direct or alternating electrical current. The support arm116is representative of a structure that mechanically couples the electric generation subassembly112or114to the rear end104of the car102and supports the subassembly in a position in which each rotational member118engages and rolls along or is otherwise rotated by a rail of track106when the car102moves on the track. Each rotational member118is mounted on a transverse support member122for rotation about a horizontal axis. The EOT unit100is an assembly configured for repeated attachment and detachment to the rear end of a car so that it can be switched to whatever car is the last car in the train or stored for later use. 
For example, a mounting system for an EOT unit may include an adapter that enables it to be attached or clamped to a train car coupler 109 that is present on all train cars and used to connect them together to form the train. Alternatively, the mounting system of the EOT unit may include an adapter or bracket that is designed to mount to another component of the car, such as its frame (not shown), or to a bracket or coupling that has been fitted or attached to the rear-facing end 104 of the car 102. The mounting system may allow each subassembly to be connected separately to car 102. The mounting system may, alternatively, connect the enclosure 110 to the car and the electric generation subassembly 112 or 114 to the enclosure. In another alternative embodiment, the mounting system may connect the support arm 116 to the car, with the enclosure 110 mounted to or supported by the electric generation subassembly. For example, the mounting system may allow the support arm 116 to be connected to a train coupler 109, the enclosure 110 to be connected to the coupler, or both to be connected. As the train, and therefore car 102, moves forward along track 106, friction between rotational members 118 and rails 106a or 106b causes the rotational member 118 to rotate. Each rotational member is mounted to the subassembly in a manner that allows it to rotate. To ensure that the area of contact between at least one rotational member 118 on electric generation subassembly 112 and the rotational member 118 on subassembly 114 is sufficient to cause the rotational member to be rotated on the subassembly by engagement with a rail 106a or 106b and, in turn, rotate the input of the electrical generator 120, the rotational member may have a configuration (a shape and size) that complements the cross-sectional shape of the rail 106a or 106b and acts to maintain its position on the rail. The rotational member may also have at least some of its surfaces that contact the rail (particularly surfaces that contact a top surface of a rail), if not the entire rotational member, made from or comprising a material that has, as compared to being made from the same material of which the rail is made, a higher coefficient of friction and/or that includes surface features that promote traction of the rotational member on the rail. For example, a rotational member may be made partially or entirely of rubber or composite material. One or more of the rotational members 118 may, optionally, be configured in a way that assists with maintaining contact with the rail, such as with a means for retaining the rotational member on the rail. Each of the rotational members 118 that is shown in FIGS. 1 and 2 comprises an inner flange 118a and a hub 118b that engages a top surface of one of the rails 106a and 106b. In the balanced embodiment of FIG. 1, the transverse support member 122 extends the full width of track 106 and thus maintains a fixed distance between inner flanges 118a, the inner flanges 118a cooperating to keep the electric generation subassembly centered on track 106, which in turn ensures that the hub 118b of each of the rotational members 118 remains in contact with the top surface of each of the rails 106a and 106b. The single rotational member 118 in FIG. 2 further comprises an outer flange 118c that connects to hub 118b. The inner flange 118a and outer flange 118c cooperate to retain the rotational member 118 on rails 106a or 106b of track 106. This embodiment of a rotational member may, optionally, be used with the electric generation subassembly 112 of FIG. 1.
The support arm116in each embodiment of the electric generation subassembly112and114may be comprised of multiple elements. Furthermore, the support arm may, optionally, be adjustable to allow it to position the rotational member adjacent to be in contact with a rail after it has been attached to car102and/or to orient the axis of rotation of the rotational member with respect to the rail so that it is rotated by the rail when the train is moving. For example, the support structure may comprise one or more linkages comprised of links with joints that pivot or rotate and/or translate to allow for adjustment. Furthermore, the supporting arm116for each electric generation subassembly112and114may, optionally, incorporate a suspension system that accommodates limited amounts of deflection or displacement of one or more of the rotation members118with respect to where the subassembly is connected with car102or EOT unit. In response to a displacement or deflection, the suspension will generate a return force. The suspension system may comprise one or more springs and dampers. In one example, the support arm116of either or both electric generation subassemblies may be coupled to car102and/or to a transverse structural support member122in a manner that allows it to pivot up and down (or rotate about the horizontal axis) and/or in a manner that allows it to pivot, rotate or swing about a vertical axis. It may be allowed to swing freely up or down. Alternatively, a damper may optionally be included to slow its motion; and a spring may optionally be included to resist movement and supply a return force. Furthermore, the arm may, optionally, be configured with a spring that is loaded to generate a force between a point in a fixed relationship with the car102and the rotational member118when it is engaging one of the rails106aor106b, and thereby resulting in force applied to the rotational member that pushes it against the rail. If coupled to the car in a manner to allow it to pivot up and down, it may, optionally, be raised into a position in which it does not contact rails. It may also be raised to a fully stowed position to reduce the overall size of the EOT unit. The support arm may, optionally, also be configured to load the rotational member118when it engages a rail of a track. Loading of a rotational member118to generate a force normal to the top surface of a rail can be accomplished by allowing some or all the mass of some or all of the electric generation subassembly to rest on the rails, such as by mounting to the car102that allows for the subassembly to shift up and down with respect to the car102. Alternatively, or in addition, one or more springs coupled into the support arm116between a rotational member118and the car102could be loaded (for example, compressed or extended to generate a force) when the electric generation subassembly is mounted to the car102, and the rotational member is placed in a neutral operating position on one of the rails106aor106b. The loading could generate a force that pushes the rotational member downwardly to engage the top surface of the rail and/or laterally against the side of the rail. In electric generation subassembly114ofFIG.2, lateral loading of the rotational member could be used to retain the position of the rotational member on the rail as an alternative to a rotational member with a disk-shaped outer flange portion like the outer flange118c. 
At least one of the two rotational members of electric generation subassembly 112 (FIG. 1) and the rotational member of the generation subassembly 114 (FIG. 2) is coupled to an input shaft of electrical generator 120 by a drive assembly. Turning a rotational member 118 turns or drives the input shaft of the electrical generator 120. For example, one or both rotational members 118 of electric generation subassembly 112 (FIG. 1) could be attached to a drive axle, which rotates inside of an axle housing that comprises part of or the entire transverse support member 122. The drive axle is then coupled with an input shaft of the electrical generator 120 by a transmission comprising meshing gears, belts, and/or chains. Similarly, the single rotational member 118 of electric generation subassembly 114 (FIG. 2) could be attached to a drive axle disposed for rotation within a transverse portion 124 of the support arm 116, which then drives the input shaft of the electrical generator 120. Alternatively, the rotational members could rotate on a spindle at the end of the transverse support member 122 or transverse portion 124, and the input shaft of the electrical generator is coupled directly to one of the rotational members 118 through gears, belts, and/or chains. In any of the embodiments described herein, the electrical current generated by the generator 120 is supplied to the EOT enclosure 110 by an electrical cable (not shown) running from the electrical generator 120 to a power supply within the EOT enclosure 110. The cable may be an external cable or a cable that runs within the structure of the support arm 116. Electrical current from the electrical generator is, for example, used by the power supply to charge a rechargeable battery and to directly power the electrical components. If electrical power is not required, any of the embodiments described herein may, optionally, include a transmission that disconnects the electrical generator from the rotational members 118. The amount of energy generated over a given distance of travel of a train is, in one embodiment, equal to or greater than the amount of energy required by or consumed by operation of the electronic components of the EOT device over that distance. In another embodiment, the electrical generator is capable of generating at least 100 watts of power. In another embodiment, the electrical generator is capable of generating at least 100 watts of power when the train is moving at an average speed for a trip; or, alternatively, when the train is moving at least 40 MPH; or, alternatively, when the train is moving at least 20 MPH; or, alternatively, when the train is moving at 10 MPH. The foregoing description is of exemplary and preferred embodiments. The invention, as defined by the appended claims, is not limited to the described embodiments. The embodiments are, unless otherwise noted, non-limiting examples of one or more inventive features. Alterations and modifications to the disclosed embodiments may be made without departing from the invention. The meanings of the terms used in this specification are, unless stated otherwise, intended to be their ordinary and customary meanings to those in the art and are not intended to be limited to specific implementations that may be described. | 13,691 |
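The power and energy figures at the end of this description (for example, at least 100 watts at a given train speed, and enough energy over a distance of travel to cover what the EOT electronics consume) can be related to the rotational member's size and the train speed with some basic arithmetic. The wheel diameter, the average electronics load, and the trip length used below are assumptions chosen only to make the sketch concrete; they are not values from the description.

```python
# Back-of-the-envelope sketch relating train speed to rotational-member speed
# and checking the energy balance described above. Wheel diameter, electronics
# load, and trip length are assumed values, not figures from the text.
import math

WHEEL_DIAMETER_M = 0.20          # assumed diameter of rotational member 118
MPH_TO_MPS = 0.44704

def wheel_rpm(train_speed_mph: float) -> float:
    """Rotational speed of the member 118 rolling on the rail without slip."""
    v = train_speed_mph * MPH_TO_MPS
    return (v / (math.pi * WHEEL_DIAMETER_M)) * 60.0

def energy_balance(trip_hours: float,
                   generator_watts: float = 100.0,   # generating capability cited above
                   electronics_watts: float = 40.0   # assumed average draw of the EOT electronics
                   ) -> float:
    """Return surplus watt-hours available for battery charging over the trip."""
    generated_wh = generator_watts * trip_hours
    consumed_wh = electronics_watts * trip_hours
    return generated_wh - consumed_wh

if __name__ == "__main__":
    for mph in (10, 20, 40):
        print(f"{mph:2d} MPH -> about {wheel_rpm(mph):6.0f} RPM at the rotational member")
    surplus = energy_balance(trip_hours=8)
    print(f"surplus over an assumed 8 hour trip: {surplus:.0f} Wh")
```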
11858543 | DETAILED DESCRIPTION FIG.1is a schematic diagram of one embodiment of a control system100for operating a train102traveling along a track106. The train may include multiple rail cars (including powered and/or non-powered rail cars or units) linked together as one or more consists or a single rail car (a powered or non-powered rail car or unit). The control system100may provide for cost savings, improved safety, increased reliability, operational flexibility, and convenience in the control of the train102through communication of network data between an off-board remote controller interface104and the train102. The control system100may also provide a means for autonomous systems, remote operators, or third party operators to communicate with the various locomotives or other powered units of the train102from remote interfaces that may include any computing device connected to the Internet or other wide area or local communications network, and automatically maintain coordination of controls and/or synchronization between a lead locomotive and one or more trailing locomotives even when there is a degradation in communication between the locomotives. The control system100may be used to convey a variety of network data and command and control signals in the form of messages communicated to the train102, such as packetized data or information that is communicated in data packets, from the off-board remote controller interface104. The off-board remote controller interface104may also be configured to receive remote alerts and other data from a controller on-board the train, and forward those alerts and data to desired parties via pagers, mobile telephone, email, and online screen alerts. The data communicated between the train102and the off-board remote controller interface104may include signals indicative of various operational parameters associated with components and subsystems of the train, signals indicative of fault conditions, signals indicative of maintenance activities or procedures, and command and control signals operative to change the state of various circuit breakers, throttles, brake controls, actuators, switches, handles, relays, and other electronically-controllable devices on-board any locomotive or other powered unit of the train102. The remote controller interface104also enables the distribution of the various computer systems such as control systems and subsystems involved in operation of the train or monitoring of train operational characteristics at one or more remote locations off-board the train and accessible by authorized personnel over the Internet, wireless telecommunication networks, and by other means. In various exemplary embodiments, a centralized or cloud-based computer processing system may be located in one or more of a back-office server or a plurality of servers remote from the train. One or more distributed, edge-based computer processing systems may be located on-board one or more locomotives of the train, and each of the distributed computer processing systems may be communicatively connected to the centralized computer processing system. Control system100may be configured to use artificial intelligence for maintaining coordination of controls and/or synchronization between centralized (cloud-based) and distributed (edge-based) train control models, and between a lead locomotive and a remote or trailing locomotive operating in a distributed power mode. 
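The kinds of network data described above (operational parameters, fault conditions, maintenance activity, and command-and-control signals for breakers, throttles, and brakes) can be pictured as small structured messages exchanged between the train and the off-board remote controller interface 104. The field names and the JSON encoding in the sketch below are assumptions made only for illustration; the description does not prescribe a particular message format.

```python
# Illustrative message structures for the network data exchanged between the
# train and the off-board remote controller interface 104. Field names and the
# JSON encoding are hypothetical; the text does not define a wire format.
import json
from dataclasses import dataclass, asdict, field
from typing import Dict, List


@dataclass
class StatusMessage:
    """Telemetry sent from the train to the off-board interface."""
    train_id: str
    gps_position: Dict[str, float]            # e.g. latitude/longitude of the unit
    operational_parameters: Dict[str, float]  # coolant temp, oil temp, brake pressure, ...
    fault_conditions: List[str] = field(default_factory=list)


@dataclass
class ControlCommand:
    """Command-and-control signal sent from the off-board interface to the train."""
    target_unit: str
    device: str          # e.g. "throttle", "dynamic_brake", "circuit_breaker_3"
    requested_state: str


if __name__ == "__main__":
    status = StatusMessage(
        train_id="102",
        gps_position={"lat": 39.0997, "lon": -94.5786},
        operational_parameters={"coolant_temp_c": 88.0, "brake_pipe_psi": 90.0},
        fault_conditions=["HVAC_FAULT"],
    )
    command = ControlCommand(target_unit="108", device="throttle", requested_state="notch_4")
    print(json.dumps(asdict(status)))
    print(json.dumps(asdict(command)))
```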
Control system100may include a centralized or cloud-based computer processing system located in one or more of a back-office server or a plurality of servers remote from train102, one or more distributed, edge-based computer processing systems located on-board one or more locomotives of the train, wherein each of the distributed computer processing systems is communicatively connected to the centralized computer processing system, and a data acquisition hub communicatively connected to one or more of databases and a plurality of sensors associated with the one or more locomotives or other components of the train and configured to acquire real-time and historical configuration, structural, and operational data in association with inputs derived from real time and historical contextual data relating to a plurality of trains operating under a variety of different conditions for use as training data. Control system100may also include an energy management machine learning modeling engine or a centralized virtual system modeling engine included in the centralized computer processing system and configured to create one or more centralized models of one or more actual train control systems in operation on-board the one of more locomotives of the train based at least in part on data received from the data acquisition hub, wherein a first one of the centralized models is utilized in a process of generating a first set of output control commands for a first train control scenario implemented by an energy management system associated with one or more of the locomotives, and one or more distributed virtual system modeling engines included in one or more of the distributed computer processing systems, each of the one or more distributed virtual system modeling engines being configured to create one or more edge-based models of one or more actual train control systems in operation on-board the one or more locomotives of the train based at least in part on data received from the data acquisition hub, wherein a first one of the edge-based models is utilized in a process of generating a second set of output control commands for a second train control scenario implemented by the energy management system associated with the one or more of the locomotives. The energy management machine learning modeling engine may be included in at least one of the centralized and distributed computer processing systems, the machine learning engine being configured to receive the training data from the data acquisition hub, receive the first centralized model from the centralized virtual system modeling engine, receive the first edge-based model from one of the distributed virtual system modeling engines, and compare the first set of output control commands generated by the first centralized model for the first train control scenario and the second set of output control commands generated by the first edge-based model for the second train control scenario. The machine learning engine may train a learning system using the training data to enable the machine learning engine to safely mitigate a divergence discovered between the first and second sets of output control commands using a learning function including at least one learning parameter. 
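One way to picture the comparison described above is to treat the centralized (cloud-based) and edge-based (on-board) models as each producing a set of output control commands for the same planning horizon, and to measure how far apart they are before deciding whether mitigation is needed. Representing the command sets as throttle notch values and using a mean absolute difference with a fixed tolerance are illustrative assumptions, not details from the description.

```python
# Sketch of comparing the output control commands from a centralized (cloud)
# model and an edge-based (on-board) model. Representing commands as throttle
# notch values and using a mean absolute difference are illustrative choices.
from typing import Sequence

DIVERGENCE_THRESHOLD = 0.5   # assumed tolerance before mitigation is triggered

def divergence(centralized_cmds: Sequence[float], edge_cmds: Sequence[float]) -> float:
    """Mean absolute difference between the two command sets over the horizon."""
    if len(centralized_cmds) != len(edge_cmds):
        raise ValueError("command sets must cover the same planning horizon")
    return sum(abs(a - b) for a, b in zip(centralized_cmds, edge_cmds)) / len(edge_cmds)

def needs_mitigation(centralized_cmds: Sequence[float], edge_cmds: Sequence[float]) -> bool:
    """True when the discovered divergence exceeds the assumed tolerance."""
    return divergence(centralized_cmds, edge_cmds) > DIVERGENCE_THRESHOLD

if __name__ == "__main__":
    cloud_plan = [4, 4, 5, 6, 6, 5]   # first set of output control commands
    edge_plan = [4, 5, 5, 6, 7, 5]    # second set of output control commands
    print(f"divergence: {divergence(cloud_plan, edge_plan):.2f}")
    print(f"mitigation needed: {needs_mitigation(cloud_plan, edge_plan)}")
```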
The machine learning engine may be configured to maintain coordination of controls between the lead locomotive and the one or more trailing locomotives through a comparison of shared parameters that include one or more of a divergence between an operating model for the lead locomotive and an operating model for one or more of the trailing locomotives generated by the energy management machine learning modeling engine, synchronization keys for the lead locomotive and the one or more trailing locomotives, and a universal time constant (UTC) used for train protection systems such as Positive Train Control (PTC). Training the learning system may include: providing the training data as an input to the learning function, the learning function being configured to use the at least one learning parameter to generate an output based on the input; causing the learning function to generate the output based on the input; comparing the output to one or more of the first and second sets of output control commands to determine a difference between the output and the one or more of the first and second sets of output control commands; and modifying the at least one learning parameter and the output of the learning function to decrease the difference, responsive to the difference being greater than a threshold difference and based at least in part on actual real-time and historical information on in-train forces and train operational characteristics acquired from a plurality of trains operating under a variety of different conditions. An energy management system associated with the one or more locomotives of the train may be configured to adjust one or more of throttle requests, dynamic braking requests, and pneumatic braking requests for the one or more locomotives of the train based at least in part on the modified output of the learning function used by the learning system which has been trained by the machine learning engine. Some control strategies undertaken by control system 100 may include asset protection provisions, whereby asset operations are automatically derated or otherwise reduced in order to protect train assets, such as a locomotive, from entering an overrun condition and sustaining damage. For example, when the control system detects via sensors that the coolant temperature, oil temperature, crankcase pressure, or another operating parameter associated with a locomotive has exceeded a threshold, the control system may be configured to automatically reduce engine power (e.g., via a throttle control) to allow the locomotive to continue the current mission with a reduced probability of failure. In addition to derating or otherwise reducing certain asset operations based on threshold levels of operational parameters, asset protection may also include reducing or stopping certain operations based on the number, frequency, or timing of maintenance operations or faults detected by various sensors. In some cases, the control system may be configured to fully derate the propulsion systems of the locomotive and/or bring the train 102 to a complete stop to prevent damage to the propulsion systems in response to signals generated by sensors. In this way, the control system may automatically exercise asset protection provisions of its control strategy to reduce incidents of debilitating failure and the costs of associated repairs.
At times, however, external factors may dictate that the train102should continue to operate without an automatic reduction in engine power, or without bringing the train to a complete stop. The costs associated with failing to complete a mission on time can outweigh the costs of repairing one or more components, equipment, subsystems, or systems of a locomotive. In one example, a locomotive of the train may be located near or within a geo-fence characterized by a track grade or other track conditions that require the train102to maintain a certain speed and momentum in order to avoid excessive wheel slippage on the locomotive, or even stoppage of the train on the grade. Factors such as the track grade, environmental factors, and power generating capabilities of one or more locomotives approaching or entering the pre-determined geo-fence may result in an unacceptable delay if the train were to slow down or stop. In certain situations the train may not even be able to continue forward if enough momentum is lost, resulting in considerable delays and expense while additional locomotives are moved to the area to get the train started again. In some implementations of this disclosure the geo-fences may be characterized as no-stop zones, unfavorable-stop zones, or favorable-stop zones. In situations when a train is approaching a geo-fence characterized as one of the above-mentioned zones, managers of the train102may wish to temporarily modify or disable asset protection provisions associated with automatic control of the locomotive to allow the train102to complete its mission on time. However, managers having the responsibility or authority to make operational decisions with such potentially costly implications may be off-board the train102or away from a remote controller interface, such as at a back office or other network access point. To avoid unnecessary delays in reaching a decision to temporarily modify or disable asset protection provisions of automatic train operation (ATO), the control system100may be configured to facilitate the selection of ride-through control levels via a user interface at an on-board controller or at the off-board remote controller interface104. The control system100may also be configured to generate a ride-through control command signal including information that may be used to direct the locomotive to a geo-fence with a more favorable stop zone. The off-board remote controller interface104may be connected with an antenna module124configured as a wireless transmitter or transceiver to wirelessly transmit data messages and control commands to the train102. The messages and commands may originate elsewhere, such as in a rail-yard back office system, one or more remotely located servers (such as in the “cloud”), a third party server, a computer disposed in a rail-yard tower, and the like, and be communicated to the off-board remote controller interface104by wired and/or wireless connections. Alternatively, the off-board remote controller interface104may be a satellite that transmits the messages and commands down to the train102or a cellular tower disposed remote from the train102and the track106. Other devices may be used as the off-board remote controller interface104to wirelessly transmit the messages. For example, other wayside equipment, base stations, or back office servers may be used as the off-board remote controller interface104.
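As a purely illustrative sketch of the ride-through behavior discussed above, the following Python fragment decides whether an asset-protection stop may be taken at the current location; the zone names follow the designations in this description, while the mileposts, ride-through levels, and decision rule are hypothetical assumptions.

# Hypothetical ride-through decision sketch; mileposts, ride-through levels,
# and the decision rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GeoFence:
    start_milepost: float
    end_milepost: float
    zone: str   # "no-stop", "unfavorable-stop", or "favorable-stop"

    def contains(self, milepost: float) -> bool:
        return self.start_milepost <= milepost <= self.end_milepost

def allow_asset_protection_stop(milepost: float, fences: list, ride_through_level: int) -> bool:
    """Return True if the control system may bring the train to a stop for
    asset protection at this location, False if it should ride through."""
    for fence in fences:
        if fence.contains(milepost):
            if fence.zone == "no-stop":
                return False                       # never stop inside a no-stop zone
            if fence.zone == "unfavorable-stop":
                return ride_through_level == 0     # stop only if no override selected
    return True                                    # favorable-stop zone or unfenced track

if __name__ == "__main__":
    fences = [GeoFence(120.0, 126.5, "no-stop"), GeoFence(126.5, 131.0, "unfavorable-stop")]
    print(allow_asset_protection_stop(123.0, fences, ride_through_level=0))  # False
    print(allow_asset_protection_stop(128.0, fences, ride_through_level=2))  # False
    print(allow_asset_protection_stop(140.0, fences, ride_through_level=0))  # True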
By way of example only, the off-board remote controller interface104may use one or more of the Transmission Control Protocol (TCP), Internet Protocol (IP), TCP/IP, User Datagram Protocol (UDP), or Internet Control Message Protocol (ICMP) to communicate network data over the Internet with the train102. As described below, the network data can include information used to automatically and/or remotely control operations of the train102or subsystems of the train, and/or reference information stored and used by the train102during operation of the train102. The network data communicated to the off-board remote controller interface104from the train102may also provide alerts and other operational information that allows for remote monitoring, diagnostics, asset management, and tracking of the state of health of all of the primary power systems and auxiliary subsystems such as HVAC, air brakes, lights, event recorders, and the like. The increased use of distributed computer system processing enabled by advances in network communications, including but not limited to 5G wireless telecommunication networks, allows for the remote location of distributed computer system processors that may perform intensive calculations and/or access large amounts of real-time and historical data related to the train operational parameters. This distributed computer system processing may also introduce potential breakdowns in communication or transient latency issues between the distributed nodes of the communication network, leading to potential synchronization and calibration problems between various computer control systems and subsystems. The control system100and/or offboard remote control interface104, according to various embodiments of this disclosure, may employ artificial intelligence algorithms and/or machine learning engines or processing modules to train learning algorithms and/or create virtual system models and perform comparisons between real-time data, historical data, and/or predicted data, to find indicators or patterns in which the distributed computer systems may face synchronization problems. The early identification of any potential synchronization or calibration problems between the various distributed computer systems or subsystems, or between a lead locomotive and one or more trailing locomotives when operating in a degraded communications environment using machine learning and virtual system models enables early implementation of proactive measures to mitigate the problems. The train102may include a lead consist114of powered locomotives, including the interconnected powered units108and110, one or more remote or trailing consists140of powered locomotives, including powered units148,150, and additional non-powered units112,152. “Powered units” refers to rail cars that are capable of self-propulsion, such as locomotives. “Non-powered units” refers to rail cars that are incapable of self-propulsion, but which may otherwise receive electric power for other services. For example, freight cars, passenger cars, and other types of rail cars that do not propel themselves may be “non-powered units”, even though the cars may receive electric power for cooling, heating, communications, lighting, and other auxiliary functions. In the illustrated embodiment ofFIG.1, the powered units108,110represent locomotives joined with each other in the lead consist114. The lead consist114represents a group of two or more locomotives in the train102that are mechanically coupled or linked together to travel along a route. 
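Referring to the network data messages and protocols mentioned at the beginning of the preceding paragraph, the following Python sketch is one non-limiting way a status message could be serialized and sent as a single UDP datagram; the host, port, and payload fields are hypothetical and are not part of the described interface.

# Illustrative only: a UDP status message with hypothetical host, port, and fields.
import json
import socket

def send_status_udp(host: str, port: int, status: dict) -> None:
    """Serialize a status dictionary and send it as a single UDP datagram."""
    payload = json.dumps(status).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

if __name__ == "__main__":
    send_status_udp("127.0.0.1", 47808, {
        "unit": "lead-108",
        "throttle_notch": 5,
        "brake_pipe_psi": 90,
        "dynamic_brake_ready": True,
    })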
The lead consist114may be a subset of the train102such that the lead consist114is included in the train102along with additional trailing consists of locomotives, such as trailing consist140, and additional non-powered units152, such as freight cars or passenger cars. While the train102inFIG.1is shown with a lead consist114, and a trailing consist140, alternatively the train102may include other numbers of locomotive consists joined together or interconnected by one or more intermediate powered or non-powered units that do not form part of the lead and trailing locomotive consists. The powered units108,110of the lead consist114include a lead powered unit108, such as a lead locomotive, and one or more trailing powered units110, such as trailing locomotives. As used herein, the terms “lead” and “trailing” are designations of different powered units, and do not necessarily reflect positioning of the powered units108,110in the train102or the lead consist114. For example, a lead powered unit may be disposed between two trailing powered units. Alternatively, the term “lead” may refer to the first powered unit in the train102, the first powered unit in the lead consist114, and the first powered unit in the trailing consist140. The term “trailing” powered units may refer to powered units positioned after a lead powered unit. In another embodiment, the term “lead” refers to a powered unit that is designated for primary control of the lead consist114and/or the trailing consist140, and “trailing” refers to powered units that are under at least partial control of a lead powered unit. The powered units108,110include a connection at each end of the powered unit108,110to couple propulsion subsystems116of the powered units108,110such that the powered units108,110in the lead consist114function together as a single tractive unit. The propulsion subsystems116may include electric and/or mechanical devices and components, such as diesel engines, electric generators, and traction motors, used to provide tractive effort that propels the powered units108,110and braking effort that slows the powered units108,110. Similar to the lead consist114, the embodiment shown inFIG.1also includes the trailing consist140, including a lead powered unit148and a trailing powered unit150. The trailing consist140may be located at a rear end of the train102, or at some intermediate point along the train102. Non-powered units112may separate the lead consist114from the trailing consist140, and additional non-powered units152may be pulled behind the trailing consist140. The propulsion subsystems116of the powered units108,110in the lead consist114may be connected and communicatively coupled with each other by a network connection118. In one embodiment, the network connection118includes a net port and jumper cable that extends along the train102and between the powered units108,110. The network connection118may be a cable that includes twenty seven pins on each end that is referred to as a multiple unit cable, or MU cable. Alternatively, a different wire, cable, or bus, or other communication medium, may be used as the network connection118. For example, the network connection118may represent an Electrically Controlled Pneumatic Brake line (ECPB), a fiber optic cable, or wireless connection—such as over a 5G telecommunication network. 
Similarly, the propulsion subsystems156of the powered units148,150in the trailing consist140may be connected and communicatively coupled to each other by the network connection118, such as a MU cable extending between the powered units148,150, or wireless connections. The network connection118may include several channels over which network data is communicated. Each channel may represent a different pathway for the network data to be communicated. For example, different channels may be associated with different wires or busses of a multi-wire or multi-bus cable. Alternatively, the different channels may represent different frequencies or ranges of frequencies over which the network data is transmitted. The powered units108,110may include communication units120,126configured to communicate information used in the control operations of various components and subsystems, such as the propulsion subsystems116of the powered units108,110. The communication unit120disposed in the lead powered unit108may be referred to as a lead communication unit. The lead communication unit120may be the unit that initiates the transmission of data packets forming a message to the off-board, remote controller interface104. For example, the lead communication unit120may transmit a message via a WiFi or cellular modem to the off-board remote controller interface104. The message may contain information on an operational state of the lead powered unit108, such as a throttle setting, a brake setting, readiness for dynamic braking, the tripping of a circuit breaker on-board the lead powered unit, or other operational characteristics. Additional operational information associated with a locomotive such as an amount of wheel slippage, wheel temperatures, wheel bearing temperatures, brake temperatures, and dragging equipment detection may also be communicated from sensors on-board a locomotive or other train asset, or from various sensors located in wayside equipment or sleeper ties positioned at intervals along the train track. The communication units126may be disposed in different trailing powered units110and may be referred to as trailing communication units. Alternatively, one or more of the communication units120,126may be disposed outside of the corresponding powered units108,110, such as in a nearby or adjacent non-powered unit112. Another lead communication unit160may be disposed in the lead powered unit148of the trailing consist140. The lead communication unit160of the trailing consist140may be a unit that receives data packets forming a message transmitted by the off-board, remote controller interface104. For example, the lead communication unit160of the trailing consist140may receive a message from the off-board remote controller interface104providing operational commands that are based upon the information transmitted to the off-board remote controller interface104via the lead communication unit120of the lead powered unit108of the lead consist114. A trailing communication unit166may be disposed in a trailing powered unit150of the trailing consist140, and interconnected with the lead communication unit160via the network connection118. The communication units120,126in the lead consist114, and the communication units160,166in the trailing consist140may be connected with the network connection118such that all of the communication units for each consist are communicatively coupled with each other by the network connection118and linked together in a computer network. 
Alternatively, the communication units may be linked by another wire, cable, or bus, or be linked by one or more wireless connections. The networked communication units120,126,160,166may include antenna modules122. The antenna modules122may represent separate individual antenna modules or sets of antenna modules disposed at different locations along the train102. For example, an antenna module122may represent a single wireless receiving device, such as a single 220 MHz TDMA antenna module, a single cellular modem, a single wireless local area network (WLAN) antenna module (such as a “Wi-Fi” antenna module capable of communicating using one or more of the IEEE 802.11 standards or another standard), a single WiMax (Worldwide Interoperability for Microwave Access) antenna module, a single satellite antenna module (or a device capable of wirelessly receiving a data message from an orbiting satellite), a single 3G antenna module, a single 4G antenna module, a single 5G antenna module, and the like. As another example, an antenna module122may represent a set or array of antenna modules, such as multiple antenna modules having one or more TDMA antenna modules, cellular modems, Wi-Fi antenna modules, WiMax antenna modules, satellite antenna modules, 3G antenna modules, 4G antenna modules, and/or 5G antenna modules. As shown inFIG.1, the antenna modules122may be disposed at spaced apart locations along the length of the train102. For example, the single or sets of antenna modules represented by each antenna module122may be separated from each other along the length of the train102such that each single antenna module or antenna module set is disposed on a different powered or non-powered unit108,110,112,148,150,152of the train102. The antenna modules122may be configured to send data to and receive data from the off-board remote controller interface104. For example, the off-board remote controller interface104may include an antenna module124that wirelessly communicates the network data from a remote location that is off of the track106to the train102via one or more of the antenna modules122. Alternatively, the antenna modules122may be connectors or other components that engage a pathway over which network data is communicated, such as through an Ethernet connection. The diverse antenna modules122enable the train102to receive the network data transmitted by the off-board remote controller interface104at multiple locations along the train102. Increasing the number of locations where the network data can be received by the train102may increase the probability that all, or a substantial portion, of a message conveyed by the network data is received by the train102. For example, if some antenna modules122are temporarily blocked or otherwise unable to receive the network data as the train102is moving relative to the off-board remote controller interface104, other antenna modules122that are not blocked and are able to receive the network data may receive the network data. An antenna module122receiving data and command control signals from the off-board device104may in turn re-transmit that received data and signals to the appropriate lead communication unit120of the lead locomotive consist114, or the lead communication unit160of the trailing locomotive consist140. Any data packet of information received from the off-board remote controller interface104may include header information or other means of identifying which locomotive in which locomotive consist the information is intended for. 
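As a non-limiting sketch of the header-based forwarding described above, the following Python fragment selects the communication unit that should receive a packet based on the consist and unit named in its header; the header layout, forwarding table, and identifiers are hypothetical assumptions.

# Hypothetical header-based forwarding from any antenna module to the
# intended lead communication unit; header layout is illustrative only.
import json

FORWARD_TABLE = {
    ("lead_consist", "108"): "comm_unit_120",
    ("trailing_consist", "148"): "comm_unit_160",
}

def route_packet(raw: bytes) -> str:
    """Return the identifier of the communication unit that should receive
    the payload, based on the consist and unit named in the packet header."""
    packet = json.loads(raw.decode("utf-8"))
    header = packet["header"]
    key = (header["consist"], header["unit"])
    try:
        return FORWARD_TABLE[key]
    except KeyError:
        raise ValueError(f"no communication unit registered for {key}")

if __name__ == "__main__":
    msg = json.dumps({"header": {"consist": "trailing_consist", "unit": "148"},
                      "body": {"throttle_notch": 5}}).encode("utf-8")
    print(route_packet(msg))   # comm_unit_160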
Although the lead communication unit120on the lead consist may be the unit that initiates the transmission of data packets forming a message to the off-board, remote controller interface104, all of the lead and trailing communication units may be configured to receive and transmit data packets forming messages. Accordingly, in various alternative implementations according to this disclosure, a command control signal providing operational commands for the lead and trailing locomotives may originate at the remote controller interface104rather than at the lead powered unit108of the lead consist114. Each locomotive or powered unit of the train102may include a car body supported at opposing ends by a plurality of trucks. Each truck may be configured to engage the track106via a plurality of wheels, and to support a frame of the car body. One or more traction motors may be associated with one or all wheels of a particular truck, and any number of engines and generators may be mounted to the frame within the car body to make up the propulsion subsystems116,156on each of the powered units. The propulsion subsystems116,156of each of the powered units may be further interconnected throughout the train102along one or more high voltage power cables in a power sharing arrangement. Energy storage devices (not shown) may also be included for short term or long term storage of energy generated by the propulsion subsystems or by the traction motors when the traction motors are operated in a dynamic braking or generating mode. Energy storage devices may include batteries, ultra-capacitors, flywheels, fluid accumulators, and other energy storage devices with capabilities to store large amounts of energy rapidly for short periods of time, or more slowly for longer periods of time, depending on the needs at any particular time. The DC or AC power provided from the propulsion subsystems116,156or energy storage devices along the power cable may drive AC or DC traction motors to propel the wheels. Each of the traction motors may also be operated in a dynamic braking mode as a generator of electric power that may be provided back to the power cables and/or energy storage devices. Control over engine operation (e.g., starting, stopping, fueling, exhaust aftertreatment, etc.) and traction motor operation, as well as other locomotive controls, may be provided by way of an on-board controller200and various operational control devices housed within a cab supported by the frame of the train102. In some implementations of this disclosure, initiation of these controls may be implemented in the cab of the lead powered unit108in the lead consist114of the train102. In other alternative implementations, initiation of operational controls may be implemented off-board at the remote controller interface104, or at a powered unit of a trailing consist. As discussed above, the various computer control systems involved in the operation of the train102may be distributed across a number of local and/or remote physical locations and communicatively coupled over one or more wireless or wired communication networks. As shown inFIG.2, an exemplary implementation of the control system100may include the on-board controller200. The on-board controller200may include an energy management system232configured to determine, e.g., one or more of throttle requests, dynamic braking requests, and pneumatic braking requests234for one or more of the powered and non-powered units of the train. 
The energy management system232may be configured to make these various requests based on a variety of measured operational parameters, track grade, track conditions, freight loads, trip plans, and predetermined maps or other stored data with one or more goals of improving availability, safety, timeliness, overall fuel economy and emissions output for individual powered units, consists, or the entire train. The cab of the lead powered unit108,148in each of the consists may also house a plurality of operational control devices and control system interfaces. The operational control devices may be used by an operator to manually control the locomotive, or may be controlled electronically via messages received from off-board the train. Operational control devices may include, among other things, an engine run/isolation switch, a generator field switch, an automatic brake handle, an independent brake handle, a lockout device, and any number of circuit breakers. Manual input devices may include switches, levers, pedals, wheels, knobs, push-pull devices, touch screen displays, etc. Operation of the engines, generators, inverters, converters, and other auxiliary devices may be at least partially controlled by switches or other operational control devices that may be manually movable between a run or activated state and an isolation or deactivated state by an operator of the train102. The operational control devices may be additionally or alternatively activated and deactivated by solenoid actuators or other electrical, electromechanical, or electro-hydraulic devices. The off-board remote controller interface104,204may also require compliance with security protocols to ensure that only designated personnel may remotely activate or deactivate components on-board the train from the off-board remote controller interface after certain prerequisite conditions have been met. The off-board remote controller interface may include various security algorithms or other means of comparing an operator authorization input with a predefined security authorization parameter or level. The security algorithms may also establish restrictions or limitations on controls that may be performed based on the location of a locomotive, authorization of an operator, and other parameters. Circuit breakers may be associated with particular components or subsystems of a locomotive on the train102, and configured to trip when operating parameters associated with the components or subsystems deviate from expected or predetermined ranges. For example, circuit breakers may be associated with power directed to individual traction motors, HVAC components, and lighting or other electrical components, circuits, or subsystems. When a power draw greater than an expected draw occurs, the associated circuit breaker may trip, or switch from a first state to a second state, to interrupt the corresponding circuit. In some implementations of this disclosure, a circuit breaker may be associated with an on-board control system or communication unit that controls wireless communication with the off-board remote controller interface. After a particular circuit breaker trips, the associated component or subsystem may be disconnected from the main electrical circuit of the locomotive102and remain nonfunctional until the corresponding breaker is reset. The circuit breakers may be manually tripped or reset. 
Alternatively or in addition, the circuit breakers may include actuators or other control devices that can be selectively energized to autonomously or remotely switch the state of the associated circuit breakers in response to a corresponding command received from the off-board remote controller interface104,204. In some embodiments, a maintenance signal may be transmitted to the off-board remote controller interface104,204upon switching of a circuit breaker from a first state to a second state, thereby indicating that action such as a reset of the circuit breaker may be needed. In some situations, train102may travel through several different geographic regions and encounter different operating conditions in each region. For example, different regions may be associated with varying track conditions, steeper or flatter grades, speed restrictions, noise restrictions, and/or other such conditions. Some operating conditions in a given geographic region may also change over time as, for example, track rails wear and speed and/or noise restrictions are implemented or changed. Other circumstantial and contextual conditions, such as distances between sidings, distances from rail yards, limitations on access to maintenance resources, and other such considerations may vary throughout the course of mission. Operators may therefore wish to implement certain control parameters in certain geographic regions to address particular operating conditions. To help operators implement desired control strategies based on the geographic location of the train102, the on-board controller200may be configured to include a graphical user interface (GUI) that allows operators and/or other users to establish and define the parameters of geo-fences along a travel route. A geo-fence is a virtual barrier that may be set up in a software program and used in conjunction with global positioning systems (GPS) or radio frequency identification (RFID) to define geographical boundaries. As an example, a geo-fence may be defined along a length of track that has a grade greater than a certain threshold. A first geo-fence may define a no-stop zone, where the track grade is so steep that a train will not be able to traverse the length of track encompassed by the first geo-fence if allowed to stop. A second geo-fence may define an unfavorable-stop zone, where the grade is steep enough that a train stopping in the unfavorable-stop zone may be able to traverse the second geo-fence after a stop, but will miss a trip objective such as arriving at a destination by a certain time. A third geo-fence may define a favorable-stop zone, where the grade of the track is small enough that the train will be able to come to a complete stop within the favorable-stop zone for reasons such as repair or adjustment of various components or subsystems, and then resume travel and traverse the third geo-fence while meeting all trip objectives. The remote controller interface104may include a GUI configured to display information and receive user inputs associated with the train. The GUI may be a graphic display tool including menus (e.g., drop-down menus), modules, buttons, soft keys, toolbars, text boxes, field boxes, windows, and other means to facilitate the conveyance and transfer of information between a user and remote controller interface104,204. 
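As a purely illustrative sketch of the three stop-zone designations described above, the following Python fragment maps a track grade to a zone; the grade thresholds are hypothetical placeholders rather than values taken from this disclosure.

# Illustrative zone classification; thresholds are hypothetical placeholders.
NO_STOP_GRADE = 1.5          # percent grade above which a stopped train cannot restart
UNFAVORABLE_GRADE = 0.8      # percent grade above which a stop risks missing trip objectives

def classify_zone(grade_percent: float) -> str:
    """Map a track grade to one of the three stop-zone designations."""
    if grade_percent >= NO_STOP_GRADE:
        return "no-stop"
    if grade_percent >= UNFAVORABLE_GRADE:
        return "unfavorable-stop"
    return "favorable-stop"

if __name__ == "__main__":
    for g in (0.3, 1.0, 2.1):
        print(g, classify_zone(g))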
Access to the GUI may require user authentication, such as, for example, a username, a password, a pin number, an electromagnetic passkey, etc., to display certain information and/or functionalities of the GUI. The energy management system232of the controller200on-board a lead locomotive208may be configured to automatically determine one or more of throttle requests, dynamic braking requests, and pneumatic braking requests234for one or more of the powered and non-powered units of the train. The energy management system232may be configured to make these various requests based on a variety of measured operational parameters, track conditions, freight loads, trip plans, and predetermined maps or other stored data with a goal of improving one or more of availability, safety, timeliness, overall fuel economy and emissions output for individual locomotives, consists, or the entire train. Some of the measured operational parameters such as track grade or other track conditions may be associated with one or more predetermined geo-fences. The cab of the lead locomotive208in each of the consists114,140along the train102may also house a plurality of input devices, operational control devices, and control system interfaces. The input devices may be used by an operator to manually control the locomotive, or the operational control devices may be controlled electronically via messages received from off-board the train. The input devices and operational control devices may include, among other things, an engine run/isolation switch, a generator field switch, an automatic brake handle (for the entire train and locomotives), an independent brake handle (for the locomotive only), a lockout device, and any number of circuit breakers. Manual input devices may include switches, levers, pedals, wheels, knobs, push-pull devices, and touch screen displays. The controller200may also include a microprocessor-based locomotive control system237having at least one programmable logic controller (PLC), a cab electronics system238, and an electronic air (pneumatic) brake system236, all mounted within a cab of the locomotive. The cab electronics system238may comprise at least one integrated display computer configured to receive and display data from the outputs of one or more of machine gauges, indicators, sensors, and controls. The cab electronics system238may be configured to process and integrate the received data, receive command signals from the off-board remote controller interface204, and communicate commands such as throttle, dynamic braking, and pneumatic braking commands233to the microprocessor-based locomotive control system237. The microprocessor-based locomotive control system237may be communicatively coupled with the traction motors, engines, generators, braking subsystems, input devices, actuators, circuit breakers, and other devices and hardware used to control operation of various components and subsystems on the locomotive. In various alternative implementations of this disclosure, some operating commands, such as throttle and dynamic braking commands, may be communicated from the cab electronics system238to the locomotive control system237, and other operating commands, such as braking commands, may be communicated from the cab electronics system238to a separate electronic air brake system236. 
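As a non-limiting sketch of the command routing just described, the following Python fragment dispatches an operating command either to the locomotive control system or to the electronic air brake system; the command fields and returned identifiers are illustrative assumptions only.

# Hypothetical dispatch of operating commands from the cab electronics system.
def dispatch_command(command: dict) -> str:
    """Route a command to the subsystem that executes it and return which
    subsystem was selected (identifiers are illustrative only)."""
    kind = command.get("type")
    if kind in ("throttle", "dynamic_brake"):
        return "locomotive_control_system_237"
    if kind in ("automatic_brake", "independent_brake"):
        return "electronic_air_brake_system_236"
    raise ValueError(f"unrecognized command type: {kind!r}")

if __name__ == "__main__":
    print(dispatch_command({"type": "throttle", "notch": 4}))
    print(dispatch_command({"type": "automatic_brake", "reduction_psi": 10}))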
One of ordinary skill in the art will recognize that the various functions performed by the locomotive control system237and electronic air brake system236may be performed by one or more processing modules or controllers through the use of hardware, software, firmware, or various combinations thereof. Examples of the types of controls that may be performed by the locomotive control system237may include radar-based wheel slip control for improved adhesion, automatic engine start stop (AESS) for improved fuel economy, control of the lengths of time at which traction motors are operated at temperatures above a predetermined threshold, control of generators/alternators, control of inverters/converters, the amount of exhaust gas recirculation (EGR) and other exhaust aftertreatment processes performed based on detected levels of certain pollutants, and other controls performed to improve safety, increase overall fuel economy, reduce overall emission levels, and increase longevity and availability of the locomotives. The at least one PLC of the locomotive control system237may also be configurable to selectively set predetermined ranges or thresholds for monitoring operating parameters of various subsystems. When a component detects that an operating parameter has deviated from the predetermined range, or has crossed a predetermined threshold, a maintenance signal may be communicated off-board to the remote controller interface204. The at least one PLC of the locomotive control system237may also be configurable to receive one or more command signals indicative of at least one of a throttle command, a dynamic braking readiness command, and an air brake command233, and output one or more corresponding command control signals configured to at least one of change a throttle position, activate or deactivate dynamic braking, and apply or release a pneumatic brake, respectively. The cab electronics system238may provide integrated computer processing and display capabilities on-board the train102, and may be communicatively coupled with a plurality of cab gauges, indicators, and sensors, as well as being configured to receive commands from the remote controller interface204. The cab electronics system238may be configured to process outputs from one or more of the gauges, indicators, and sensors, and supply commands to the locomotive control system237. In various implementations, the remote controller interface204may comprise a distributed system of servers, on-board and/or off-board the train, or a single laptop, hand-held device, or other computing device or server with software, encryption capabilities, and Internet access for communicating with the on-board controller200of the lead locomotive208of a lead consist and the lead locomotive248of a trailing consist. Control command signals generated by the cab electronics system238on the lead locomotive208of the lead consist may be communicated to the locomotive control system237of the lead locomotive of the lead consist, and may be communicated in parallel via a WiFi/cellular modem250off-board to the remote controller interface204. The lead communication unit120on-board the lead locomotive of the lead consist may include the WiFi/cellular modem250and any other communication equipment required to modulate and transmit the command signals off-board the locomotive and receive command signals on-board the locomotive. 
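By way of illustration of the predetermined ranges and maintenance signals described above, the following Python sketch emits one maintenance signal for each reading that falls outside its configured range; the parameter ranges and signal format are hypothetical assumptions, not the PLC configuration of this disclosure.

# Hypothetical parameter-range monitor; ranges and signal format are illustrative.
from typing import Dict, List, Tuple

PARAMETER_RANGES: Dict[str, Tuple[float, float]] = {
    "coolant_temp_c": (40.0, 104.0),
    "oil_pressure_kpa": (200.0, 700.0),
    "traction_motor_temp_c": (0.0, 180.0),
}

def check_parameters(readings: Dict[str, float]) -> List[dict]:
    """Return one maintenance signal per reading outside its configured range."""
    signals = []
    for name, value in readings.items():
        low, high = PARAMETER_RANGES.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            signals.append({"parameter": name, "value": value,
                            "allowed": [low, high], "action": "notify_back_office"})
    return signals

if __name__ == "__main__":
    print(check_parameters({"coolant_temp_c": 110.0, "oil_pressure_kpa": 450.0}))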
As shown inFIG.2, the remote controller interface204may relay commands received from the lead locomotive208via another WiFi/cellular modem250to another cab electronics system238on-board the lead locomotive248of the trailing consist. The control systems and interfaces on-board and off-board the train may embody single or multiple microprocessors, field programmable gate arrays (FPGAs), digital signal processors (DSPs), programmable logic controllers (PLCs), etc., that include means for controlling operations of the train102in response to operator requests, built-in constraints, sensed operational parameters, and/or communicated instructions from the remote controller interface104,204. Numerous commercially available microprocessors can be configured to perform the functions of these components. Various known circuits may be associated with these components, including power supply circuitry, signal-conditioning circuitry, actuator driver circuitry (i.e., circuitry powering solenoids, motors, or piezo actuators), and communication circuitry. The locomotives208,248may be outfitted with any number and type of sensors known in the art for generating signals indicative of associated operating parameters. In one example, a locomotive208,248may include a temperature sensor configured to generate a signal indicative of a coolant temperature of an engine on-board the locomotive. Additionally or alternatively, sensors may include brake temperature sensors, exhaust sensors, fuel level sensors, pressure sensors, knock sensors, reductant level or temperature sensors, speed sensors, motion detection sensors, location sensors, or any other sensor known in the art. The signals generated by the sensors may be directed to the cab electronics system238for further processing and generation of appropriate commands. Any number and type of warning devices may also be located on-board each locomotive, including an audible warning device and/or a visual warning device. Warning devices may be used to alert an operator on-board a locomotive of an impending operation, for example startup of the engine(s). Warning devices may be triggered manually from on-board the locomotive (e.g., in response to movement of a component or operational control device to the run state) and/or remotely from off-board the locomotive (e.g., in response to control command signals received from the remote controller interface204.) When triggered from off-board the locomotive, a corresponding command signal used to initiate operation of the warning device may be communicated to the on-board controller200and the cab electronics system238. The on-board controller200and the off-board remote controller interface204may include any means for monitoring, recording, storing, indexing, processing, and/or communicating various operational aspects of the locomotive208,248. These means may include components such as, for example, a memory, one or more data storage devices, a central processing unit, or any other components that may be used to run an application. Furthermore, although aspects of the present disclosure may be described generally as being stored in memory, one skilled in the art will appreciate that these aspects can be stored on or read from different types of computer program products or non-transitory computer-readable media such as computer chips and secondary storage devices, including hard disks, floppy disks, optical media, CD-ROM, or other forms of RAM or ROM. 
The off-board remote controller interface204may be configured to execute instructions stored on non-transitory computer readable medium to perform methods of remote control of the locomotive230. That is, as will be described in more detail in the following section, on-board control (manual and/or autonomous control) of some operations of the locomotive (e.g., operations of traction motors, engine(s), circuit breakers, etc.) may be selectively overridden by the off-board remote controller interface204. Remote control of the various powered and non-powered units on the train102through communication between the on-board cab electronics system238and the off-board remote controller interface204may be facilitated via the various communication units120,126,160,166spaced along the train102. The communication units may include hardware and/or software that enables sending and receiving of data messages between the powered units of the train and the off-board remote controller interfaces. The data messages may be sent and received via a direct data link and/or a wireless communication link, as desired. The direct data link may include an Ethernet connection, a connected area network (CAN), or another data link known in the art. The wireless communications may include satellite, cellular, infrared, and any other type of wireless communications that enable the communication units to exchange information between the off-board remote controller interfaces and the various components and subsystems of the train102. As shown in the exemplary embodiment ofFIG.2, the cab electronics system238may be configured to receive the requests234after they have been processed by a locomotive interface gateway (LIG)235, which may also enable modulation and communication of the requests through a WiFi/cellular modem250to the off-board remote controller interface (back office)204. The cab electronics system238may be configured to communicate commands (e.g., throttle, dynamic braking, and braking commands233) to the locomotive control system237and an electronic air brake system236on-board the lead locomotive208in order to autonomously control the movements and/or operations of the lead locomotive. In parallel with communicating commands to the locomotive control system237of the lead locomotive208, the cab electronics system238on-board the lead locomotive208of the lead consist may also communicate commands to the off-board remote controller interface204. The commands may be communicated either directly or through the locomotive interface gateway235, via the WiFi/cellular modem250, off-board the lead locomotive208of the lead consist to the remote controller interface204. The remote controller interface204may then communicate the commands received from the lead locomotive208to the trailing consist lead locomotive248. The commands may be received at the trailing consist lead locomotive248via another WiFi/cellular modem250, and communicated either directly or through another locomotive interface gateway235to a cab electronics system238. The cab electronics system238on-board the trailing consist lead locomotive248may be configured to communicate the commands received from the lead locomotive208of the lead consist to a locomotive control system237and an electronic air brake system236on-board the trailing consist lead locomotive248. 
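The following Python sketch is a non-limiting illustration of the relay path just described, in which a command issued by the lead consist is forwarded through the off-board interface to the trailing consist; the in-memory queues stand in for the WiFi/cellular links and are purely hypothetical.

# Hypothetical end-to-end relay of a command from the lead consist, through
# the off-board interface, to the trailing consist; queues are stand-ins for
# the wireless links described above.
from queue import Queue

lead_to_back_office: Queue = Queue()
back_office_to_trailing: Queue = Queue()

def lead_locomotive_issue(command: dict) -> None:
    """Lead consist applies the command locally and forwards it off-board."""
    lead_to_back_office.put(command)

def back_office_relay() -> None:
    """Remote controller interface relays each received command."""
    while not lead_to_back_office.empty():
        back_office_to_trailing.put(lead_to_back_office.get())

def trailing_consist_apply() -> list:
    """Trailing consist lead unit applies every relayed command."""
    applied = []
    while not back_office_to_trailing.empty():
        applied.append(back_office_to_trailing.get())
    return applied

if __name__ == "__main__":
    lead_locomotive_issue({"type": "throttle", "notch": 6})
    back_office_relay()
    print(trailing_consist_apply())   # [{'type': 'throttle', 'notch': 6}]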
The commands from the lead locomotive208of the lead consist may also be communicated via the network connection118from the trailing consist lead locomotive248to one or more trailing powered units150of the trailing consist140. The result of configuring all of the lead powered units of the lead and trailing consists to communicate via the off-board remote controller interface204is that the lead powered unit of each trailing consist may respond quickly and in close coordination with commands responded to by the lead powered unit of the lead consist. Additionally, each of the powered units in various consists along a long train may quickly and reliably receive commands such as throttle, dynamic braking, and pneumatic braking commands234initiated by a lead locomotive in a lead consist regardless of location and conditions. The integrated cab electronics systems238on the powered units of the lead consist114and on the powered units of the trailing consist140may also be configured to receive and generate commands for configuring or reconfiguring various switches, handles, and other operational control devices on-board each of the powered units of the train as required before the train begins on a journey, or after a failure occurs that requires reconfiguring of all or some of the powered units. Examples of switches and handles that may require configuring or reconfiguring before a journey or after a failure may include an engine run switch, a generator field switch, an automatic brake handle, and an independent brake handle. Remotely controlled actuators on-board the powered units in association with each of the switches and handles may enable remote, autonomous configuring and reconfiguring of each of the devices. For example, before the train begins a journey, or after a critical failure has occurred on one of the lead or trailing powered units, commands may be sent from the off-board remote controller interface204to any powered unit in order to automatically reconfigure all of the switches and handles as required on-board each powered unit without requiring an operator to be on-board the train. Following the reconfiguring of all of the various switches and handles on-board each locomotive, the remote controller interface may also send messages to the cab electronics systems on-board each locomotive appropriate for generating other operational commands such as changing throttle settings, activating or deactivating dynamic braking, and applying or releasing pneumatic brakes. This capability saves the time and expense of having to delay the train while sending an operator to each of the powered units on the train to physically switch and reconfigure all of the devices required. FIG.3is an illustration of a system according to an exemplary embodiment of this disclosure for utilizing real-time data for predictive analysis of the performance of a monitored computer system, such as train control system100shown inFIG.1. The system300may include a series of sensors (i.e., Sensor A304, Sensor B306, Sensor C308) interfaced with the various components of a monitored system302, a data acquisition hub312, an analytics server316, and a client device328. The monitored system302may include one or more of the train control systems illustrated inFIG.2, such as an energy management system, a cab electronics system, and a locomotive control system. 
It should be understood that the monitored system302can be any combination of components whose operations can be monitored with sensors and where each component interacts with or is related to at least one other component within the combination. For a monitored system302that is a train control system, the sensors may include brake temperature sensors, exhaust sensors, fuel level sensors, pressure sensors, knock sensors, reductant level or temperature sensors, generator power output sensors, voltage or current sensors, speed sensors, motion detection sensors, location sensors, wheel temperature or bearing temperature sensors, or any other sensor known in the art for monitoring various train operational parameters. The sensors are configured to provide output values for system parameters that indicate the operational status and/or “health” of the monitored system302. The sensors may include sensors for monitoring the operational status and/or health of the various physical systems associated with operation of a train, as well as the operational status of the various computer systems and subsystems associated with operation of the train. The sensors may also be configured to measure additional data that can affect system operation. For example, sensor output can include environmental information, e.g., temperature, humidity, etc., which can impact the operation and efficiency of the various train control systems. In one exemplary embodiment, the various sensors304,306,308may be configured to output data in an analog format. For example, electrical power sensor measurements (e.g., voltage, current, etc.) are sometimes conveyed in an analog format as the measurements may be continuous in both time and amplitude. In another embodiment, the sensors may be configured to output data in a digital format. For example, the same electrical power sensor measurements may be taken in discrete time increments that are not continuous in time or amplitude. In still another embodiment, the sensors may be configured to output data in either an analog or digital format depending on the sampling requirements of the monitored system302. The sensors can be configured to capture output data at split-second intervals to effectuate “real time” data capture. For example, in one embodiment, the sensors can be configured to generate hundreds of thousands of data readings per second. It should be appreciated, however, that the number of data output readings taken by a sensor may be set to any value as long as the operational limits of the sensor and the data processing capabilities of the data acquisition hub312are not exceeded. Each sensor may be communicatively connected to the data acquisition hub312via an analog or digital data connection310. The data acquisition hub312may be a standalone unit or integrated within the analytics server316and can be embodied as a piece of hardware, software, or some combination thereof. In one embodiment, the data connection310is a “hard wired” physical data connection (e.g., serial, network, etc.). For example, a serial or parallel cable connection between the sensor and the hub312. In another embodiment, the data connection310is a wireless data connection. For example, a 5G radio frequency (RF) cellular connection, BLUETOOTH™, infrared or equivalent connection between the sensor and the hub312. The data acquisition hub312may be configured to communicate “real-time” data from the monitored system302to the analytics server316using a network connection314. 
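As a purely illustrative sketch of buffering high-rate sensor readings at a data acquisition hub, the following Python fragment timestamps readings into a bounded buffer and hands them off in batches; the buffer size and record layout are hypothetical assumptions rather than limits of the hub described above.

# Hypothetical data acquisition hub sketch; buffer size and record layout are
# illustrative only.
import time
from collections import deque

class DataAcquisitionHub:
    def __init__(self, max_buffered_readings: int = 100_000):
        self.buffer = deque(maxlen=max_buffered_readings)  # oldest readings drop first

    def ingest(self, sensor_id: str, value: float) -> None:
        """Timestamp and buffer a single sensor reading."""
        self.buffer.append((time.time(), sensor_id, value))

    def drain(self) -> list:
        """Hand the buffered readings to the analytics server and clear them."""
        readings = list(self.buffer)
        self.buffer.clear()
        return readings

if __name__ == "__main__":
    hub = DataAcquisitionHub()
    for i in range(5):
        hub.ingest("exhaust_temp_c", 420.0 + i)
    print(len(hub.drain()))   # 5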
In one embodiment, the network connection314is a “hardwired” physical connection. For example, the data acquisition hub312may be communicatively connected (via Category 5 (CAT 5), fiber optic, or equivalent cabling) to a data server (not shown) that is communicatively connected (via CAT 5, fiber optic, or equivalent cabling) through the Internet to the analytics server316, the analytics server316being also communicatively connected with the Internet (via CAT 5, fiber optic, or equivalent cabling). In another embodiment, the network connection314is a wireless network connection (e.g., 5G cellular, Wi-Fi, WLAN, etc.). For example, utilizing an 802.11a/b/g or equivalent transmission format. In practice, the network connection utilized is dependent upon the particular requirements of the monitored system302. Data acquisition hub312may also be configured to supply warning and alarm signals as well as control signals to monitored system302and/or sensors304,306, and308as described in more detail below. As shown inFIG.3, in one embodiment, the analytics server316may host an analytics engine318, a virtual system modeling engine324, a calibration engine334, and several databases326,330, and332. Additional engines or processing modules may also be included in analytics server316, such as an energy management machine learning modeling engine, an operator behavior modeling engine, a simulation engine, and other machine learning or artificial intelligence engines or processing modules. The virtual system modeling engine324can be, e.g., a computer modeling system. In this context, the modeling engine can be used to precisely model and mirror the actual train control systems and subsystems. Analytics engine318can be configured to generate predicted data for the monitored systems and analyze differences between the predicted data and the real-time data received from data acquisition hub312. Analytics server316may be interfaced with a monitored train control system302via sensors, e.g., sensors304,306, and308. The various sensors are configured to supply real-time data from the various physical components and computer systems and subsystems of train102. The real-time data is communicated to analytics server316via data acquisition hub312and network314. Hub312can be configured to provide real-time data to analytics server316as well as alarming, sensing, and control features for the monitored system302, such as the train control system100. The real-time data from data acquisition hub312can be passed to a comparison engine, which can be separate from or form part of analytics engine318. The comparison engine can be configured to continuously compare the real-time data with predicted values generated by virtual system modeling engine324or another simulation engine included as part of analytics server316. Based on the comparison, the comparison engine can be further configured to determine whether deviations between the real-time values and the expected values exist, and if so to classify the deviation, e.g., high, marginal, low, etc. The deviation level can then be communicated to a decision engine, which can also be included as part of analytics engine318or as a separate processing module. The decision engine can be configured to look for significant deviations between the predicted values and real-time values as received from the comparison engine.
If significant deviations are detected, the decision engine can also be configured to determine whether an alarm condition exists, activate the alarm and communicate the alarm to a Human-Machine Interface (HMI) for display in real-time via, e.g., client328. The decision engine of analytics engine318can also be configured to perform root cause analysis for significant deviations in order to determine the interdependencies and identify any failure relationships that may be occurring. The decision engine can also be configured to determine health and performance levels and indicate these levels for the various processes and equipment via the HMI of client328. All of which, when combined with the analytical and machine learning capabilities of analytics engine318allows the operator to minimize the risk of catastrophic equipment failure by predicting future failures and providing prompt, informative information concerning potential/predicted failures before they occur. Avoiding catastrophic failures reduces risk and cost, and maximizes facility performance and up time. A simulation engine that may be included as part of analytics server316may operate on complex logical models of the various control systems and subsystems of on-board controller200and train control system100. These models may be continuously and automatically synchronized with the actual status of the control systems based on the real-time data provided by the data acquisition hub312to analytics server316. In other words, the models are updated based on current switch status, breaker status, e.g., open-closed, equipment on/off status, etc. Thus, the models are automatically updated based on such status, which allows a simulation engine to produce predicted data based on the current train operational status. This in turn, allows accurate and meaningful comparisons of the real-time data to the predicted data. Example models that can be maintained and used by analytics server316may include models used to calculate train trip optimization, determine component operational requirements for improved asset life expectancy, determine efficient allocation and utilization of computer control systems and computer resources, etc. In certain embodiments, data acquisition hub312may also be configured to supply equipment identification associated with the real-time data. This identification can be cross referenced with identifications provided in the models. In one embodiment, if a comparison performed by a comparison engine indicates that a differential between a real-time sensor output value and an expected value exceeds a threshold value but remains below an alarm condition (i.e., alarm threshold value), a calibration request may be generated by the analytics engine318. If the differential exceeds the alarm threshold value, an alarm or notification message may be generated by the analytics engine318. The alarm or notification message may be sent directly to the client (i.e., user)328for display in real-time on a web browser, pop-up message box, e-mail, or equivalent on the client328display panel. In another embodiment, the alarm or notification message may be sent to a wireless mobile device to be displayed for the user by way of a wireless router or equivalent device interfaced with the analytics server316. The alarm can be indicative of a need for a repair event or maintenance, such as synchronization of any computer control systems that are no longer communicating within allowable latency parameters. 
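As a non-limiting illustration of the calibration-request and alarm behavior just described, the following Python sketch classifies the relative deviation between a real-time value and a predicted value and selects the resulting action; the threshold values are hypothetical placeholders.

# Illustrative deviation handling; thresholds are hypothetical placeholders.
def handle_deviation(real_time_value: float,
                     predicted_value: float,
                     calibration_threshold: float = 0.05,
                     alarm_threshold: float = 0.20) -> str:
    """Classify the relative deviation and choose the resulting action."""
    if predicted_value == 0:
        return "alarm" if real_time_value != 0 else "no_action"
    deviation = abs(real_time_value - predicted_value) / abs(predicted_value)
    if deviation > alarm_threshold:
        return "alarm"                 # notify the client in real time
    if deviation > calibration_threshold:
        return "calibration_request"   # re-synchronize the virtual model
    return "no_action"

if __name__ == "__main__":
    print(handle_deviation(105.0, 100.0))   # calibration_request
    print(handle_deviation(130.0, 100.0))   # alarm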
The responsiveness, calibration, and synchronization of various computer systems can also be tracked by comparing expected operational characteristics based on historical data associated with the various systems and subsystems of the train to actual characteristics measured after implementation of control commands, or by comparing actual measured parameters to predicted parameters under different operating conditions. Virtual system modeling engine324may create multiple models that can be stored in the virtual system model database326. Machine learning algorithms may be employed by virtual system modeling engine324to create a variety of virtual model applications based on real time and historical data gathered by data acquisition hub314from a large variety of sensors measuring operational parameters of train102. The virtual system models may include components for modeling reliability of various train physical systems and distributed computer control systems. In addition, the virtual system models created by virtual system modeling engine324may include dynamic control logic that permits a user to configure the models by specifying control algorithms and logic blocks in addition to combinations and interconnections of train operational components and control systems. Virtual system model database326can be configured to store the virtual system models, and perform what-if simulations. In other words, the database of virtual system models can be used to allow a system designer to make hypothetical changes to the train control systems and test the resulting effect, without having to actually take the train out of service or perform costly and time consuming analysis. Such hypothetical simulations performed by virtual systems modeling engine324can be used to learn failure patterns and signatures as well as to test proposed modifications, upgrades, additions, etc., for the train control system. The real-time data, as well as detected trends and patterns produced by analytics engine318can be stored in real-time data acquisition databases330and332. According to various exemplary embodiments of this disclosure, a method of using artificial intelligence for maintaining synchronization between centralized and distributed train control models may include providing a centralized or cloud-based computer processing system in one or more of a back-office server or a plurality of servers remote from a train, and providing one or more distributed, edge-based computer processing systems on-board one or more locomotives of the train, wherein each of the distributed computer processing systems is communicatively connected to the centralized computer processing system. The method may further include receiving, at data acquisition hub312communicatively connected to one or more of databases and a plurality of sensors associated with one or more locomotives or other components of a train, real-time and historical configuration, structural, and operational data in association with inputs derived from real time and historical contextual data relating to a plurality of trains operating under a variety of different conditions for use as training data. 
The method may still further include creating, using a centralized virtual system modeling engine included in the centralized computer processing system, one or more centralized models of one or more actual train control systems in operation on-board the one or more locomotives of the train based at least in part on data received from the data acquisition hub, wherein a first one of the centralized models is utilized in a process of generating a first set of output control commands for a first train control scenario implemented by an energy management system associated with the one or more locomotives, and creating, using one or more distributed virtual system modeling engines included in the one or more distributed computer processing systems, one or more edge-based models of one or more actual train control systems in operation on-board the one or more locomotives of the train based at least in part on data received from the data acquisition hub, wherein a first one of the edge-based models is utilized in a process of generating a second set of output control commands for a second train control scenario implemented by the energy management system associated with the one or more locomotives. A machine learning engine included in at least one of the centralized and distributed computer processing systems may receive the training data from the data acquisition hub, receive the first centralized model from the centralized virtual system modeling engine, receive the first edge-based model from one of the distributed virtual system modeling engines, compare the first set of output control commands generated by the first centralized model for the first train control scenario and the second set of output control commands generated by the first edge-based model for the second train control scenario, and train a learning system using the training data to enable the machine learning engine to safely mitigate a divergence discovered between the first and second sets of output control commands using a learning function including at least one learning parameter. The machine learning engine may train the learning system by providing the training data as an input to the learning function, the learning function being configured to use the at least one learning parameter to generate an output based on the input, causing the learning function to generate the output based on the input, comparing the output to one or more of the first and second sets of output control commands to determine a difference between the output and the one or more of the first and second sets of output control commands, and modifying the at least one learning parameter and the output of the learning function to decrease the difference responsive to the difference being greater than a threshold difference and based at least in part on actual real time and historical information on in-train forces and train operational characteristics acquired from a plurality of trains operating under a variety of different conditions. The method may also include adjusting one or more of throttle requests, dynamic braking requests, and pneumatic braking requests for the one or more locomotives of the train using an energy management system associated with the one or more locomotives of the train, wherein the adjusting is based at least in part on the modified output of the learning function used by the learning system which has been trained by the machine learning engine. 
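A minimal sketch of the training procedure outlined above, assuming a single scalar learning parameter and a squared-error style update; the data, model form, and names are illustrative assumptions rather than the disclosed energy management implementation.

```python
# Illustrative sketch: a learning function with one learning parameter is nudged
# until its output matches the recorded control commands within a threshold.
def learning_function(input_condition: float, parameter: float) -> float:
    # e.g., map a track grade (%) to a throttle-notch-like command (assumption)
    return parameter * input_condition

def train(samples, parameter=0.0, threshold=0.1, lr=0.01, max_iters=10_000):
    """samples: list of (input_condition, recorded_control_command) pairs."""
    for _ in range(max_iters):
        worst = 0.0
        for x, target in samples:
            output = learning_function(x, parameter)
            difference = output - target
            worst = max(worst, abs(difference))
            if abs(difference) > threshold:
                # Modify the learning parameter to decrease the difference.
                parameter -= lr * difference * x
        if worst <= threshold:
            break
    return parameter

# Toy training data: (grade in %, command chosen by an experienced engineer)
data = [(0.5, 2.0), (1.0, 4.1), (1.5, 6.0)]
print(train(data))   # settles near 4.0 for this toy data
```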
As discussed above, the virtual system model may be periodically calibrated and synchronized with “real-time” sensor data outputs so that the virtual system model provides data output values that are consistent with the actual “real-time” values received from the sensor output signals. Unlike conventional systems that use virtual system models primarily for system design and implementation purposes (i.e., offline simulation and facility planning), the virtual system train control models or other virtual computer system models described herein may be updated and calibrated with the real-time system operational data to provide better predictive output values. A divergence between the real-time sensor output values and the predicted output values may generate either an alarm condition for the values in question and/or a calibration request that is sent to a calibration engine334. The analytics engine318and virtual system modeling engine324may be configured to implement pattern/sequence recognition into a real-time decision loop that, e.g., is enabled by machine learning. The types of machine learning implemented by the various engines of analytics server316may include various approaches to learning and pattern recognition. The machine learning may include the implementation of associative memory, which allows storage, discovery, and retrieval of learned associations between extremely large numbers of attributes in real time. At a basic level, an associative memory stores information about how attributes and their respective features occur together. The predictive power of the associative memory technology comes from its ability to interpret and analyze these co-occurrences and to produce various metrics. Associative memory is built through “experiential” learning in which each newly observed state is accumulated in the associative memory as a basis for interpreting future events. Thus, by observing normal system operation over time, and the normal predicted system operation over time, the associative memory is able to learn normal patterns as a basis for identifying non-normal behavior and appropriate responses, and to associate patterns with particular outcomes, contexts or responses. The analytics engine318is also better able to understand component mean time to failure rates through observation and system availability characteristics. This technology in combination with the virtual system model can present a novel way to digest and comprehend alarms in a manageable and coherent way. The machine learning algorithms assist in uncovering the patterns and sequencing of alarms to help pinpoint the location and cause of any actual or impending failures of physical systems or computer systems. Typically, responding to the types of alarms that may be encountered when operating a train is done manually by experts who have gained familiarity with the system through years of experience. However, at times, the amount of information is so great that an individual cannot respond fast enough or does not have the necessary expertise. An “intelligent” system employing machine learning algorithms that observe human operator actions and recommend possible responses could improve train operational safety by supporting an existing operator, or even managing the various train control systems autonomously. 
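The co-occurrence idea behind the "experiential" associative memory described above can be illustrated with a toy sketch; the attribute names and the novelty score are assumptions introduced for illustration only.

```python
# Each observed system state is a set of attributes; the memory accumulates how
# often attribute pairs co-occur, and states containing pairs it has never seen
# together can be flagged as non-normal.
from collections import Counter
from itertools import combinations

class AssociativeMemory:
    def __init__(self):
        self.pair_counts = Counter()
        self.observations = 0

    def observe(self, state):
        """Accumulate a newly observed state (a set of attribute strings)."""
        self.observations += 1
        for pair in combinations(sorted(state), 2):
            self.pair_counts[pair] += 1

    def novelty(self, state):
        """Fraction of attribute pairs in the state never seen together before."""
        pairs = list(combinations(sorted(state), 2))
        if not pairs:
            return 0.0
        unseen = sum(1 for p in pairs if self.pair_counts[p] == 0)
        return unseen / len(pairs)

memory = AssociativeMemory()
for _ in range(100):
    memory.observe({"notch_4", "dynamic_brake_off", "brake_pipe_nominal"})

print(memory.novelty({"notch_4", "dynamic_brake_off", "brake_pipe_nominal"}))  # 0.0
print(memory.novelty({"notch_8", "dynamic_brake_on", "brake_pipe_low"}))       # 1.0
```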
Current simulation approaches for maintaining transient stability and synchronization between the various train control systems may involve traditional numerical techniques that typically do not test all possible scenarios. The problem is further complicated as the numbers of components and pathways increase. Through the application of the machine learning algorithms and virtual system modeling according to various embodiments of this disclosure, by observing simulations of various outcomes determined by different train control inputs and operational parameters, and by comparing them to actual system responses, it may be possible to improve the simulation process, thereby improving the overall design of future train control systems. The virtual system model database326, as well as databases330and332, can be configured to store one or more virtual system models, virtual simulation models, and real-time data values, each customized to a particular system being monitored by the analytics server316. Thus, the analytics server316can be utilized to monitor more than one train control system or other computer system associated with the train at a time. As depicted herein, the databases326,330, and332can be hosted on the analytics server316and communicatively interfaced with the analytics engine318. In other embodiments, databases326,330, and332can be hosted on one or more separate database servers (not shown) that are communicatively connected to the analytics server316in a manner that allows the virtual system modeling engine324and analytics engine318to access the databases as needed. In one embodiment, the client328may modify the virtual system model stored on the virtual system model database326by using a virtual system model development interface including well-known modeling tools that are separate from the other network interfaces. For example, dedicated software applications that run in conjunction with the network interface may allow a client328to create or modify the virtual system models. The client328may utilize a variety of network interfaces (e.g., web browsers) to access, configure, and modify the sensors (e.g., configuration files, etc.), analytics engine318(e.g., configuration files, analytics logic, etc.), calibration parameters (e.g., configuration files, calibration logic, etc.), virtual system modeling engine324(e.g., configuration files, simulation parameters, etc.) and virtual system models of the various train control systems under management (e.g., virtual system model operating parameters and configuration files). Correspondingly, data from those various components of the monitored system302can be displayed on a client328display panel for viewing by a system administrator or equivalent. As described above, analytics server316may be configured to synchronize and/or calibrate the various train control systems and subsystems in the physical world with virtual and/or simulated models and report, e.g., via visual, real-time display, deviations between the two as well as system health, alarm conditions, predicted failures, etc. In the physical world, sensors304,306,308produce real-time data for the various train control processes and equipment that make up the monitored system302. In the virtual world, simulations generated by the virtual system modeling engine324may provide predicted values, which are correlated and synchronized with the real-time data. The real-time data can then be compared to the predicted values so that differences can be detected. 
The significance of these differences can be determined to characterize the health status of the various train control systems and subsystems. The health status can then be communicated to a user on-board the train or off-board at a remote control facility via alarms and indicators, as well as to client328, e.g., via web pages. In some embodiments, as discussed above, the analytics engine318may include a machine learning engine. The machine learning engine may include a train control strategy engine configured to receive training data from a data acquisition hub communicatively coupled to one or more sensors associated with one or more locomotives of a train. The training data may include real-time configuration and operational data, and may be communicated to the data acquisition hub and to the machine learning engine over wireless and/or wired networks. The training data may be relevant to train control operations, including a plurality of first input conditions and a plurality of first train behaviors or first actions to be taken by an operator of the train associated with the first input conditions. The training data may include historical operational data acquired by various sensors associated with one or more locomotives of the train during one or more actual train runs. The training data may also include data indicative of specific actions taken by a train operator, or directly or indirectly resulting from actions taken by the train operator, under a large variety of operating conditions, and on trains with the same or different equipment, different operational characteristics, and different parameters. The machine learning engine and train control strategy engine may be configured to train a learning system using the training data to generate a second train behavior or second action to be taken by the train operator based on a second input condition. The train behaviors generated by the machine learning engine may be integrated with and implemented by various train control systems and subsystems, such as the cab electronics system238, and locomotive control system237shown inFIG.2. The resultant controls performed by the various train control systems and subsystems based on outputs from the machine learning engine may improve the operation of trains that are being operated fully manually, semi-autonomously, or fully autonomously by enabling a shared mental model of train operator behavior between experienced human train operators or engineers, less experienced engineers, and autonomous or semi-autonomous train control systems. For example, a learning system according to various embodiments of this disclosure can be trained to learn how experienced human engineers respond to different inputs under various operating conditions, such as during the automatic implementation of train control commands by trip optimizer programs, positive train control (PTC) algorithms, and automatic train operations (ATO), during extreme weather conditions, during emergency conditions caused by other train traffic or equipment failure on the train, while approaching and maneuvering in train yards, and under other train operating conditions. The trained learning system can then improve train control systems being operated by less experienced engineers, semi-autonomously, or fully autonomously to perform operational maneuvers in a manner consistent with how the experienced human engineers would respond under similar conditions. 
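One hypothetical shape for a single training record implied by the passage above, pairing first input conditions with the first train behavior or operator action observed under those conditions; all field names and values are assumptions, not fields defined by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    grade_percent: float      # track grade at the recorded position
    speed_mph: float          # current train speed
    weather: str              # e.g., "clear", "rain", "snow"
    trailing_tons: float      # train load
    operator_action: str      # e.g., "notch_5", "dynamic_brake_2", "service_brake"

training_data = [
    TrainingRecord(1.2, 24.0, "clear", 9_500.0, "notch_6"),
    TrainingRecord(-0.8, 38.0, "rain", 9_500.0, "dynamic_brake_3"),
]
```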
Unlike existing methods for maneuvering autonomous vehicles, such as by following a control law that optimizes a variable such as a throttle notch setting at the expense of performing other operational maneuvers that an experienced human engineer would readily understand, the machine learning engine disclosed herein may allow less experienced train engineers or autonomously-operated trains to execute maneuvers including selecting optimum control settings for a particular set of operational conditions that cannot be reduced to a control law. Train control systems that include a machine learning engine configured to encode real human engineer behavior into a train control strategy engine may enable less experienced train engineers, or semi-autonomously or fully autonomously operated trains to perform optimized train handling across different terrains, with different trains, and under different operating conditions. In some embodiments, the machine learning engine of analytics engine318may be configured to receive training data including a plurality of first input conditions and a plurality of first train behaviors associated with the first input conditions. The first input conditions can represent conditions which, when applied to a train operating system or when perceived by a train engineer, lead to a particular train behavior being performed. A “train behavior” as used herein, refers to any train operational characteristics, in-train-forces (ITF), train speeds, train accelerations, decelerations, pneumatic or dynamic braking, fuel consumption, or other train functions that may directly or indirectly result from an action taken by a human engineer or autonomous or semi-autonomous controller. The input conditions can include a state of a particular locomotive in a consist, a representation or state of an environment surrounding the consist, including behavior of other trains or locomotives on the same or interconnected tracks in the same geographical area, and commands, instructions, or other communications received from other entities. The first input conditions can include an indication of a maneuver command. A maneuver command can be a command, instruction, or other information associated with a maneuver that a locomotive is expected, desired, or required to perform. Maneuver commands can vary in specificity and may include commands specific to an exact set of tracks along which the locomotive is required to travel to reach a general objective, specific throttle notch settings for one or more lead and/or trailing locomotives at different locations, under different loads or trip parameters, braking and dynamic braking commands, and other control settings to be implemented by the cab electronics system, throttle, dynamic braking and braking commands, and the locomotive control system. The machine learning engine may be configured to train a learning system using the training data to generate a second train behavior based on a second input condition. The machine learning engine can provide the training data as an input to the learning system, monitor an output of the learning system, and modify the learning system based on the output. The machine learning engine can compare the output to the plurality of first train behaviors, determine a difference between the output and the plurality of first train behaviors, and modify the learning system based on the difference between the output and the plurality of first train behaviors. 
For example, the plurality of first train behaviors may represent a goal or objective that the machine learning engine is configured to cause the learning system to match, by modifying characteristics of the learning system until the difference between the output and the plurality of first train behaviors is less than a threshold difference. In some embodiments, the machine learning engine can be configured to modify characteristics of the learning system to minimize a cost function or optimize some other objective function or goal, such as reduced emissions, during a particular train trip or over a plurality of trips or time periods. The machine learning engine can group the training data into a first set of training data for executing a first learning protocol, and a second set of training data for executing a second learning protocol. The learning system can include a learning function configured to associate the plurality of input conditions with the plurality of first train behaviors, and the learning function can define characteristics, such as a plurality of parameters. The machine learning engine can be configured to modify the plurality of parameters to decrease the difference between the output of the learning system (e.g., the output of the learning function) and the plurality of first train behaviors. Once trained, the learning system can be configured to receive the second input condition and apply the learning function to the second input condition to generate the second train behavior. In some embodiments, the learning system may include a neural network. The neural network can include a plurality of layers each including one or more nodes, such as a first layer (e.g., an input layer), a second layer (e.g., an output layer), and one or more hidden layers. The neural network can include characteristics such as weights and biases associated with computations that can be performed between nodes of layers. The machine learning engine can be configured to train the neural network by providing the first input conditions to the first layer of the neural network. The neural network can generate a plurality of first outputs based on the first input conditions, such as by executing computations between nodes of the layers. The machine learning engine can receive the plurality of first outputs, and modify a characteristic of the neural network to reduce a difference between the plurality of first outputs and the plurality of first train behaviors. In some embodiments, the learning system may include a classification engine, such as a support vector machine (SVM). The SVM can be configured to generate a mapping of first input conditions to first train behaviors. For example, the machine learning engine may be configured to train the SVM to generate one or more rules configured to classify training pairs (e.g., each first input condition and its corresponding first train behavior). The classification of training pairs can enable the mapping of first input conditions to first train behaviors by classifying particular first train behaviors as corresponding to particular first input conditions. Once trained, the learning system can generate the second train behavior based on the second input condition by applying the mapping or classification to the second input condition. 
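A minimal numpy sketch of the neural-network variant described above, assuming one hidden layer, a squared-error objective, and synthetic stand-in data; the layer sizes, learning rate, and data are arbitrary illustrative choices, not the disclosed system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 input features (e.g., grade, speed, load) -> 1 behavior value
# (e.g., a normalized throttle/brake command chosen by an experienced engineer).
X = rng.normal(size=(64, 3))
y = 0.5 * X[:, :1] - 0.3 * X[:, 1:2] + 0.1 * X[:, 2:3]   # stand-in "behaviors"

# Characteristics of the network: weights and biases for each layer.
W1, b1 = rng.normal(scale=0.5, size=(3, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))

lr = 0.05
for step in range(500):
    # Forward pass: computations between nodes of successive layers.
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    diff = out - y                         # difference from first train behaviors
    loss = float(np.mean(diff ** 2))

    # Backward pass: modify weights/biases to reduce the difference.
    grad_out = 2.0 * diff / len(X)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0, keepdims=True)
    grad_h = grad_out @ W2.T * (1.0 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(f"final mean-squared difference: {loss:.4f}")
```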
Another exemplary classification engine that may be utilized in a learning system according to various implementations of this disclosure may include a decision tree-based algorithm such as Random Forests® or Random Decision Forests. Decision trees may be used not only for classification but also for regression problems. When training on a dataset to classify a variable, the idea of a decision tree is to divide the data into smaller datasets based on a certain feature value until the target variables all fall under one category. To avoid overfitting, variations of decision tree classifiers such as a Random Forests® classifier or an AdaBoost classifier may be employed. A Random Forests® classifier fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample sizes are always the same as the original input sample size but the samples of the original data frame are drawn with replacement (bootstrapping). An AdaBoost classifier begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases. Yet another exemplary classification engine may include a Bayesian estimator such as a naïve Bayes classifier, which belongs to a family of probabilistic classifiers based on applying Bayes' theorem with strong (naïve) independence assumptions between the features. A naïve Bayes classifier may be trained by a family of algorithms based on a common principle, such as assuming that the value of a particular feature is independent of the value of any other feature, given the class variable. This type of classifier may also be trained effectively using supervised learning, which is a machine learning task of learning a function that maps an input to an output based on example input-output pairs. The function is inferred from labeled training data consisting of a set of training examples. Each example is a pair consisting of an input object (typically a vector) and a desired output value (also called a supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. In some embodiments, the learning system may include a Markov decision process engine. The machine learning engine may be configured to train the Markov decision process engine to determine a policy based on the training data, the policy indicating, representing, or resembling how a particular locomotive would behave while controlled by an experienced human engineer in response to various input conditions. The machine learning engine can provide the first input conditions to the Markov decision process engine as a set or plurality of states (e.g., a set or plurality of finite states). The machine learning engine can provide the first train behaviors to the Markov decision process engine as a set or plurality of actions (e.g., a set or plurality of finite actions). The machine learning engine can execute the Markov decision process engine to determine the policy that best represents the relationship between the first input conditions and first train behaviors. 
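A hedged sketch of one possible classification-engine realization, using scikit-learn's random-forest classifier as an off-the-shelf stand-in for the decision-tree ensemble mentioned above; the feature layout, labels, and data are assumptions for illustration only.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: [grade_percent, speed_mph, trailing_tons]; label: discrete behavior.
X_train = [
    [1.5, 22.0, 9500.0],
    [1.4, 25.0, 9500.0],
    [-1.0, 40.0, 9500.0],
    [-0.9, 42.0, 9500.0],
    [0.1, 30.0, 6000.0],
    [0.0, 31.0, 6000.0],
]
y_train = ["notch_6", "notch_6", "dynamic_brake_3", "dynamic_brake_3",
           "notch_3", "notch_3"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)            # learn the mapping of conditions to behaviors

second_input_condition = [[1.3, 23.0, 9400.0]]
print(clf.predict(second_input_condition))   # e.g., ['notch_6']
```

Any of the other classifiers mentioned above (AdaBoost, naïve Bayes, an SVM, or a Markov decision process policy) could be substituted behind the same fit/predict interface without changing the surrounding training pipeline.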
It will be appreciated that in various embodiments, the learning system can include various other machine learning engines and algorithms, as well as combinations of machine learning engines and algorithms, that can be executed to determine a relationship between the plurality of first input conditions and the plurality of first train behaviors and thus train the learning system. In some implementations of this disclosure, train configuration and operational data may be provided to the machine learning engine over a 5G cellular radio frequency telecommunications network interconnecting multiple nodes of a distributed computer control system. But alternative embodiments of the present disclosure may be implemented over a variety of data communication network environments using software, hardware, or a combination of hardware and software to provide the distributed processing functions. INDUSTRIAL APPLICABILITY The machine learning engine and virtual system modeling engine of the present disclosure may be applicable to any grouping of vehicles such as locomotives or systems of other powered machines where remote access to particular functions of the machines may be desirable. System processing associated with such groupings of vehicles or other machines may be highly distributed as a result of recent advances and cost improvements in sensing technology and communication of large amounts of operational and configuration data acquired from sensors associated with the vehicles or other machines. Communication networks such as 5G mobile networks allow for increased bandwidths, increased throughput, and faster data speeds than many existing telecommunication technologies, thereby enabling the interconnection of large numbers of devices on mobile platforms such as vehicles, and the transmission of data from those interconnected devices at much faster speeds and with much more accuracy than currently available. These improvements in data transmission capabilities allow for the remote distribution of the various computer control systems and subsystems that were traditionally restricted to being physically located on the controlled devices, such as locomotives or other vehicles. Distributed, remote access to the computerized systems associated with the vehicles in a train, such as control systems and computer systems monitoring the various functions performed by the control systems, enhances operational aspects such as automatic train operation (ATO) when human operators are not present or available at the locomotives, monitoring and maintenance of train equipment, and collection of data provided by various sensors and other devices during operation of the locomotives, which can be used to optimize performance, efficiency, safety, and life expectancy of the equipment. The increased amount of communication of data over wireless networks may also increase the need for systems and methods to predict or monitor for any transient latency issues in the exchange of data between various remotely distributed computer systems, and maintain synchronization of the distributed systems. Implementation of the above-discussed machine learning and pattern recognition techniques according to various embodiments of this disclosure enables the prediction, early identification, and mitigation of any latency issues during the exchange of data between the various computerized systems and subsystems. 
Associative memory is built through “experiential” learning in which each newly observed state is accumulated in the associative memory as a basis for interpreting future events. The machine learning algorithms performed by analytics engine318assist in uncovering the patterns and sequencing of train control procedures under a large variety of operating conditions to help pinpoint the location and cause of any actual or impending failures of physical systems or computer control systems. As discussed above, train control systems that include a machine learning engine may also be configured to encode real human engineer behavior into a train control strategy engine that enables less experienced train engineers, or semi-autonomously or fully autonomously operated trains to perform optimized train handling across different terrains, with different trains, and under different operating conditions. This approach also allows for easy scalability, extensibility, or customization of train control procedures for different types of trains, different sizes of trains, different, and possibly degraded communication environments, different loads being carried by the trains, different weather conditions, different emissions and safety standards depending on geographical location, and different overall train operating goals. During normal operation, a human operator may be located on-board the lead locomotive208and within the cab of the locomotive. The human operator may be able to control when an engine or other subsystem of the train is started or shut down, which traction motors are used to propel the locomotive, what switches, handles, and other input devices are reconfigured, and when and what circuit breakers are reset or tripped. The human operator may also be required to monitor multiple gauges, indicators, sensors, and alerts while making determinations on what controls should be initiated. However, there may be times when the operator is not available to perform these functions, when the operator is not on-board the locomotive208, and/or when the operator is not sufficiently trained or alert to perform these functions. In addition, the distributed control systems according to this disclosure facilitate remote access to and availability of the locomotives in a train for authorized third parties, including providing redundancy and reliability of monitoring and control of the locomotives and subsystems on-board the locomotives. The systems and methods according to various embodiments of this disclosure also enable coordination and synchronization of operations of a lead locomotive and one or more trailing locomotives through an energy management machine learning model according to one or more shared parameters, such as any potential or actual divergence between operating models for the lead and trailing locomotives, synchronization keys, and unified time constants. A method of controlling locomotives in lead and trailing consists of a train in accordance with various aspects of this disclosure may include, for example, receiving an automatic or manually generated configuration failure signal at the off-board remote controller interface204. The configuration failure signal may be indicative of a situation at one or more of the locomotives in the train requiring configuration or reconfiguration of various operational control devices on-board the one or more locomotives. 
Dispatch personnel may then initiate the transmission of a configuration command signal from a remote client328, to the analytics engine318of the analytics server316, to the remote controller interface204, and to the one or more locomotives requiring reconfiguration. In this way, all of the locomotives in the lead and trailing consists of the train may be reconfigured in parallel without requiring an operator on-board the train. The configuration commands signals, like other messages communicated from the remote controller interface204, may also be transmitted only to a lead locomotive in a consist, and then communicated over a wired connection such as the network connection118to one or more trailing locomotives in the consist. As discussed above, on-board controls of the locomotives in the train may also include the energy management system232providing one or more of throttle, dynamic braking, or braking requests234to the cab electronics system238. The cab electronics system238may process and integrate these requests along with other outputs from various gauges and sensors, and commands such as the configuration command that may have been received from the off-board remote controller interface204. The cab electronics system238may then communicate commands to the on-board locomotive control system237. In parallel with these on-board communications, the cab electronics system238may communicate commands via a WiFi/cellular modem250back to the off-board remote controller interface204. In various alternative implementations, the analytics server316and off-board remote controller interface204may further process the commands received from the lead locomotive208of the lead consist or from a back office command center in order to modify the commands or otherwise interpret the commands before transmitting commands to the locomotives. Modification of the commands may be based on additional information the remote controller interface has acquired from data acquisition hub312and one or more sensors located on the locomotives, or other stored data. The commands transmitted from the remote controller interface204by dispatch personnel may be received from the remote controller interface in parallel at each of the locomotives of multiple trailing consists. In addition to throttle, dynamic braking, and braking commands, the remote controller interface204may also communicate other commands to the cab electronics systems of the on-board controllers on one or more locomotives in multiple consists. These commands may include switching a component such as a circuit breaker on-board a locomotive from a first state, in which the circuit breaker has not tripped, to a second state, in which the circuit breaker has tripped. The circuit breaker may be tripped in response to detection that an operating parameter of at least one component or subsystem of the locomotive has deviated from a predetermined range. When such a deviation occurs, a maintenance signal may be transmitted from the locomotive to the off-board remote controller interface204. The maintenance signal may be indicative of a subsystem having deviated from the predetermined range as indicated by a circuit breaker having switched from a first state to a second state. The method may further include selectively receiving a command signal from the remote controller interface204at a control device on-board the locomotive, with the command signal causing the control device to autonomously switch the component from the second state back to the first state. 
In the case of a tripped circuit breaker, the command may result in resetting the circuit breaker. The method of remotely controlling the locomotives in various consists of a train may also include configuring one or more programmable logic controllers (PLC) of microprocessor-based locomotive control systems237on-board one or more locomotives to selectively set predetermined ranges for operating parameters associated with various components or subsystems. In one exemplary implementation, a locomotive control system237may determine that a circuit of a particular subsystem of the associated locomotive is operating properly when the current flowing through the circuit falls within a particular range. A circuit breaker may be associated with the circuit and configured to trip when the current flowing through the circuit deviates from the determined range. In another exemplary implementation, the locomotive control system may determine that a particular flow rate of exhaust gas recirculation (EGR), or flow rate of a reductant used in exhaust gas aftertreatment, is required in order to meet particular fuel economy and/or emission levels. A valve and/or pump regulating the flow rate of exhaust gas recirculation and/or reductant may be controlled by the locomotive control system when a level of a particular pollutant deviates from a predetermined range. The predetermined ranges for various operating parameters may vary from one locomotive to another based on specific characteristics associated with each locomotive, including age, model, location, weather conditions, type of propulsion system, fuel efficiency, type of fuel, and the like. A method of controlling locomotives in lead and trailing consists of a train in accordance with various aspects of exemplary embodiments of this disclosure may include transmitting an operating control command from a lead locomotive in a lead consist of a train off-board to a remote controller interface. The remote controller interface may then relay that operating control command to one or more lead locomotives of one or more trailing consists of the train. In this way, the one or more trailing consists of the train may all respond reliably and in parallel with the same control commands that are being implemented on-board the lead locomotive of the lead consist. As discussed above, on-board controls of the lead locomotive of the lead consist in the train may include the energy management system or human operator232providing one or more of throttle, dynamic braking, or braking requests234to the cab electronics system238. The cab electronics system238may process and integrate these requests along with other outputs from various gauges and sensors, and commands that may have been received from the off-board remote controller interface204. The commands received from the off-board remote controller interface204may include commands generated manually by a user with the proper permission selecting a particular ride-through control level, or automatically based on a particular geo-fence that a locomotive is entering. The cab electronics system238may then communicate commands to the on-board locomotive control system237. In parallel with these on-board communications, the cab electronics system238may communicate the same commands via a WiFi/cellular modem250, or via a locomotive interface gateway335and WiFi/cellular modem250to the off-board remote controller interface204. 
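A simplified sketch of the breaker behavior described in the preceding paragraphs (trip when a monitored parameter deviates from its predetermined range, report a maintenance signal, and reset on a command from the off-board remote controller interface); the class, parameter names, and ranges are assumptions.

```python
from typing import Optional

class MonitoredCircuit:
    def __init__(self, name: str, low: float, high: float):
        self.name = name
        self.low, self.high = low, high      # predetermined range for this locomotive
        self.tripped = False                 # first state: breaker not tripped

    def sample(self, value: float) -> Optional[str]:
        """Trip and return a maintenance signal if the parameter leaves its range."""
        if not self.tripped and not (self.low <= value <= self.high):
            self.tripped = True              # second state: breaker tripped
            return f"MAINTENANCE:{self.name}:value={value}"
        return None

    def remote_reset_command(self) -> None:
        """Command received from the off-board remote controller interface."""
        self.tripped = False                 # switch back to the first state

egr_circuit = MonitoredCircuit("egr_valve_current", low=2.0, high=8.0)
print(egr_circuit.sample(5.1))    # None: value is within the predetermined range
print(egr_circuit.sample(11.4))   # maintenance signal: the breaker trips
egr_circuit.remote_reset_command()
print(egr_circuit.tripped)        # False: breaker reset by the remote command
```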
In various alternative implementations, the off-board remote controller interface204may further process the commands received from the lead locomotive208of the lead consist in order to modify the commands before transmitting the commands to lead locomotives of trailing consists. Modification of the commands may be based on additional information the remote controller interface has acquired from the lead locomotives of the trailing consists, trip plans, information from maps or other stored data, and the results of machine learning, virtual system modeling, synchronization, and calibration performed by the analytics server316. The commands may be received from the remote controller interface in parallel at each of the lead locomotives248of multiple trailing consists. The method of remotely controlling the locomotives in various consists of a train may also include configuring one or more programmable logic controllers (PLC) of microprocessor-based locomotive control systems237on-board one or more lead locomotives to selectively set predetermined ranges for operating parameters associated with various components or subsystems. As discussed above, the predetermined ranges for operating parameters may be selectively set based at least in part on a manually or automatically selected ride-through control level and a geo-fence associated with the location of the locomotive. In one exemplary implementation, a locomotive control system237may determine that a circuit of a particular subsystem of the associated locomotive is operating properly when the current flowing through the circuit falls within a particular range. A circuit breaker may be associated with the circuit and configured to trip when the current flowing through the circuit deviates from the determined range. In another exemplary implementation, the locomotive control system may determine that a particular flow rate of exhaust gas recirculation (EGR), or flow rate of a reductant used in exhaust gas aftertreatment, is required in order to meet particular fuel economy and/or emission levels. A valve and/or pump regulating the flow rate of exhaust gas recirculation and/or reductant may be controlled by the locomotive control system when a level of a particular pollutant deviates from a predetermined range. The predetermined ranges for various operating parameters may vary from one locomotive to another based on specific characteristics associated with each locomotive, including age, model, location, weather conditions, type of propulsion system, fuel efficiency, type of fuel, and the like. The method of controlling locomotives in a train in accordance with various implementations of this disclosure may still further include the cab electronics system238on-board a locomotive receiving and processing data outputs from one or more of gauges, indicators, sensors, and controls on-board the locomotive. The cab electronics system238may also receive and process, e.g., throttle, dynamic braking, and pneumatic braking requests from the energy management system and/or human operator232on-board the locomotive, and command signals from the off-board remote controller interface204. The command signals received from off-board the locomotive, or generated on-board the locomotive may be determined at least in part by a selected ride-through control level and the particular geo-fence associated with the current location of the train. 
The cab electronics system238may then communicate appropriate commands to the locomotive control system237and/or electronic air brake system236based on the requests, data outputs and command signals. The locomotive control system237may perform various control operations such as resetting circuit breakers, adjusting throttle settings, activating dynamic braking, and activating pneumatic braking in accordance with the commands received from the cab electronics system238. It will be apparent to those skilled in the art that various modifications and variations can be made to the systems and methods of the present disclosure without departing from the scope of the disclosure. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the system disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims and their equivalents. | 110,611 |
11858544 | DETAILED DESCRIPTION In order to describe the technical content and structural features of the disclosure in detail, the following illustration is provided in conjunction with the embodiments and the accompanying drawings. Referring toFIG.1, a backrest angle adjusting mechanism100and an infant carrier200having the backrest angle adjusting mechanism100are provided in this disclosure. The infant carrier200includes, but is not limited to, a stroller, a carrycot, a bed, a baby basket, a safety seat and the like. Specifically, in the preferred embodiments of the present application, the infant carrier200of the disclosure may be a stroller comprising a frame201, a seat body202connected to the frame201, a backrest203pivoted to the seat body202, and a backrest angle adjusting mechanism100connected between the backrest203and the seat body202or between the backrest203and the frame201. Moreover, a front wheel204and rear wheels205are disposed on front and rear sides of the frame201respectively. The backrest203can be rotated by the backrest angle adjusting mechanism100with respect to the seat body202, so as to adjust an inclination angle of the backrest203to meet different usage requirements. According to the preferred embodiments of the disclosure, the backrest angle adjusting mechanism100may cause the backrest203to rotate in a front-and-rear direction on the seat body202, such that the backrest203may be in a lying state relatively flush with the seat body202or in a vertical upright state with respect to the seat body202, thereby facilitating a baby or infant lying down or sitting in the infant carrier200.FIG.1shows the backrest203in the lying state. Of course, the backrest203may also be adjusted to an inclined state between the lying state and the vertical upright state. Referring toFIGS.1to3, the backrest angle adjusting mechanism100of the disclosure comprises a fixing member10, an adjustment member20, and a connecting member30. One end of the connecting member30is connected to the backrest203, the other end of the connecting member30is connected to the fixing member10, and the connecting member30has a certain thickness. One end of the adjustment member20is connected to the seat body202or the frame201to form a connecting part21, and the other end of the adjustment member20is passed around the fixing member10to form an operating part22. The adjustment member20may be driven to slide on the fixing member10by operating the operating part22. The backrest203may be brought, by the sliding of the adjustment member20, to rotate with respect to the seat body202, so as to adjust the inclination angle of the backrest203. Specifically, the connecting member30is elastic so that it provides cushioning and shock absorption, which can reduce impact forces during sitting or during angle adjustment of the backrest203and effectively improve the comfort of use. Preferably, the adjustment member20may be a rope, belt, or cable, which is structurally simple, easy to operate, and inexpensive, thereby effectively reducing the production cost. Referring to bothFIGS.2and3, the adjustment member20comprises a first adjusting section20apositioned between the connecting part21and the fixing member10and a second adjusting section20bpositioned between the fixing member10and the operating part22. 
Since the connecting part21of the adjustment member20is fixed to the seat body202or frame201, and the overall length of the adjustment member20remains constant, when the adjustment member20slides on the fixing member10, the length of the first adjusting section20aand the length of the second adjusting section20bchange in inverse proportion, and the inclination angle of the backrest203may be adjusted by changing the length ratio between the two adjusting sections. In detail, if the first adjusting section20ais changed from short to long, the backrest203will be adjusted from the vertical upright state toward the lying state, and if the first adjusting section20ais changed from long to short, then the backrest203will be adjusted from the lying state toward the vertical upright state. Furthermore, since the connecting member30has a certain thickness, the spacing between the fixing member10and the backrest203can be effectively increased, and the spacing between the adjustment member20and the backrest203can be increased as well. When the adjustment member20is tightened, the backrest203may be adjusted to the greatest extent to a more upright position with respect to the seat body202. In addition, the arrangement that increases the spacing between the fixing member10and the backrest203may also make the adjustment operation of the adjustment member20smoother. Referring to bothFIGS.2and3, in order to place the connecting member30between the backrest203and the fixing member10, in the preferred embodiments of the present application the connecting member30may be fixed to the backrest203, and the fixing member10may be detachably installed on the connecting member30. Of course, the connecting member30may instead be arranged to abut between the backrest203and the fixing member10, and the fixing member10is then detachably fixed to the backrest203after passing through the connecting member30. By either of the ways mentioned above, the fixing member10and the backrest203may be simply and easily assembled with a certain spacing between them. Preferably, in order to simplify the structure, the connecting member30and the fixing member10may also form an integral structure. Specifically, the connecting member30and the fixing member10may first be fixedly connected into one body, and then the connecting member30may be detachably installed on the backrest203. Further, the connecting member30and the fixing member10may themselves be a single piece, and the face of the fixing member10facing the backrest203may be extended toward the backrest203so as to form the connecting member30with a certain thickness. In this case, the connecting member30is a part of the fixing member10, and it is sufficient to fix the fixing member10on the backrest203as a whole. Referring toFIG.2, the connecting member30is disposed on the relatively upper side of the back of the backrest203. The fixing member10and the connecting member30may be arranged in a one-to-one correspondence, or one fixing member10may be fixed to the backrest203by at least two connecting members30, both of which can increase the distance between the fixing member10and the backrest203. On this basis, the adjustment member20and the fixing member10may also be arranged in a one-to-one correspondence, or two adjustment members20may be symmetrically arranged on one fixing member10. Moreover, the connecting member30has a block-shaped or column-shaped profile. 
As shown inFIG.4, the connecting member30is in the shape of a plate as a whole, such that the area of the connecting surface between the fixing member10and the connecting member30is relatively large and the connecting member30and the fixing member10are connected more firmly to each other, thereby strengthening the connection between the fixing member10and the backrest203. As shown inFIG.5, the connecting member30is cylindrical, and the fixing member10may be fixed to the backrest203by a plurality of connecting members30arranged separately; by such multi-point fixing, on one hand the connection between the fixing member10and the backrest203may be strengthened, and on the other hand the conformity between the fixing member10and backrests203having different mounting faces may be effectively improved. Specifically, in the preferred embodiments of the present application, when both the number of the fixing member10and the number of the connecting member30are one, the connecting member30may be fixed at a center of an upper side of the backrest203, and the fixing member10is fixed to the connecting member30. The number of the adjustment member20is one, and its head and tail ends are respectively connected to both sides of the seat body202or the frame201to form two connecting parts21. A central portion of the adjustment member20is passed around the fixing member10to form an annular operating part22. Then, by pulling the operating part22with one hand, the adjustment of the backrest203can be performed; the structure is simple and the operation is convenient. Of course, if both the number of the fixing member10and the number of the connecting member30are one, the number of the adjustment members20may instead be set to two, and the two adjustment members20may be symmetrically wound on the fixing member10. Then, the connecting parts21of the two adjustment members20are symmetrically connected to both sides of the seat body202or the frame201, and the operating parts22of the two adjustment members20are symmetrically passed around the fixing member10, so by pulling the two operating parts22, the adjustment of the backrest203may also be performed. Referring to bothFIGS.2and3, in the preferred embodiments of the present application, in order to ensure the stability and smoothness of the adjustment of the backrest203, the number of the connecting members30, the number of the fixing members10and the number of the adjustment members20are correspondingly set to two. The two connecting members30are arranged symmetrically at left and right ends of the relatively upper side of the backrest203. The two fixing members10are fixed to the two connecting members30in a one-to-one correspondence, with each of the fixing members10being wound by one adjustment member20; the connecting parts21of the two adjustment members20are symmetrically connected to both sides of the seat body202or the frame201, and the operating parts22of the two adjustment members20are symmetrically passed around the fixing members10, such that force applied on the backrest203may be distributed more evenly when the backrest203is adjusted, and the adjustment of the backrest203is more stable and smooth. It should be noted that, if the number of the fixing members10is two, the number of the adjustment members20may also be one. 
After a head end of the adjustment member20is connected to one side of the seat body202or one side of the frame201to form one connecting part21, a tail end of the adjustment member20passes around the two fixing members10in turn and is then connected to the opposite side of the seat body202or frame201to form the other connecting part21, thereby forming an operating part22between the two fixing members10. Then, by pulling the operating part22with one hand, the adjustment operation of the backrest203may be realized, so the adjustment of the backrest203is stable and convenient. In addition, the number of the fixing members10may be more than one, and a plurality of the fixing members10are arranged on the backrest203in parallel and spaced apart, and a plurality of the adjustment members20are wound on the plurality of fixing members10in a one-to-one correspondence. Moreover, the plurality of the adjustment members20are also respectively connected to the seat body202or the frame201. So, by arranging multiple pulling points, force applied on the entire backrest203may be distributed more evenly, such that the adjustment is more stable and smooth, which further improves the use comfort and safety of the infant carrier200. Referring toFIG.2, the backrest angle adjusting mechanism100of the disclosure further comprises a locking member40. The adjustment member20passes around the fixing member10and is then inserted into and connected to the locking member40. The locking member40is used to lock the first adjusting section20aand the second adjusting section20bin position after a length adjustment. It should be noted that this embodiment allows two or more adjustment members20to pass through one locking member40, thereby simplifying the structure, realizing synchronous pulling operations, and making adjustments easier. Of course, in other embodiments, the locking member40and the adjustment member20may also be arranged in a one-to-one correspondence. It should be noted that the locking member40in this embodiment may be implemented by a conventional structure capable of locking and unlocking the adjustment member20, and will not be described in detail here. Of course, if the locking member40is not provided, the adjustment of the backrest203may also be performed by providing clamp-fitting structures on the adjustment member20and the fixing member10that fit with each other by clamping. In addition, locking may be further simplified by directly pulling the adjustment member20and then knotting it in position. Referring toFIGS.3to5, in order to facilitate inserting and sliding of the adjustment member20, a guide groove10ais disposed on the fixing member10. The adjustment member20is inserted in the guide groove10aand may slide in the guide groove10a. The guide groove10amay be arranged in a direction parallel or perpendicular to a longitudinal direction of the backrest203. Preferably, the guide groove10ais an arc-shaped groove. Specifically, the fixing member10has a positioning part11and a guiding part12which are connected to each other. The positioning part11is used to be fixedly connected to the backrest203or the connecting member30, specifically, by a threaded connection. 
The guide groove10ais disposed on the guiding part12, and the guiding part12and the positioning part11are arranged parallel with or perpendicular to each other, such that the guide groove10ais arranged in a direction parallel with or perpendicular to the longitudinal direction of the backrest203. Preferably, the positioning part11and the guiding part12form an integral structure, so that, in addition to simplifying the structure, the assembly process is optimized and the manufacturing cost is further reduced. Referring toFIGS.3to5, the fixing member10further comprises an anti-dropping member13. The anti-dropping member13is disposed in the guide groove10ain order to prevent the adjustment member20slidably inserted in the guide groove10afrom being separated from the guide groove10a. Specifically, the anti-dropping member13is located at a relatively central position of the guide groove10aand protrudes from an opening of the guide groove10a. A through hole13acommunicating with the guide groove10ais disposed in the anti-dropping member13. The adjustment member20may then enter the guide groove10afrom one side of the guide groove10a, pass through the through hole13a, and pass out of the other side of the guide groove10a, so as to effectively prevent the adjustment member20from leaving the guide groove10aduring the pulling process. Preferably, the anti-dropping member13may be formed by protruding and buckling both groove walls of the guide groove10ain a direction away from the opening, and the arrangement in which the anti-dropping member13and the guide groove10aare integrated may further simplify the structure. Referring toFIGS.3to5, the fixing member10further comprises a limiting member14. The limiting member14is disposed in the guide groove10afor limiting a sliding direction of the adjustment member20in the guide groove10a. Moreover, the limiting member14may be disposed at an outlet end of the guide groove10a, so as to limit the angle of the adjustment member20when it passes out and thereby facilitate adjustment. Of course, the limiting member14may also be disposed at an inlet end of the guide groove10a, so as to facilitate insertion of the adjustment member20. Specifically, the limiting member14is flush with the direction of the opening of the guide groove10a, and the limiting member14and a bottom wall of the guide groove10aare arranged in parallel and spaced apart, so as to limit sliding of the adjustment member20, which is inserted between the bottom wall and the limiting member14, thereby keeping the adjustment member20always sliding in the direction of the guide groove10a. When the operating part22of the adjustment member20is operated, the adjustment member20slides on the fixing member10, thereby driving the backrest203to rotate with respect to the seat body202. When the backrest203rotates and brings the fixing member10to rotate so that its limiting member14and the connecting part21of the adjustment member20are relatively flush, since there is a certain distance between the fixing member10and the backrest203at this time, the backrest203may be adjusted to a position more upright with respect to the seat body202. It should be noted that, in the prior art, the backrest generally can only be adjusted to a position where it is flush with the pivot point of the seat body202, and often the backrest and the seat body are not completely perpendicular with respect to each other. 
However, in the present application, since the backrest203is located on a relative front side of the fixing member10, and there is a certain distance between the two parts, when the limiting member14of the fixing member10rotates to be relatively flush with the connecting part21, the backrest203is actually inclined more forward, and the backrest203of the present application is more upright with respect to the seat body202. Compared with the related art, the backrest angle adjusting mechanism100of the disclosure comprises a connecting member30, a fixing member10and an adjustment member20. The fixing member10is fixed to a backrest203via the connecting member30. One end of the adjustment member20is connected to a seat body202or a frame201to form a connecting part21, and the other end of the adjustment member20is passed around the fixing member10to form an operating part22. When the adjustment member20is operated to slide on the fixing member10, sliding of the adjustment member20may bring the backrest203to rotate with respect to the seat body202, thereby adjusting an inclination angle of the backrest203to meet different usage requirements. Furthermore, since the connecting member30has a certain thickness, a spacing between the fixing member10and the backrest203may be effectively increased, and a spacing between the adjustment member20and the backrest203is also increased at the same time. When the adjusting member20is tightened, the backrest203may be inclined more forward, and thus more upright with respect to the seat body202, which also makes the adjustment operation of the adjustment member20smoother. The backrest angle adjusting mechanism100of the disclosure is simple in structure and convenient in operation, and can effectively reduce manufacturing costs, so that the backrest203may be smoothly switched between a reliable vertical upright state and a supportable lying state, thereby effectively ensuring the comfort and safety of the baby or infant in an infant carrier200having the backrest angle adjusting mechanism100. What is disclosed above are only preferred embodiments of the disclosure, and the scope of the disclosure certainly cannot be limited by this. Therefore, any equivalent changes made according to the scope of the disclosure still belong to the disclosure. | 19,090 |
11858545 | DETAILED DESCRIPTION The embodiments disclosed in the above drawings and the following detailed description are not intended to be exhaustive or to limit the disclosure to these embodiments. Rather, there are several variations and modifications which may be made without departing from the scope of the present disclosure. FIG.1shows part of a support structure10of an agricultural vehicle12, not illustrated in detail here, with a holding frame14for a front power lift. The support structure10has, inter alia, a front structural section16and a rear structural section20that is opposite in the longitudinal direction18of the vehicle. The front structural section16serves for receiving a front axle assembly22. A support part24, as part of the support structure10, is arranged between the front structural section16and the rear structural section20along the longitudinal direction18of the vehicle. A support wall26of the support part24is self-contained transversely with respect to the longitudinal direction18of the vehicle, i.e. along a circumferential direction28. The torsional rigidity and stability of the support structure10can thereby be supported. It is illustrated with reference toFIG.2that—independently of the actual detailed configuration of the support part24—the profile of the support wall26, that is self-contained in the circumferential direction28, forms a cross section A. The cross section A customarily lies in a cross-sectional plane arranged approximately perpendicularly to the longitudinal direction18of the vehicle. The cross-sectional plane is therefore spanned by a conventionally approximately horizontally running transverse direction30and an approximately vertically running vertical direction32of the vehicle12. The support wall26can have a self-contained wall profile along the entire extent in the longitudinal direction18of the vehicle or else in sections along the longitudinal direction18. In a departure from the configuration of the support part24that is merely indicated schematically inFIG.2, it is illustrated there that the cross section A is dimensioned differently along the longitudinal direction18of the vehicle. For example, an initially square cross section A1can merge into an oval cross section A2, then into a circular cross section A3and subsequently into an approximately horseshoe-shaped cross section A4. The cross section A under consideration can relate to an outer wall34or to an inner wall36of the support wall26. Furthermore, it can be seen inFIG.2that the support part26has a flange-like front fastening intersection38in order to connect the support part26to the front structural section16. The support part26analogously also has a flange-like rear fastening intersection40in order to connect the support part26to the rear structural section20. As illustrated inFIG.2, the two fastening intersections38,40can basically be configured differently. The connections between the fastening intersections38,40and the structural sections16,20are preferably rigid. For this purpose, connecting means which have yet to be described are provided with reference toFIG.3atoFIG.3e. The connecting means, depending on the configuration, can produce a releasable or non-releasable connection between the fastening intersections38,40and the structural sections16,20. The connecting means pass, for example, through holes42on the fastening intersections38,40. Such connecting means are illustrated inFIG.3atoFIG.3e. 
For example, they are clamping elements44in the form of screw bolts which pass through the through holes42and bring about bracing between the front structural section16and the support part24. The clamping force direction acts here, for example, substantially horizontally in the longitudinal direction18of the vehicle (FIG.3a,FIG.3c,FIG.3d) or at a horizontal angle (for example approximately 30° to 45°) with respect to the longitudinal direction18of the vehicle (FIG.3b). The aforementioned connection can also be supported by connecting bolts46which pass through the front structural section16and the fastening intersection38of the support part24in the transverse direction30(FIG.3d). Alternatively, the connection between the front structural section16and the support part24can also be realized by means of a plurality of connecting bolts46without the clamping elements44aligned in the longitudinal direction18of the vehicle being used (FIG.3e). Nevertheless, the connecting bolts46if configured correspondingly (for example as screw bolts) can also bring about a clamping force which is then aligned in the transverse direction30. The connecting techniques disclosed here between the front structural section16and the support part24can in principle also be used at the fastening intersection40for connecting the rear structural section20to the support part24. For adaptation of the support part24to differently dimensioned support structures10, spaces48can be provided (FIG.3a) which can be arranged between the support part24and the associated structural section16,20. Two extension arms50which have fixing holes52(for example internal screw threads) which are aligned in the vertical direction32can be seen inFIG.4. The extension arms50are connected preferably integrally to the support wall26. Further extension arms50and optionally also in a different configuration can be arranged on the support wall26, as is illustrated, for example, inFIG.1. The extension arms50, together with suitable fastening means, serve to mount a vehicle component54or at least a part of the vehicle component54on the support part24. The vehicle component54(for example radiator unit or motor or engine unit) can be mounted as a complete block on the support part24. Alternatively, the vehicle component54can consist of a plurality of parts, of which first of all one part is mounted on the support part24, as is illustrated inFIG.1by way of example with reference to an oil sump56as part of an internal combustion engine unit. In a further embodiment, one part of the vehicle component54is fixedly connected from the outset to the support part24, in particular is produced integrally therewith, as is illustrated inFIG.5with reference to the component wall58. The latter bounds a cavity which is effective as an oil sump for a motor/engine unit60as the vehicle component54(FIG.6). For the correct securing or installation of the motor/engine unit60, the part62, which is assigned to the cylinders thereof, can be secured on an installation intersection64—optionally with the interposition of damper elements and/or sealing elements—of the component wall58. For this purpose, the component wall58has a multiplicity of securing holes66for, for example, a screw connection to the motor/engine part62. The support wall26of the support part24bounds a receiving channel68along the circumferential direction28for receiving a drive shaft (not illustrated here) of a drive train of the vehicle12. 
FIG.6, in comparison toFIG.1, illustrates a further embodiment of the front structural section16. In this embodiment, the structural section16has a grid-like structure with a plurality of structural struts70. In the fitted state of the support structure10, the structural struts70are preferably substantially arranged in one of the directions18,30,32. They bound a receiving space72, which is accessible in the transverse direction30of the vehicle, for receiving the front axle assembly22. The terminology used herein is for describing particular embodiments and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the any use of the terms “has,” “includes,” “comprises,” or the like, in this specification, identifies the presence of stated features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. One or more of the steps or operations in any of the methods, processes, or systems discussed herein may be omitted, repeated, re-ordered, combined, or separated and are within the scope of the present disclosure. As used herein, “e.g.” is utilized to non-exhaustively list examples and carries the same meaning as alternative illustrative phrases such as “including,” “including, but not limited to,” and “including without limitation.” Unless otherwise limited or modified, lists with elements that are separated by conjunctive terms (e.g., “and”) and that are also preceded by the phrase “one or more of” or “at least one of” indicate configurations or arrangements that potentially include individual elements of the list, or any combination thereof. For example, “at least one of A, B, and C” or “one or more of A, B, and C” indicates the possibilities of only A, only B, only C, or any combination of two or more of A, B, and C (e.g., A and B; B and C; A and C; or A, B, and C). While the above describes example embodiments of the present disclosure, these descriptions should not be viewed in a restrictive or limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the appended claims. | 9,365 |
11858546 | DETAILED DESCRIPTION As is traditional in the corresponding field, some exemplary embodiments may be illustrated in the drawings in terms of functional blocks, units, and/or modules. Those of ordinary skill in the art will appreciate that these block, units, and/or modules are physically implemented by electronic (or optical) circuits such as logic circuits, discrete components, processors, hard-wired circuits, memory elements, wiring connections, and the like. When the blocks, units, and/or modules are implemented by processors or similar hardware, they may be programmed and controlled using software (e.g., code) to perform various functions discussed herein. Alternatively, each block, unit, and/or module may be implemented by dedicated hardware or as a combination of dedicated hardware to perform some functions and a processor (e.g., one or more programmed processors and associated circuitry) to perform other functions. Each block, unit, and/or module of some exemplary embodiments may be physically separated into two or more interacting and discrete blocks, units, and/or modules without departing from the scope of the inventive concept. Further, blocks, units, and/or module of some exemplary embodiments may be physically combined into more complex blocks, units, and/or modules without departing from the scope of the inventive concept. Hereinafter, an apparatus and a method for adjusting a steering wheel will be described below with reference to the accompanying drawings through various exemplary embodiments. For clarity and convenience in description, thicknesses of lines, sizes of constituent elements and the like may be illustrated in an exaggerated manner in the drawings. In addition, terms described below are defined by considering functions according to the present disclosure and may vary according to the intention of a user or a manager or according to the common practices in the art. Therefore, definitions of these terms should be defined in light of details disclosed throughout the present specification. FIG.1is an exemplary diagram illustrating a schematic configuration of an apparatus for adjusting a steering wheel according to an embodiment of the present disclosure. As illustrated inFIG.1, the apparatus for adjusting a steering wheel according to the present embodiment includes a camera unit110, a control unit120, and a steering wheel adjustment information output unit130. The camera unit110is installed (disposed or mounted) at the center of the upper end of the cover of a steering column inside the vehicle. The camera unit110includes a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) type digital camera, or an in-cabin camera (including In-Cabin Camera or ICC camera). The control unit120detects a face by processing an image captured by the camera unit110. That is, the control unit120processes the image captured by the camera unit110to detect a shape of the face (that is, eyes, nose, mouth, forehead, chin or the like). For reference, in the present embodiment, since the camera unit110is installed (mounted) at the center of the upper end of the cover of the steering column inside the vehicle, the face is not biased to any one of left and right sides of the screen (including a partial deflection error) during capturing or photographing. 
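The face-detection step described above can be prototyped with an off-the-shelf detector. The following Python sketch is only illustrative and is not part of the disclosure: it assumes OpenCV's bundled Haar-cascade frontal-face detector (the patent does not prescribe a particular detection algorithm), and it flags whether the detected face box reaches the top or bottom edge of the frame, which corresponds to the forehead or chin being cut off in the captured image. The function name and the margin parameter are hypothetical.

import cv2

# Assumption: OpenCV's stock Haar cascade stands in for the face-shape
# detection performed by the control unit; any detector could be substituted.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def check_face_capture(frame, margin=5):
    """Return 'NO_FACE', 'TOP_CUT' (forehead), 'BOTTOM_CUT' (chin), or 'OK'."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "NO_FACE"
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])  # largest face
    if y <= margin:                          # face box touches top of frame
        return "TOP_CUT"
    if y + h >= frame.shape[0] - margin:     # face box touches bottom of frame
        return "BOTTOM_CUT"
    return "OK"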
In addition, the control unit120checks whether predetermined portions (e.g., a forehead and chin) of the face detected in the captured image are in an optimal face shape (that is, for example, eyes, nose, mouth, forehead, and chin forming the face shape are included in the captured image in their entirety without any of the portions being partially or entirely covered or cut off). In addition, as the result of checking whether a user's face is photographed in the optimal face shape, the control unit120outputs information for lifting the steering wheel when a forehead portion of the face is cut off or covered in the captured image. In addition, as the result of checking whether the user's face is photographed in the optimal face shape, the control unit120outputs information for lowering the steering wheel when a chin portion of the face is cut off or covered in the captured image. That is, in the present embodiment, since the camera unit110is fixed to the center of the upper end of the cover of the steering column inside the vehicle, a height of the steering wheel is adjusted to photograph the optimal face shape (that is, eyes, nose, mouth, forehead, and chin forming the face shape are all photographed within the screen without being covered or cut off). Meanwhile, the control unit120outputs a signal (for example: a good signal) indicating that the height of the steering wheel is in an appropriate (or optimal) state when the user's face is photographed in the optimal face shape (that is, eyes, nose, mouth, forehead, and chin forming the face shape are all photographed within the screen without being covered or cut off) as the result of checking whether the user's face is photographed in the optimal face shape. The steering wheel adjustment information output unit130converts the steering wheel adjustment information (i.e., adjustment information), which the control unit120outputs based on the result of checking whether the user's face is photographed in the optimal face shape, into a signal form appropriate for a designated output target apparatus (for example: a steering wheel driving unit140or an information output unit150), i.e., an adjustment signal instructing or causing the steering wheel to be lifted or lowered, and outputs the converted signal. Herein, the output target apparatus (for example: the steering wheel driving unit140and the information output unit150) is an apparatus disposed at or installed in a target vehicle and may include at least one of the steering wheel driving unit140, which may automatically adjust the height of the steering wheel by an electric motor, or the information output unit150, which may output audio, video, and navigation information, depending on the vehicle type. For reference, the information output unit150is basically mounted on a recently released vehicle (including an autonomous vehicle). Accordingly, the steering wheel adjustment information output unit130converts the steering wheel adjustment information into the signal form appropriate for the designated output target apparatus (for example: the steering wheel driving unit140and the information output unit150) depending on which output target apparatus (for example: the steering wheel driving unit140or the information output unit150) is included (mounted) in the target vehicle. FIG.2is an exemplary diagram illustrating a state in which the steering wheel adjustment information output unit outputs the steering wheel adjustment information through the information output unit inFIG.1.
As shown inFIG.2A, information (for example: GOOD, HIGH, and LOW) may be displayed in connection with the face shape photographed by the camera and the height to be adjusted (up-down direction), or as shown inFIG.2B, whether a level of the steering wheel is HIGH, LOW, or appropriate (GOOD) may be represented as a gauge bar and then output. However, this is illustrated to help understanding and is not limited thereto. In this case, the control unit120may communicate with an electronic control unit (ECU) (not illustrated) of a vehicle to check whether the output target apparatus (for example: the steering wheel driving unit140and the information output unit150) is mounted. FIG.3is a flowchart illustrating a method of adjusting a steering wheel according to an embodiment of the present disclosure.FIG.4is a flowchart illustrating more detailed operations for outputting the steering wheel adjustment information shown inFIG.2.FIG.5is an exemplary diagram illustrating a steering wheel adjustment information output according to a face shape inFIG.4. Referring toFIG.3, the control unit120receives an image captured by the camera unit110(S101). In addition, the control unit120processes the received image to detect the face shape (that is, eyes, nose, mouth, forehead, chin) (S102). In addition, the control unit120checks whether a detected face shape is the optimal face shape in which upper and lower portions (that is, forehead and chin) of the face are not cut off (that is, eyes, nose, mouth, forehead, and chin forming the face shape are all photographed within the screen without being covered or cut off) (S103). When the detected face shape is not the optimal face shape in which upper and lower portions (that is, forehead and chin) of the face are not cut off (that is, eyes, nose, mouth, forehead, and chin forming the face shape are all photographed within the screen without being covered or cut off) as the result of the check S103, the designated steering wheel adjustment information is output to the designated output target apparatus (for example: the steering wheel driving unit140and the information output unit150) (S104). A more detailed method for the operation S104of outputting the steering wheel adjustment information to the designated output target apparatus (for example: the steering wheel driving unit140and the information output unit150) will be described with reference toFIGS.4and5. Referring toFIG.4, as the result of checking whether the user's face is photographed in the optimal face shape, when the forehead portion of the face is cut off or covered and thus not fully captured (that is, when an eye position is higher than a designated upper level (for example: level 8)) (Yes of S201), the control unit120outputs the steering wheel adjustment information for lifting the steering wheel (S202) (seeFIG.5A). The processes S201and S202are repeatedly performed until the forehead portion of the face is no longer cut off or covered (that is, until the eye position is lower than or equal to the designated upper level (for example: level 8)) (No of S201).
Once the forehead portion of the face is no longer cut off or covered through the processes S201and S202(that is, when the eye position is lower than or equal to the designated upper level (for example: level 8)) (No of S201), then, as the result of checking whether the user's face is photographed in the optimal face shape, when the chin portion of the face is cut off or covered and thus not fully captured (that is, when the eye position is not between the upper and lower levels) (No of S203), the control unit120outputs steering wheel adjustment information for lowering the steering wheel (S204) (seeFIG.5C). The processes S203and S204are repeatedly performed until the chin portion of the face is no longer cut off or covered (that is, until the eye position is between the designated upper level (for example: level 8) and the designated lower level (for example: level 4)). When the chin portion of the face is no longer cut off or covered after the processes S203and S204(that is, when the eye position is between the designated upper level (for example: level 8) and the designated lower level (for example: level 4)) (Yes of S203), the control unit120outputs a signal (for example: a Good signal) (that is, a face optimum position determination signal) indicating that the height of the steering wheel is in an appropriate (or optimal) state (S205) (seeFIG.5B). Accordingly, when the steering wheel driving unit140is included (mounted) in the target vehicle, the steering wheel adjustment information output unit130directly outputs the steering wheel adjustment information to the steering wheel driving unit140so as to automatically adjust the height of the steering wheel. In addition, when only the information output unit150is included (mounted) in the target vehicle, the steering wheel adjustment information output unit130outputs the steering wheel adjustment information to the information output unit150as illustrated inFIG.2, and thus the user (driver) may adjust the height of the steering wheel in a manual manner. As described above, when the driver's face is monitored through a camera (including an in-cabin camera) installed inside the vehicle, the steering wheel can be adjusted according to an exemplary embodiment so that the entire face is optimally photographed without the forehead or chin portion of the driver's face being cut off. Thus, there is an effect of allowing the driver monitoring system to operate stably. Although exemplary embodiments of the disclosure have been described for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the disclosure as defined in the accompanying claims. Thus, the true technical scope of the disclosure should be defined by the following claims. Furthermore, the implementation described above in the present specification may be performed by, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Although discussed only in the context of a single form of implementation (for example, discussed only as a method), the discussed features may also be implemented in another form (for example, an apparatus or a program). The apparatus may be implemented in appropriate hardware, software, and firmware.
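The threshold logic of operations S201 to S205 and the routing to either the steering wheel driving unit140or the information output unit150can be summarized in a short control-loop sketch. This is only an illustrative reading of the flowchart, not part of the disclosure: the level constants mirror the example levels 8 and 4, and the helper names (get_eye_level, send, has_driving_unit) and the command strings are hypothetical.

UPPER_LEVEL = 8   # example upper level from the description
LOWER_LEVEL = 4   # example lower level from the description

def adjust_steering_wheel(get_eye_level, send, has_driving_unit):
    # get_eye_level(): eye position level derived from the processed camera image
    # send(command, target): hypothetical output to an actuator or a display
    while True:
        eye_level = get_eye_level()
        if eye_level > UPPER_LEVEL:          # forehead cut off (Yes of S201)
            command = "LIFT"                 # S202: lift the steering wheel
        elif eye_level < LOWER_LEVEL:        # chin cut off (No of S203)
            command = "LOWER"                # S204: lower the steering wheel
        else:
            send("GOOD", target="display")   # S205: optimal position signal
            return
        if has_driving_unit:                 # steering wheel driving unit 140 mounted
            send(command, target="actuator") # automatic height adjustment
        else:
            send(command, target="display")  # information output unit 150 only: manual adjustment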
The method may be implemented as, for example, an apparatus, such as a processor generally indicating a processing apparatus including a computer, a microprocessor, an integrated circuit, or a programmable logic apparatus. The processor also includes a communication apparatus, such as a computer, a cellular phone, a portable/personal digital assistant (PDA), and other devices which facilitate information communication between end users. | 13,819 |
11858547 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Certain terminology is used in the following description for convenience only and is not limiting. The words “front,” “rear,” “upper” and “lower” designate directions in the drawings to which reference is made. The words “inwardly” and “outwardly” refer to directions toward and away from the parts referenced in the drawings. “Axially” refers to a direction along the axis of a shaft. A reference to a list of items that are cited as “at least one of a, b, or c” (where a, b, and c represent the items being listed) means any single one of the items a, b, or c, or combinations thereof. The terminology includes the words specifically noted above, derivatives thereof and words of similar import. As shown inFIGS.1-3, an intermediate shaft assembly10for a steering column is generally disclosed herein. The assembly10comprises a first shaft20defining a cavity22and a first bearing raceway24. The first bearing raceway24can be an external or radially outer raceway. In one aspect, the first shaft20is an external tube, and can be configured to be connected to steering column box, as shown inFIG.6. A second shaft30is also provided that is arranged at least partially within the cavity22of the first shaft20. The second shaft30is configured to be connected to a steering wheel assembly15, as shown inFIG.6. One of ordinary skill in the art would understand that the second shaft30can be configured to be connected to any other steering column assembly component. The second shaft30can be configured to be connected to the upper steering column cross piece, which is directly connected to the steering wheel assembly15. A sleeve40is arranged on an axial end32of the second shaft30and defines a second bearing raceway42. The second bearing raceway42can be an internal raceway or radially inner raceway. The sleeve40is heat treated, in one aspect. The sleeve40can be formed from sheet metal, in one aspect. The sleeve40is secured to the axial end32of the second shaft30by an interference fit or direct rotational connection, in one aspect. As used in this context, the term interference fit can refer to a frictional fit or a rotationally locking interference fit. The sleeve40can be formed from steel, in one aspect. The sleeve can be formed from a steel tube whereby the internal raceway is formed by stamping or rolling processes. A bearing assembly50is also provided that includes at least two rows of rolling elements52and a cage54configured to retain the rolling elements52. The rolling elements52are supported between the first bearing raceway24of the first shaft20and the second bearing raceway42of the sleeve40. The bearing assembly generally allows the first and second shafts20,30to be adjusted in an axial direction relative to each other. This configuration can also for oscillations in the axial direction during driving. Due to the groove profiles of the first bearing raceway24and the second bearing raceway42and the cage54, the rolling elements52are circumferentially retained and do not allow for rotational motion between the first and second shafts20,30. In one aspect, there are two sets of the first bearing raceway24and the second bearing raceway42. One of ordinary skill in the art would understand that a single raceway could be provided or multiple raceways could be implemented in the assembly. In one aspect, the assembly10further includes a securing element60arranged on an axial end41of the sleeve40configured to retain the bearing assembly on the sleeve40. 
The securing element60can include a snap ring. One of ordinary skill in the art would understand that other types of securing elements could be used, such as pins, latches, clips, flanges, etc. At an opposite end, a shoulder31formed on the second shaft30can define an axial stop for the bearing assembly50. In one aspect, the sleeve40is rotationally fixed to the axial end32of the second shaft30via a coupling feature, which can include surface profiles defined by the second shaft30and the sleeve40or a separate locking element. As shown in more detail inFIG.3, the interference fit or direct rotational connection between the second shaft30and the sleeve40can be formed via at least one first curved section34a,34bformed on the second shaft30that is configured to mate with at least one second curved section44a,44bformed on the sleeve40. Circumferential curved sections34c,34dand circumferential curved sections44c,44dcan connect the respective first curved sections34a,34band the second curved sections44a,44b. The circumferential curved sections34c,34d,44c,44deach have a continuously curved profile. The first curved section34a,34band the second curved section44a,44beach can include an indented curved profile that generally extends radially inwards. One of ordinary skill in the art would understand based on the present disclosure that various types of circumferentially interlocking, non-rotatable interfaces could be provided between the second shaft30and the sleeve40. As shown inFIG.4, the interference fit or direct rotational connection between the second shaft130and the sleeve140can be formed via at least one first flat section134a,134bformed on the second shaft130that is configured to mate with at least one second flat section144a,144bformed on the sleeve140. Continuously curved circumferential sections134c,134dcan connect flat sections134a,134bof the second shaft130, and a corresponding pair of continuously curved circumferential sections144c,144dcan connect flat sections144a,144bof the sleeve140. As shown inFIG.5, in another aspect, a radially inner surface of the sleeve240can be entirely circular and a radially outer surface of the second shaft230can also entirely circular. In this aspect, a separately formed locking element70can be provided that is configured to secure the sleeve240to the second shaft230. The locking element70can extend between the second shaft230and the sleeve240. In one aspect, the locking element70can be a pin configured to extend between openings in the second shaft230and the sleeve240. The locking element70can include any element, piece, or component that generally can be used to prevent relative movement between the second shaft230and the sleeve240. In one aspect, the locking element is provided in an axial region between the securing element60and the bearing assembly50. One of ordinary skill in the art would understand that the exact location of the locking element70can vary. As shown inFIG.6, the intermediate shaft assembly10′,10″ is shown in two different positions. The intermediate shaft assembly10′ on the left corresponds to a tilted cabin position, while the intermediate shaft assembly10″ on the right corresponds to a normal cabin position. A terminal end of the second shaft is configured to connect to a steering wheel assembly15. One of ordinary skill in the art would understand that the interface or mating feature between the sleeve40,140,240and the second shaft30,130,230can vary and be achieved in a variety of different configurations. 
As disclosed herein, the sleeve40,140,240is provided to define the raceway of the rolling elements52. This configuration provides manufacturing efficiencies in that only the sleeve40,140,240is heat treated, in one aspect, to provide sufficient hardness for the rolling element raceway defined thereon. This configuration avoids requiring a heat treatment for the entire second shaft30,130,230and instead limits the process to the sleeve40,140,240. A method of assembling an intermediate shaft assembly10is also disclosed. The method includes providing a first shaft20defining a cavity22and a first bearing raceway24, and providing a second shaft30. The method includes fixing a sleeve40onto an axial end32of the second shaft30, and the sleeve40defines a second bearing raceway42. The method also includes providing a bearing assembly50including at least two rows of rolling elements52and a cage54, and arranging the bearing assembly50around the second bearing raceway42of the sleeve40. The method includes inserting the second shaft30, the sleeve40, and the bearing assembly50at least partially within the cavity22of the first shaft20. Additional steps for the method may be included, such as installing a securing element60or locking element70. Having thus described the present embodiments in detail, it is to be appreciated and will be apparent to those skilled in the art that many physical changes, only a few of which are exemplified in the detailed description of the disclosure, could be made without altering the inventive concepts and principles embodied therein. It is also to be appreciated that numerous embodiments incorporating only part of the preferred embodiment are possible which do not alter, with respect to those parts, the inventive concepts and principles embodied therein. The present embodiment and optional configurations are therefore to be considered in all respects as exemplary and/or illustrative and not restrictive, the scope of the disclosure being indicated by the appended claims rather than by the foregoing description, and all alternate embodiments and changes to this embodiment which come within the meaning and range of equivalency of said claims are therefore to be embraced therein.

LOG OF REFERENCE NUMERALS
intermediate shaft assembly 10
steering wheel assembly 15
first shaft 20
cavity 22
first bearing raceway 24
second shaft 30, 130, 230
shoulder 31
axial end 32 of the second shaft
openings 233
first curved sections 34a, 34b
first flat section 134a, 134b
circumferential curved sections 34c, 34d, 134c, 134d
sleeve 40, 140, 240
axial end 41 of the sleeve
second bearing raceway 42
openings 243
second curved sections 44a, 44b
second flat section 144a, 144b
circumferential curved sections 44c, 44d, 144c, 144d
bearing assembly 50
rolling elements 52
cage 54
securing element 60
locking element 70
| 9,867 |
11858548 | DETAILED DESCRIPTION With reference toFIG.1, one embodiment of a skid steering system1comprises a variable-speed steering power input device3, an asymmetrical steering differential5, a speed reducer7, a first steering output shaft9and a second steering output shaft11. The asymmetrical steering differential5comprises a first differential shaft6and a second differential shaft8. In this embodiment, the first differential shaft6and the first steering output shaft9are shown as a single shaft, although the two shafts could be separate but connected shafts. The asymmetrical steering differential5is any device that can receive a single power input from the steering power input device3to impart different speed changes on the first differential shaft6and the second differential shaft8in opposite rotational directions, where the second differential shaft8has imparted thereon a greater speed change than the first differential shaft6. The asymmetrical steering differential5is operatively connected to the steering power input device3to receive a single power input from the steering power input device3. Operative connection of the asymmetrical steering differential5to the steering power input device3is accomplished in any suitable manner, for example by connecting a steering input shaft4of the steering power input device3to a housing10of the asymmetrical steering differential5with a steering differential power input chain or belt12. When a steering differential power input chain is used, the steering input shaft4and the housing10may be equipped with sprockets to accept the chain. The steering power input device3may be a variable-speed motor, for example an electric motor or a hydraulic motor. The second differential shaft8is connected to the speed reducer7, the speed reducer7also being connected to the second steering output shaft11. The speed reducer7has the same reducing ratio as the asymmetrical steering differential5so that the second steering output shaft11experiences the same speed change but in the opposite rotational direction as the first steering output shaft9. Thus, while the second steering output shaft11has a lower rotational speed than the second differential shaft8, the second steering output shaft11rotates in the same rotational direction as the second differential shaft8. The speed reducer7may be any suitable device that can transfer rotational power from one shaft to another shaft while resulting in the other shaft having a slower rotational speed. The speed reducer7may comprise a collection of appropriately sized and arranged chains and sprockets, or may comprise a meshed gear arrangement. A planetary reducer comprising a ring gear, a sun gear and one or more planet gears meshing the sun gear with the ring gear is a particularly suitable example of the speed reducer. The first steering output shaft9and the second steering output shaft11are each operatively connected to at least one rotatable ground-engaging element (e.g. wheels, tracks and the like) on respective sides of the vehicle. Operative connection is made by any suitable method, for example direct connection of the first and second steering output shafts9,11to the ground-engaging element, or indirect connection through drive belts or chains. 
InFIG.1, a first steering input chain13connects the first steering output shaft9to a first axle23of an open-differential propulsion transmission21of a propulsion system20, while a second steering input chain15connects the second steering output shaft11to a second axle25of the propulsion transmission21. The first axle23of the propulsion transmission21provides propulsion power input from a propulsion power input device22(e.g. a gasoline engine, a diesel engine, an electric motor, a hydraulic motor or the like) to the ground-engaging elements on a first side of the vehicle through first drive chains or belts27, and a the second axle25of the propulsion transmission21provides propulsion power input from the propulsion power input device22to the ground-engaging elements on a second side of the vehicle through second drive chains or belts29. The steering system1superimposes power on the propulsion system20to steer the vehicle. When the steering power input device3drives the steering input shaft4in a first rotational direction, speed is added to the first axle23and speed is subtracted from the second axle25causing the vehicle to turn in one direction. When the steering power input device3drives the steering input shaft4in a second rotational direction, speed is added to the second axle25and speed is subtracted from the first axle23causing the vehicle to turn in the other direction. InFIG.1, the asymmetrical steering differential5and the speed reducer7are aligned along one transverse axis. The steering power input device3is aligned along a different transverse axis longitudinally separated from the transverse axis of the asymmetrical steering differential5and the speed reducer7. The propulsion transmission21of the propulsion system20is aligned along yet a third transverse axis longitudinally separated from both the other two transverse axes. It is therefore possible to use simple parts in the steering system1and to place the steering system1at any convenient place along the vehicle. FIG.2depicts a schematic diagram of a skid steered vehicle100comprising one embodiment of the steering system1described in connection withFIG.1. The vehicle100has a chassis101, a pair of right-side wheels142including a front right wheel142aand a rear right wheel142brotatably mounted at a right side of the chassis101, and a pair of left-side wheels144including a front left wheel144aand a rear left wheel144brotatably mounted at a left side of the chassis101. The vehicle further comprises a propulsion system120mounted on the chassis101, the propulsion system120comprising an engine drive shaft122connected to a vehicle engine, which drives a right intermediate shaft123and a left intermediate shaft125through an open-differential propulsion transmission121. The right intermediate shaft123is drivingly connected to a front right wheel axle143aby a front right final drive chain127a,and is drivingly connected to a rear right wheel axle143bby a rear right final drive chain127b.The left intermediate shaft125is drivingly connected to a front left wheel axle145aby a front left final drive chain129a,and is drivingly connected to a rear left wheel axle145bby a rear left final drive chain129b.While a four-wheeled vehicle is shown, for a tracked vehicle only one final drive chain on each side would be needed as front and rear track hubs are connected by the track. For vehicles with six or more wheels, more final drive chains may be employed. 
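Because the steering system1superimposes power on the propulsion system, as described above, its effect on the two axles can be sketched with simple arithmetic: the steering input adds a speed delta to one axle and subtracts the same delta from the other. The sketch below is illustrative only; the function name and sign convention are assumptions, and the example figures happen to correspond to the full-speed turn case tabulated later in this description.

def axle_speeds(propulsion_rpm, steering_delta_rpm):
    """Steering superposition: the delta is added to the first axle and
    subtracted from the second, so the vehicle turns toward the slower side."""
    first_axle = propulsion_rpm + steering_delta_rpm
    second_axle = propulsion_rpm - steering_delta_rpm
    return first_axle, second_axle

# Example: propulsion at 485.7 rpm with an 85 rpm steering superposition
print(axle_speeds(485.7, 85.0))   # (570.7, 400.7) -> turn toward the second-axle side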
The engine drive shaft122provides power input to the propulsion transmission121, which distributes the power to the right and left intermediate shafts123,125, respectively, the right and left intermediate shafts123,125providing power input to the final drive chains127a,127b,129a,129b,which provide power input to the wheel axles143a,143b,145a,145b,which in turn provide power to the wheels142a,142b,144a,144bmounted on the wheel axles143a,143b,145a,145b.The intermediate shafts123,125and the wheel axles143a,143b,145a,145bare provided with sprockets on which the final drive chains127a,127b,129a,129bare mounted. Using appropriately sized sprockets and lengths of chains permits setting the desired power input to the wheels142,144. The right and left intermediate shafts123,125are rotated in the same direction and the vehicle100can be driven forward or backward by operation of the propulsion transmission121in a known manner. The steering system comprises a variable-speed electric steering motor103having a drive shaft104operatively connected to an external housing116of a rotatable planetary reducer105by a steering power input chain112on a sprocket fixedly mounted on the drive shaft104of the electric steering motor103and on a sprocket fixedly amounted on a receiving shaft117unitized with the external housing116of the rotatable planetary reducer105. The rotatable planetary reducer105is rotatably mounted on one transverse side of the chassis101, for example the right side as shown inFIG.2, so that the external housing116is able to rotate, i.e. spin, relative to the chassis101about a first transverse axis when powered by the electric steering motor103. The electric steering motor103is mounted on the chassis101on the same side as the rotatable planetary reducer105so that the drive shaft104rotates about a second transverse axis longitudinally separated from the first transverse axis. The propulsion transmission121is mounted on the chassis101so that the right and left intermediate shafts123,125are aligned with and rotate about a third transverse axis longitudinally separated from both the first and second transverse axes. The rotatable planetary reducer105comprises a first ring gear118(e.g. a 90 T ring gear) rigidly affixed to an inner wall of the external housing116so that the first ring gear118rotates with the external housing116as the external housing116rotates. The rotatable planetary reducer105further comprises a first sun gear119(e.g. a 30 T sun gear) and a first plurality of planet gears126(e.g. 3×30 T planet gears), the first plurality of planet gears126intermeshed with and located between the first ring gear118and the first sun gear119. A steering cross-shaft108is fixed to the first sun gear119and a first output shaft109is fixedly mounted to the first plurality of planet gears126by a first carrier128. The first output shaft109is operatively connected to the right intermediate shaft123by a first steering chain113mounted on sprockets, the sprockets fixedly mounted on the first output shaft109and the right intermediate shaft123. The steering cross-shaft108is also connected to a fixed planetary reducer107non-rotatably mounted to the chassis101on an opposite side of the chassis101from the rotatable planetary reducer105. The fixed planetary reducer107has essentially the same construction as the rotatable planetary reducer105. 
Thus, the fixed planetary reducer107has a second sun gear133, a second plurality of planet gears132and a second ring gear131, the second ring gear131fixedly attached to an external housing130of the fixed planetary reducer107. Because the fixed planetary reducer107is non-rotatably mounted to the chassis101, the external housing130of the fixed planetary reducer107is unable to rotate relative to the chassis101. A second output shaft139is fixedly mounted to the second plurality of planet gears132by a second carrier134. The second output shaft139is operatively connected to the left intermediate shaft125by a second steering chain115mounted on sprockets, the sprockets fixedly mounted on the second output shaft139and the left intermediate shaft125. While the steering cross-shaft108is shown as a single shaft inFIG.2, the steering cross-shaft108could instead be two separate shafts connected by a coupler. One of the separate shafts could be an input shaft of the rotatable planetary reducer105and the other separate shaft could be an input shaft of the fixed planetary reducer107, the input shafts being connected to the respective sun gears119,133of the rotatable and fixed planetary reducers105,107. When the external housing116of the rotatable planetary reducer105is rotated by the electric steering motor103, the first ring gear118imparts a change in rotational speed of the first plurality of planet gears126, which imparts a change in rotational speed of the first output shaft109as well as a change in rotational speed of the first sun gear119. The change in rotational speed of the first sun gear119causes a change in rotational speed of the steering cross-shaft108, which causes a change in rotational speed of the second sun gear133in the fixed planetary reducer107on the opposite side of the vehicle100from the rotatable planetary reducer105. A change in rotational speed of the second sun gear133causes a change in rotational speed of the second plurality of planet gears132, which causes a change in rotational speed of the second output shaft139. If desired or required, a motor speed reducer between the electric steering motor103and the rotatable planetary reducer105can be used to reduce speed from the drive shaft104of the electric steering motor103to the rotatable planetary reducer105, for example a speed reduction in a ratio in a range of 3:1 to 4:1. The motor speed reducer may comprise a planetary reducer or differently sized sprockets on the drive shaft104and the receiving shaft117on the external housing116of the rotatable planetary reducer105. Because the steering cross-shaft108is connected to the fixed planetary reducer107, which is non-rotatably mounted on the chassis101, and also to the first sun gear119, the fixed planetary reducer107constrains the first sun gear119so that rotation of the external housing116of the rotatable planetary reducer105, which imparts a change in rotational speed of the first ring gear118therein, can cause a change in rotational speed of the first sun gear119and the first plurality of planet gears126when the external housing116of the rotatable planetary reducer105is rotated by the steering motor103. Further, as a result of the change in rotational speed of the first ring gear118and subsequent changes in rotational speed of the first sun gear119and the first plurality of planet gears126, the change in rotational speed of the first output shaft109is in an opposite rotational direction from the change in rotational speed of the steering cross-shaft108.
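The speed relationships inside each planetary reducer follow the standard kinematics of a sun-planet-ring set: with k equal to the ring-to-sun tooth ratio (90/30 = 3 for the gears described above), the carrier speed is (sun speed + k * ring speed) / (1 + k), which yields the stated 4:1 carrier reduction when the ring is held. A minimal sketch, assuming this standard relation and using values that appear in Table 1 below:

K = 90 / 30  # ring teeth / sun teeth for reducers 105 and 107

def carrier_rpm(sun_rpm, ring_rpm, k=K):
    """Carrier (output shaft) speed from sun (cross-shaft) and ring (housing) speeds."""
    return (sun_rpm + k * ring_rpm) / (1.0 + k)

# Fixed reducer 107: ring gear 131 held at 0 rpm, cross-shaft at 4857.1 rpm
print(carrier_rpm(4857.1, 0.0))    # ~+1214.3 rpm, same direction as the cross-shaft
# Rotatable reducer 105 during a zero-speed left turn: ring 118 at +566.7 rpm
# forces the cross-shaft to about -850 rpm, so output shaft 109 turns the other way
print(carrier_rpm(-850.0, 566.7))  # ~+212.5 rpm, opposite to the cross-shaft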
However, at the fixed planetary reducer107, the external housing130of the fixed planetary reducer107, and the second ring gear131fixedly mounted therein, cannot rotate relative to the chassis101so the steering cross-shaft108imparts a change in rotational speed of the second sun gear133and the second plurality of planet gears132causing a change in rotational speed of the second output shaft139in the same rotational direction as the steering cross-shaft108. In this way, the rotational speeds of the first output shaft109and the second output shaft139are caused to change in opposite rotational directions when the electric steering motor103is operated. Further, by requiring the reduction ratios of the rotatable and fixed planetary reducers105,107to be the same, the change in speeds of the first output shaft109and the second output shaft139are the same, albeit in opposite rotational directions. Because the first output shaft109is operatively connected to the right intermediate shaft123, and the second output shaft139is operatively connected to the left intermediate shaft125, rotation of output shafts109,139adds speed to or subtracts speed from the intermediate shafts123,125. If desired or required, further speed reducers (e.g. differently sized sprockets at each end of each of the steering chains113,115) may be used to reduce speed from the output shafts109,139to the intermediate shafts123,125, respectively, for example a speed reduction in a ratio in a range of 2:1 to 3:1. Reducing the speed from the output shafts109,139to the intermediate shafts123,125reduces the required torque on the rotatable and fixed planetary reducers105,107, respectively, thereby permitting the use of lighter, less expensive components. Because the two output shafts109,139change rotational speed in opposite rotational directions when the electric steering motor103is operated, speed is added to the intermediate shaft at one side of the vehicle100, and therefore the wheels at that side, and speed is subtracted from the intermediate shaft, and therefore the wheels, at the other side of the vehicle100. This causes the vehicle100to turn toward the side where the wheels are rotating slower. Furthermore, if desired or required, further speed reducers (e.g. differently sized sprockets at each end of each of the final drive chains127a,127b,129a,129b,) may be used to reduce speed from the intermediate shafts123,125to the wheel axles143a,143b,145a,145b,for example a speed reduction in a ratio in a range of 2:1 to 3:1. One advantage of the steering system1lies in the arrangement where all of the operative connections between the various shafts are located proximate one side or the other of the vehicle100in a relatively narrow transverse space extending longitudinally along a length of the vehicle100. Thus, proximate the right side of the vehicle100, the steering power input chain112, the first steering chain113, the front right final drive chain127aand the rear right final drive chain127b,as well as all of the sprockets on which the chains are mounted, are all located in a narrow space between the rotatable planetary reducer105and the right-side wheels142. Likewise, proximate the left side of the vehicle100, the second steering chain115, the front left final drive chain129aand the rear left final drive chain129b,as well as all of the sprockets on which the chains are mounted, are all located in a narrow space between the fixed planetary reducer107and the left-side wheels144. 
It is therefore possible to include two transversely spaced-apart longitudinally extending enclosed compartments for containing lubricating oil surrounding all of the operative connections (i.e. chain drives) between the various shafts. Thus, a first oil compartment151is formed in the chassis101from chassis beams at the right side of the vehicle100, and a second oil compartment152is formed in the chassis101from chassis beams at the left side of the vehicle100. The steering power input chain112, the first steering chain113, the front right final drive chain127aand the rear right final drive chain127b,as well as all of the sprockets on which the chains are mounted, are all located in the first oil compartment151. The second steering chain115, the front left final drive chain129aand the rear left final drive chain129b,as well as all of the sprockets on which the chains are mounted, are all located in the second oil compartment152. The oil compartments151,152are filled with lubricating oil to form oil baths that lubricate the operative connections during operation of the vehicle100, and the only exposed connection is the steering cross-shaft108. The oil compartments151,152are preferably sealed, and may be provided with removable panels to permit access to the operative connections for maintenance and replacement. If desired or due to space constraints, the electric steering motor103and the rotatable planetary reducer105may be located on the left side of the vehicle100while the fixed planetary reducer107is located on the right side of the vehicle100. Operation of the vehicle100involves a variety of different driving operations including, for example, driving straight forward at full speed (Full Speed Straight, FSS), making a full left turn at zero speed (Zero Speed Full Left Turn, ZSFLT), making a full right turn at zero speed (Zero Speed Full Right Turn, ZSFRT), making a full left turn at full speed (Full Speed Full Left Turn, FSFLT), making a full right turn at full speed (Full Speed Full Right Turn, FSFRT), making a minor left turn at full speed (Full Speed Minor Left Turn, FSMLT), making a minor right turn at full speed (Full Speed Minor Right Turn, FSMRT), and making a low speed full left turn (Low Speed Full Left Turn, LSFLT). Table 1 illustrates the rotational velocities (speed and direction) of various components of the vehicle100during the driving operations indicated above.
TABLE 1
Rotational Velocity (rpm)

Vehicle Component                             FSS        ZSFLT      ZSFRT      FSFLT      FSFRT      FSMLT      FSMRT      LSFLT
120 Propulsion System
121 propulsion transmission                   +485.7     0          0          +485.7     +485.7     +485.7     +485.7     +97.15
123 right intermediate shaft                  +485.7     +85        −85        +570.7     +400.7     +497.9     +473.6     +170
125 left intermediate shaft                   +485.7     −85        +85        +400.7     +570.7     +473.6     +497.9     +24.3
142 right wheels                              +200       +35        −35        +235       +165       +205       +195       +70
144 left wheels                               +200       −35        +35        +165       +235       +195       +205       +10
1 Steering System
112 steering power input chain                0          +1812.4    −1812.4    +1812.4    −1812.4    +260.2     −260.2     +1561
118 first ring gear (rotatable reducer)       0          +566.7     −566.7     +566.7     −566.7     +81.0      −81.0      +485
108 steering cross-shaft                      +4857.1    −850       +850       +4007.1    +5707.1    +4735.7    +4978.6    +242.9
109 first output shaft (rotatable reducer)    +1214.3    +212.5     −212.5     +1426.8    +1001.8    +1244.6    +1183.9    +425
139 second output shaft (fixed reducer)       +1214.3    −212.5     +212.5     +1001.8    +1426.8    +1183.9    +1244.6    +60.7

Driving straight at full speed (FSS) causes the first output shaft109, the second output shaft139and the steering cross-shaft108to rotate in the same rotational direction as the right and left intermediate shafts123,125because the first output shaft109is operatively connected to the right intermediate shaft123, the second output shaft139is operatively connected to the left intermediate shaft125and the steering cross-shaft108is connected to both the first and second output shafts109,139through the sun and planet gears of the rotatable and fixed planetary reducers105,107, respectively. There is no rotational load on the ring gears118,131of the rotatable and fixed planetary reducers105,107, respectively, so the first ring gear118does not rotate, and thus the external housing116of the rotatable planetary reducer105also does not rotate, and the drive shaft104of the steering motor103also does not rotate. Propulsion power does not flow through the rotatable planetary reducer105to the steering motor103, therefore, when the steering motor103is not operated, the steering motor103and the external housing116of the rotatable planetary reducer105experience little or no torque. Table 1 further shows that the right and left wheels142,144have a lower rotational speed than the right and left intermediate shafts123,125, respectively, because there are speed reducers between the right and left intermediate shafts123,125and the right and left wheels142,144, respectively. Likewise, the speed reducers from the first and second output shafts109,139to the right and left intermediate shafts123,125, respectively, mean that the rotational speed imparted on the first and second output shafts109,139by the right and left intermediate shafts123,125, respectively, is increased. In addition, the rotational speed of the steering cross-shaft108is greater than those of the first and second output shafts109,139by a factor of four because both the rotatable and fixed planetary reducers105,107have a ratio of 4:1. When making a zero-speed full turn left (ZSFLT) or right (ZSFRT), the engine is not operated so the rotational speed of the engine drive shaft122is zero. When the drive shaft104of the steering motor103is driven forward (+'ve direction), the vehicle100turns left, and when the drive shaft104of the steering motor103is driven backward (−'ve direction), the vehicle100turns right. The drive shaft104of the steering motor103is driven at top speed causing the external housing116of the rotatable planetary reducer105to rotate in the same direction but at a lower speed due to the speed reducer (3.2:1 ratio) between the drive shaft104and the rotatable planetary reducer105.
Rotation of the external housing116of the rotatable planetary reducer105causes the first output shaft109to rotate in the same rotational direction as the external housing116of the rotatable planetary reducer105but at a lower speed, while causing the steering cross-shaft108to rotate in the opposite rotational direction as the external housing116but at a higher speed. The speed ratio between the steering cross-shaft108and the first output shaft109is 4:1 because the rotatable planetary reducer105has a 4:1 ratio. The second output shaft139has the same rotational speed as the first output shaft109but in the opposite rotational direction. The output shafts109,139impart rotational speed on respective intermediate shafts123,125at a ratio of 2.5:1 due to speed reducers between the output shafts109,139and the intermediate shafts123,125, and the intermediate shafts123,125impart rotational speed on respective wheels142,144at a ratio of 2.5:1 due to speed reducers between the intermediate shafts123,125and the wheels142,144. While the rotational speeds of the right and left wheels142,144are the same, the right wheels142rotate in the opposite direction as the left wheels144so the vehicle turns away from the side where the wheels are being driven forward (i.e. towards the side where the wheels are being driven backward). The remaining driving operations illustrate rotational motion of the various vehicle components when the vehicle is both driven and turned. FSFLT, FSFRT, FSMLT and FSMRT illustrate that rotational motions imparted by the steering system1are superimposed on the rotational motions imparted by the propulsion system20because the steering system1is operatively connected to the propulsion system20by the steering chains113,115, even though the steering system1and the propulsion system20are otherwise separated. The average speed of the wheels142,144, thus the speed of the vehicle100, is directly proportional to the speed of the engine (i.e. speed of the engine drive shaft122). The rotational speed of the vehicle100(yaw) is directly proportional to the speed of the steering motor103. FIG.3andFIG.4depict a six-wheeled skid steered vehicle200comprising an embodiment of the steering system ofFIG.1. The vehicle200comprises a chassis201on which three pairs of transversely opposed wheels are rotatably mounted. The wheels comprise right wheels242including a right front wheel242a,a right middle wheel242band a right rear wheel242c,and left wheels244including a left front wheel244a,a left middle wheel244band a left rear wheel244c.The right wheels242a,242b,242care mounted on right wheel axles243a,243b,243c,respectively. The left wheels244a,244b,244care mounted on left wheel axles245a,245b,245c,respectively. The wheel axles243a,243b,243c,245a,245b,245care rotatably mounted on the chassis201. The wheels242,244are driven by a propulsion motor (not shown) that is coupled through a main drive shaft (not shown) to a propulsion transaxle224comprising an open propulsion differential221, the propulsion differential221being connected to right and left intermediate shafts223,225, respectively. The propulsion motor drives rotation of the right and left intermediate shafts223,225, respectively. The right wheel axles243a,243b,243care all driven by rotation of the right intermediate shaft223, the right wheel axles243a,243b,243cbeing operatively connected to the right intermediate shaft223by four right final drive chains227(individually labeled as227a,227b,227c,227d).
The left wheel axles245a,245b,245care all driven by rotation of the left intermediate shaft225, the left wheel axles245a,245b,245cbeing operatively connected to the left intermediate shaft225by four left final drive chains229(individually labeled as229a,229b,229c,229d). First and second right idler assemblies247a,247b,respectively, and first and second left idler assemblies249a,249b,respectively, are mounted on the chassis201so that four shorter final drive chains may be used on each side of the vehicle200. The first right idler assembly247arotatably supports the right intermediate shaft223while the first left idler assembly249arotatably supports the left intermediate shaft225. The second right idler assembly247brotatably supports a right idler shaft246while the second left idler assembly249brotatably supports a left idler shaft248. The wheel axles (243a,243b,243c,245a,245b,245c) intermediate shafts (223,225) and idler shaft (246,248) all have sprockets250(i.e. right sprockets250aand left sprockets250b) fixedly mounted thereon, on which the final drive chains (227,229) are mounted. All of the right final drive chains227and right sprockets250aare located within a right oil bath compartment251, and all of the left final drive chains229and left sprockets250bare located within a left oil bath compartment252. The oil bath compartments251,252are sealed compartments formed from chassis beams and contain lubricating oil to keep chains and sprockets lubricated during operation of the vehicle200. Similar to the embodiment described in connection withFIG.2, the steering system for the embodiment ofFIG.3andFIG.4comprises an electric steering motor203, a rotatable planetary reducer205having a reduction ratio of 4:1, a fixed planetary reducer207having a reduction ratio of 4:1, a right steering chain213and a left steering chain215. A motor speed reducer202with a reduction ratio of 5:1 is mounted between a drive shaft204of the electric steering motor203and the rotatable planetary reducer205. Power is transmitted from the electric steering motor203to the rotatable planetary reducer205by a steering input chain212, which connects a sprocket253afixedly mounted on an output shaft258of the motor speed reducer202to a sprocket253bfixedly mounted on a receiving shaft217, the rotational speed of the sprocket253bbeing about 3.2× less than the rotational speed of the sprocket253a.The receiving shaft217is fixedly mounted on an external housing of the rotatable planetary reducer205. As previously described, rotation of the external housing of the rotatable planetary reducer205causes a change in rotational speed of an output shaft209. A proximal end of a first differential shaft206is axially aligned with and directly connected to a distal end of the output shaft209so that the change in rotational speed of the output shaft209causes a change in rotational speed of the first differential shaft206. The first differential shaft206is rotationally supported by a third right idler assembly257mounted on the chassis201. 
The first differential shaft206is operatively connected to the right intermediate shaft223by the right steering chain213, which is mounted on sprockets254a,254bfixedly mounted to the first differential shaft206and the right intermediate shaft223, respectively, the rotational speed of the sprocket254bbeing 2.5× less than the rotational speed of the sprocket254a.The rotatable planetary reducer205has an input shaft210, which is axially aligned in a transverse direction with an input shaft111of the fixed planetary reducer207. The two input shafts210,111are rotationally connected by a coupler208so that a change in rotational speed of the input shaft210of the rotatable planetary reducer205causes a change in rotational speed of the input shaft111of the fixed planetary reducer207. An output shaft239of the fixed planetary reducer207is operatively connected to the left intermediate shaft225by the left steering chain215, which is mounted on sprockets255a,255bfixedly mounted to the output shaft239and the left intermediate shaft225, respectively. Operation of the steering system is as described in connection withFIG.2. The steering input chain212, the right steering chain213and the sprockets253a,253b,254a,254binvolved with the steering system at the right side of the vehicle200are located in the right oil bath compartment251. The left steering chain215and all of the sprockets255a,255binvolved with the steering system at the left side of the vehicle200are located in the left oil bath compartment252. Service brakes can be located on the wheel axles of one of the wheel pairs, or on another shaft. In the embodiment shown inFIG.3, a right service brake261is located on the right idler shaft246between the right middle wheel242band the right rear wheel242c;and a left service brake262is located on the left idler shaft248between the left middle wheel244band the left rear wheel244c. The novel features will become apparent to those of skill in the art upon examination of the description. It should be understood, however, that the scope of the claims should not be limited by the embodiments, but should be given the broadest interpretation consistent with the wording of the claims and the specification as a whole. | 31,763 |
11858549 | DETAILED DESCRIPTION The following detailed description is merely exemplary in nature and is not intended to limit the application and uses. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, summary, or the following detailed description. As used herein, the term “module” refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including without limitation: an application specific integrated circuit (ASIC), a field-programmable gate-array (FPGA), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. Embodiments of the present disclosure may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of the present disclosure may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. In addition, those skilled in the art will appreciate that embodiments of the present disclosure may be practiced in conjunction with any number of systems, and that the systems described herein are merely exemplary embodiments of the present disclosure. For the sake of brevity, conventional techniques related to signal processing, data transmission, signaling, control, machine learning models, radar, lidar, image analysis, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in an embodiment of the present disclosure. The subject matter described herein discloses apparatus, systems, techniques and articles for blending manual steering control with a driver assist or semi-autonomous driving feature such as Super Cruise, Ultra Cruise, lane keeping assist (LKA), lane departure warning (LDW), lane centering control (LCC), lane keeping support (LKS), and others without disengaging the driver assist or semi-autonomous driving feature. The following disclosure provides example systems and methods for generating a steering command for controlling a vehicle during vehicle operations that blends manual steering with a driver assist or semi-autonomous driving feature without disengaging the driver assist or semi-autonomous driving feature. The following disclosure provides an example impedance control system and algorithm for blending measured driver steering torque (derived from a steering wheel) into the torque of a trajectory system, such as a lateral control system. FIG.1is a block diagram of an example vehicle100that implements an impedance controller214.
The vehicle100generally includes a chassis12, a body14, front wheels16, and rear wheels18. The body14is arranged on the chassis12and substantially encloses components of the vehicle100. The body14and the chassis12may jointly form a frame. The wheels16-18are each rotationally coupled to the chassis12near a respective corner of the body14. The vehicle100is depicted in the illustrated embodiment as a passenger car, but other vehicle types, including trucks, sport utility vehicles (SUVs), recreational vehicles (RVs), etc., may also be used. The vehicle100is capable of being driven manually and semi-autonomously. The vehicle100further includes a propulsion system20, a transmission system22, and a steering system24. The steering system24includes a steering wheel25that is coupled to the wheels16and/or18through a steering column and an axle in a manner that is well understood by those skilled in the art wherein when a driver turns the steering wheel25the wheels16and/or18turn accordingly. The vehicle100further includes a brake system26, a sensor system28, an actuator system30, at least one data storage device32, at least one controller34, and a communication system36that is configured to wirelessly communicate information to and from other entities48. The data storage device32stores data for use in automatically controlling the vehicle100. The data storage device32may be part of the controller34, separate from the controller34, or part of the controller34and part of a separate system. The controller34includes at least one processor44and a computer-readable storage device or media46. Although only one controller34is shown inFIG.1, embodiments of the vehicle100may include any number of controllers34that communicate over any suitable communication medium or a combination of communication mediums and that cooperate to process the sensor signals, perform logic, calculations, methods, and/or algorithms, and generate control signals to automatically control features of the vehicle100. The controller34, in this example, is configured to implement the impedance controller214. The controller34includes at least one processor and a computer-readable storage device or media encoded with programming instructions for configuring the controller. The processor may be any custom-made or commercially available processor, a central processing unit (CPU), a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), an auxiliary processor among several processors associated with the controller, a semiconductor-based microprocessor (in the form of a microchip or chip set), any combination thereof, or generally any device for executing instructions. The computer readable storage device or media may include volatile and non-volatile storage in read-only memory (ROM), random-access memory (RAM), and keep-alive memory (KAM), for example. KAM is a persistent or non-volatile memory that may be used to store various operating variables while the processor is powered down. The computer-readable storage device or media may be implemented using any of a number of known memory devices such as PROMs (programmable read-only memory), EPROMs (electrically PROM), EEPROMs (electrically erasable PROM), flash memory, or any other electric, magnetic, optical, or combination memory devices capable of storing data, some of which represent executable programming instructions, used by the controller. 
The programming instructions may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions. FIG.2is a block diagram of an example steering environment200for a vehicle202(e.g., vehicle100). The example steering environment200includes an example steering system204that influences a position of vehicle wheels (e.g., wheels16and/or18). The steering system204includes a trajectory controller206that calculates a vehicle steering angle θd(203) as a command to steer the vehicle202in accordance with a planned trajectory when the vehicle202is driven autonomously or semi-autonomously. The trajectory controller206may implement a lateral control system and driver assist or semi-autonomous driving features such as Super Cruise, Ultra Cruise, lane keeping assist (LKA), lane departure warning (LDW), lane centering control (LCC), lane keeping support (LKS), and others. The steering system204also includes a power steering system, such as an electric power steering (EPS) system208, that provides power (e.g., electrical power) steering assist for turning vehicle wheels (e.g., wheels16and/or18) in response to a vehicle driver210turning a steering wheel (e.g., steering wheel25) to assist the driver210in steering the vehicle202. The EPS system208also controls the turning of the vehicle wheels (e.g., wheels16and/or18) when the vehicle202is driven autonomously or semi-autonomously. The EPS system208may only accept steering torque as a command, e.g., a measured driver steering torque command τD(201) derived from the driver210turning a steering wheel (e.g., steering wheel25) and/or a control torque command τC(207) derived from the vehicle steering angle θd(203). A steering controller212is also included in the steering system204for accurately converting the vehicle steering angle θd(203) from the trajectory controller206to a steering control torque command τC(207) for commanding the EPS system208. The example steering system204further includes an impedance controller214for blending a measured driver steering torque command τD(201) into the control torque command τC(207) derived from a trajectory controller206, such as a control torque command τC(207) derived from a lateral control system implemented by the trajectory controller. The example impedance controller214comprises a controller (e.g., controller34) configured by programming instructions on non-transitory computer readable media to generate an impedance-adjusted vehicle steering angle command θr(209) based on a vehicle steering angle command θd(203) that was generated by the trajectory controller206to compensate for a trajectory error e (211) and a measured driver steering torque command τD(201) generated in response to navigation of the vehicle202by the driver210using a vehicle steering wheel. The example impedance controller214enables the EPS system208to generate a steering command based on the measured driver steering torque command τD(201) and the impedance-adjusted vehicle steering angle command θr(209) to control the turning of the vehicle wheels.
The example impedance controller214is configured to generate the impedance-adjusted vehicle steering angle command based on the equation: M(θ̈r − θ̈d) + B(θ̇r − θ̇d) + K(θr − θd) = τD, wherein M, B, K are tunable control parameters, τD is the measured driver steering torque command, θd is the vehicle steering angle command, and θr is the impedance-adjusted vehicle steering angle command. In an s-domain representation the example impedance controller214may be configured to generate the impedance-adjusted vehicle steering angle command based on the s-domain equation: θr = θd + τD/(Ms² + Bs + K), wherein M, B, K are tunable control parameters, τD is the measured driver steering torque command, θd is the vehicle steering angle command, and θr is the impedance-adjusted vehicle steering angle command. In a discrete time representation, the example impedance controller214may be configured to generate the impedance-adjusted vehicle steering angle command based on the discrete time implementation equations: θ̈r[k+1] = θ̈d[k+1] + (1/M)(τD[k+1] − B(θ̇r[k] − θ̇d[k]) − K(θr[k] − θd[k])); θ̇r[k+1] = θ̇r[k] + θ̈r[k+1]Δt; and θr[k+1] = θr[k] + θ̇r[k+1]Δt, wherein M, B, K are tunable control parameters, t is time, τD is the measured driver steering torque command, θd is the vehicle steering angle command, and θr is the impedance-adjusted vehicle steering angle command. In the example steering system204, the example steering controller212is configured to generate a first vehicle steering control torque command τC(207) while the vehicle is driven in a semi-autonomous mode. In this mode, the example steering controller212is configured to generate the first vehicle steering control torque command τC(207) based on a difference eθ(213) between the first vehicle steering angle command θd(203) and a measured vehicle steering angle command θm(215). In the example steering system204, the example steering controller212is also configured to generate an impedance-adjusted vehicle steering control torque command based on a difference eθ(213) between the impedance-adjusted vehicle steering angle command θr(209) and a measured vehicle steering angle command θm(215). The example steering system204may be configured to generate the impedance-adjusted vehicle steering control torque command based on the s-domain equation: τC = (Kp + Ki/s + Kd·s)eθ, wherein Kp, Ki, Kd are known control parameters, and τC is the impedance-adjusted vehicle steering control torque command. The example power steering system (e.g., EPS system208) is configured to generate a steering command based on the measured driver steering torque command and the impedance-adjusted vehicle steering control torque command. In the example steering system204, the trajectory controller206is configured to generate the first vehicle steering angle command θd(203) based on the trajectory error e (211). The trajectory controller206may be configured to generate the vehicle steering angle command θd(203) based on the s-domain equation: θd = (Kpt + Kit/s + Kdt·s)e, wherein Kpt, Kit, Kdt are known control parameters, e is the trajectory error, and θd is the first vehicle steering angle command. The example impedance controller214can allow a steering system to blend a measured driver steering torque into the torque of a lateral control system. This can enable many advanced features for a steering system with respect to lateral control.
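As a concrete illustration of the discrete time implementation above, the following sketch advances the impedance-adjusted command by one sample. It is not taken from the disclosure; the function name, argument layout and the simple Euler integration are assumptions that merely mirror the three discrete-time equations.

```python
def impedance_step(theta_r, theta_r_dot, theta_d, theta_d_dot,
                   theta_d_ddot_next, tau_d_next, M, B, K, dt):
    """One sample of the discrete-time impedance update (Euler integration).

    theta_d: trajectory steering angle command, theta_r: impedance-adjusted command,
    tau_d: measured driver steering torque, (M, B, K): tunable impedance parameters.
    """
    # First discrete-time equation: acceleration of the impedance-adjusted command.
    theta_r_ddot_next = theta_d_ddot_next + (
        tau_d_next
        - B * (theta_r_dot - theta_d_dot)
        - K * (theta_r - theta_d)
    ) / M
    # Remaining two equations: integrate rate and angle forward by one sample of length dt.
    theta_r_dot_next = theta_r_dot + theta_r_ddot_next * dt
    theta_r_next = theta_r + theta_r_dot_next * dt
    return theta_r_next, theta_r_dot_next
```

Reading the governing equation this way also shows the blending behaviour: with no driver torque the adjusted command simply tracks the trajectory command, while a sustained driver torque settles toward an offset of roughly τD/K, which is why the choice of tunable parameters governs how much steering authority the driver retains.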
The example impedance controller214is configured to provide variable impedance based on the type of vehicle maneuver, allowing a steering system to vary the impedance, and thus its effect on the computed vehicle steering angle command, according to the type of vehicle maneuver. To employ variable impedance based on type of vehicle maneuver, the impedance controller214is configured to select unique values for the tunable control parameters (M, B, K) based on the type of vehicle maneuver. For example, one set of tunable control parameters may be used while the vehicle is rounding a curve and a different set of tunable control parameters with different values may be used while the vehicle is driving on a straightaway. The example impedance controller214is configured to allow a steering system204to adjust steering feel when lateral control features are active. To adjust steering feel when lateral control features are active, the impedance controller214is configured to adjust the computed vehicle steering angle command by selecting unique values for the tunable control parameters (M, B, K) based on the type of vehicle maneuver. For example, the tunable control parameters may be selected to affect driver feel during various maneuvers. The example impedance controller214is configured to allow a driver to hold an intended offset from a control path followed by a semi-autonomous driving system without disengaging the semi-autonomous driving system. To allow a driver to hold an intended offset from a control path followed by a semi-autonomous driving system without disengaging the semi-autonomous driving system, the impedance controller214is configured to select values for the tunable control parameters (M, B, K) to allow the intended offset from the control path. For example, a different set of tunable control parameters may be used when the vehicle senses that the driver is attempting to hold an intended offset from the control path to make it easier for the driver to hold the offset. The example impedance controller214is configured to reduce the amount of control torque that opposes an intended driver override maneuver. To reduce the amount of control torque that opposes an intended override maneuver, the impedance controller214is configured to select values for the tunable control parameters (M, B, K) that reduce the amount of control torque that opposes the intended override maneuver. For example, a different set of tunable control parameters may be used when the vehicle senses that the driver is attempting an override maneuver to make it easier for the driver to perform the override maneuver. The example impedance controller214is configured to adjust the tunable control parameters (M, B, K) to cater to hardware differences, driving preferences and road conditions. For example, a set of tunable control parameters may be selected or adjusted based on vehicle hardware configuration, a set of tunable control parameters may be selected or adjusted based on a particular driver's driving preferences, and a set of tunable control parameters may be selected or adjusted based on road conditions. FIG.3is a process flow chart depicting an example process300for generating a steering command for controlling a vehicle during vehicle operations. The order of operation within process300is not limited to the sequential execution as illustrated inFIG.3but may be performed in one or more varying orders as applicable and in accordance with the present disclosure.
The example process300includes generating a first vehicle steering control torque command by a steering controller in the vehicle while the vehicle is driven in a semi-autonomous mode (operation302). The generating a first vehicle steering control torque command may include generating the first vehicle steering control torque command by the steering controller based on a difference between the first vehicle steering angle command and a measured vehicle steering angle command. The example process300includes generating an impedance-adjusted vehicle steering angle command based on a first vehicle steering angle command that was generated to compensate for a trajectory error, and a measured driver steering torque command generated in response to navigation of the vehicle using a vehicle steering wheel (operation304). The generating an impedance-adjusted vehicle steering angle command may include determining the impedance-adjusted vehicle steering angle command based on the equation: M(θ̈r − θ̈d) + B(θ̇r − θ̇d) + K(θr − θd) = τD, wherein M, B, K are tunable control parameters, τD is the measured driver steering torque command, θd is the vehicle steering angle command, and θr is the impedance-adjusted vehicle steering angle command. The generating an impedance-adjusted vehicle steering angle command may comprise determining the impedance-adjusted vehicle steering angle command based on the s-domain equation: θr = θd + τD/(Ms² + Bs + K), wherein M, B, K are tunable control parameters, τD is the measured driver steering torque command, θd is the vehicle steering angle command, and θr is the impedance-adjusted vehicle steering angle command. The generating an impedance-adjusted vehicle steering angle command may include determining the impedance-adjusted vehicle steering angle command based on the following discrete time implementation equations: θ̈r[k+1] = θ̈d[k+1] + (1/M)(τD[k+1] − B(θ̇r[k] − θ̇d[k]) − K(θr[k] − θd[k])); θ̇r[k+1] = θ̇r[k] + θ̈r[k+1]Δt; and θr[k+1] = θr[k] + θ̇r[k+1]Δt, wherein M, B, K are tunable control parameters, t is time, τD is the measured driver steering torque command, θd is the vehicle steering angle command, and θr is the impedance-adjusted vehicle steering angle command. The example process300includes generating an impedance-adjusted vehicle steering control torque command by the steering controller in the vehicle based on a difference between the impedance-adjusted vehicle steering angle command and a measured vehicle steering angle command (operation306). The generating an impedance-adjusted vehicle steering control torque command comprises determining the impedance-adjusted vehicle steering control torque command based on the s-domain equation: τC = (Kp + Ki/s + Kd·s)eθ, wherein Kp, Ki, Kd are known control parameters, and τC is the impedance-adjusted vehicle steering control torque command. The example process300includes generating a steering command by a power steering system (e.g., EPS) in the vehicle based on the measured driver steering torque command and the impedance-adjusted vehicle steering control torque command (operation308). The vehicle steering angle command may be generated based on the s-domain equation: θd = (Kpt + Kit/s + Kdt·s)e, wherein Kpt, Kit, Kdt are known control parameters, e is the trajectory error, and θd is the first vehicle steering angle command. The example process300includes operating the vehicle in accordance with the steering command (operation310).
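The chain of operations 302 through 308 can be pictured as two discrete PID stages wrapped around the impedance adjustment. The sketch below is illustrative only: the PID class, the gain values, the sample time and the example inputs are assumptions rather than values from the disclosure, and the impedance blend itself is the step sketched earlier.

```python
class PID:
    """Minimal discrete PID; stands in for the (Kpt, Kit, Kdt) and (Kp, Ki, Kd) controllers."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


trajectory_pid = PID(kp=1.2, ki=0.1, kd=0.05)   # trajectory error e -> angle command (context for operation 302)
steering_pid = PID(kp=8.0, ki=0.5, kd=0.2)      # angle error e_theta -> control torque (operation 306)

dt = 0.01                                        # assumed 100 Hz control loop
e_traj = 0.02                                    # illustrative trajectory error
theta_m = 0.0                                    # illustrative measured steering angle

theta_d = trajectory_pid.update(e_traj, dt)          # first vehicle steering angle command
theta_r = theta_d                                    # operation 304 would blend the driver torque here
tau_c = steering_pid.update(theta_r - theta_m, dt)   # impedance-adjusted control torque sent to the EPS (operation 308)
```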
The generating a steering command for controlling a vehicle during vehicle operations may include blending a measured driver steering torque into the torque of a lateral control system. The blending a measured driver steering torque into the torque of a lateral control system may include employing variable impedance based on type of vehicle maneuver to adjust the computed vehicle steering angle command. The employing variable impedance based on type of vehicle maneuver may include selecting unique values for the tunable control parameters based on the type of vehicle maneuver. The blending a measured driver steering torque into the torque of a lateral control system may include adjusting steering feel when lateral control features are active. The adjusting steering feel when lateral control features are active may include selecting values for the tunable control parameters to improve the steering feel. The blending a measured driver steering torque into the torque of a lateral control system may include allowing a driver to hold an intended offset from a control path followed by a semi-autonomous driving system without disengaging the semi-autonomous driving system. The allowing a driver to hold an intended offset from a control path followed by a semi-autonomous driving system without disengaging the semi-autonomous driving system may include selecting values for the tunable control parameters to allow the intended offset from the control path. The blending a measured driver steering torque into the torque of a lateral control system may include reducing the amount of control torque that opposes an intended override maneuver. The reducing the amount of control torque that opposes an intended override maneuver may include selecting values for the tunable control parameters that reduce the amount of control torque that opposes the intended override maneuver. Described herein are apparatus, systems, techniques and articles for blending manual steering with a driver assist or semi-autonomous driving feature such as Super Cruise, Ultra Cruise, lane keeping assist (LKA), lane departure warning (LDW), lane centering control (LCC), lane keeping support (LKS), and others without disengaging the driver assist or semi-autonomous driving feature. The described subject matter discloses apparatus, systems, techniques and articles that provide for blending a driver's steering command with a control command from a trajectory control system, such as a lateral control system, without requiring precise EPS models or noise-free measurements. The described subject matter discloses apparatus, systems, techniques and articles that provide for blending various types of driver/controller commands without needing to modify the power steering (e.g., EPS) type or communication protocol. The described subject matter discloses apparatus, systems, techniques and articles that may allow for granting the driver adequate steering authority when needed (e.g., when driver assist and/or other semi-autonomous driving features are engaged) and an improved steering feel. The described subject matter discloses apparatus, systems, techniques and articles that provide an impedance control algorithm to blend the measured driver steering torque into the torque of the lateral control system. The described subject matter discloses apparatus, systems, techniques and articles that provide for blending the measured driver steering torque into the torque of the lateral control system.
The described subject matter discloses apparatus, systems, techniques and articles that may employ variable impedance depending on the maneuver. The described subject matter discloses apparatus, systems, techniques and articles that may improve the steering feel when lateral control features are active. The described subject matter discloses apparatus, systems, techniques and articles that may allow the driver to hold an intended offset from the lane center without disengaging a lane centering feature. The described subject matter discloses apparatus, systems, techniques and articles that may allow for reducing the amount of control torque that opposes an intended override maneuver without disengaging a driver assist feature. The described subject matter discloses apparatus, systems, techniques and articles that may work in the aforementioned ways even when the measured driver steering torque cannot be estimated with high accuracy. The described subject matter discloses apparatus, systems, techniques and articles that can generate a vehicle steering angle command based on the equation: M(θ̈r − θ̈d) + B(θ̇r − θ̇d) + K(θr − θd) = τD, wherein M, B, K are tunable control parameters. The tunable control parameters may be selected to allow for blending a driver's steering command with a control command from a trajectory control system, improving steering feel, employing variable impedance depending on the maneuver, allowing a driver to hold an intended offset from a lane center without disengaging a lane centering feature, allowing for reducing the amount of control torque that opposes an intended override maneuver without disengaging a driver assist feature, and/or allowing for accomplishing the aforementioned features when measured driver steering torque cannot be estimated with high accuracy. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure. | 26,501 |
11858550 | DETAILED DESCRIPTION This disclosure details a brace assembly utilized to support high voltage modules of an electrified vehicle powertrain. Exemplary high voltage modules supported by the brace can include an onboard charger, an onboard generator, a converter, an inverter system controller, or some combination of these. The brace is configured to yield in response to a load to provide a desired kinematic response. Referring toFIG.1, a powertrain10of a plug-in hybrid electric vehicle (PHEV) includes a traction battery pack14having a plurality of battery arrays18, an internal combustion engine20, a motor22, and a generator24. The motor22and the generator24are types of electric machines. The motor22and generator24may be separate or have the form of a combined motor-generator. Although depicted as a PHEV, it should be understood that the concepts described herein are not limited to PHEVs and could extend to traction battery packs in any other type of electrified vehicle, including, but not limited to, other hybrid electric vehicles (HEVs), battery electric vehicles (BEVs), fuel cell vehicles, etc. In this embodiment, the powertrain10is a power-split powertrain that employs a first drive system and a second drive system. The first and second drive systems generate torque to drive one or more sets of vehicle drive wheels28. The first drive system includes a combination of the engine20and the generator24. The second drive system includes at least the motor22, the generator24, and the traction battery pack14. The motor22and the generator24are portions of an electric drive system of the powertrain10. The engine20and the generator24can be connected through a power transfer unit30, such as a planetary gear set. Of course, other types of power transfer units, including other gear sets and transmissions, can be used to connect the engine20to the generator24. In one non-limiting embodiment, the power transfer unit30is a planetary gear set that includes a ring gear32, a sun gear34, and a carrier assembly36. The generator24can be driven by the engine20through the power transfer unit30to convert kinetic energy to electrical energy. The generator24can alternatively function as a motor to convert electrical energy into kinetic energy, thereby outputting torque to a shaft38connected to the power transfer unit30. The ring gear32of the power transfer unit30is connected to a shaft40, which is connected to the vehicle drive wheels28through a second power transfer unit44. The second power transfer unit44may include a gear set having a plurality of gears46. Other power transfer units could be used in other examples. The gears46transfer torque from the engine20to a differential48to ultimately provide traction to the vehicle drive wheels28. The differential48may include a plurality of gears that enable the transfer of torque to the vehicle drive wheels28. In this example, the second power transfer unit44is mechanically coupled to an axle50through the differential48to distribute torque to the vehicle drive wheels28. The motor22can be selectively employed to drive the vehicle drive wheels28by outputting torque to a shaft54that is also connected to the second power transfer unit44. In this embodiment, the motor22and the generator24cooperate as part of a regenerative braking system in which both the motor22and the generator24can be employed as motors to output torque. For example, the motor22and the generator24can each output electrical power to recharge cells of the traction battery pack14. 
With reference toFIG.2, a vehicle60includes the powertrain10. In the exemplary vehicle60, the traction battery pack14is positioned adjacent an underbody of the vehicle60. High voltage modules of the powertrain10are positioned in a front compartment or frunk area of the vehicle60beneath a hood64. In this example, the high voltage modules include a DC/DC converter68, an inverter system controller72, an onboard generator76, and an onboard charger80. In the exemplary embodiment, the high voltage modules are supported by a brace assembly84. For purposes of this disclosure, high voltage is voltage greater than or equal to 60 volts. High voltage modules are modules configured to accommodate voltage greater than or equal to 60 volts. With reference now toFIGS.3and4and continued reference toFIG.2, the brace assembly84, in the exemplary embodiment, includes a cross-brace88, a driver side bridging bracket92, and a passenger side bridging bracket96. The brace assembly84extends in a cross-vehicle direction from a passenger side frame rail100to a driver side frame rail104of the vehicle60. The frame rails100,104extend longitudinally along a length of the vehicle60. In the exemplary embodiment, the cross-brace88, driver side bridging bracket92, and passenger side bridging bracket96are separate and distinct components that are pressure die cast separately from each other. When installed, the driver side bridging bracket92and a driver side of the cross-brace88sandwich a portion of the driver side frame rail104. Similarly, the passenger side bridging bracket96and a passenger side of the cross-brace88sandwich a portion of the passenger side frame rail100. During assembly, the cross-brace88can be moved vertically upward from beneath the frame rails100,104. The cross-brace88can then be secured to a lower surface108of the frame rail100and a lower surface112of the frame rail104utilizing, for example, mechanical fasteners116. The driver side bridging bracket92can then be secured to an upper surface118of the driver side frame rail104and a driver side of the cross-brace88to sandwich the driver side frame rail104. Also, the passenger side bridging bracket96can be secured to an upper surface124of the passenger side frame rail100and to a passenger side of the cross-brace88. The mechanical fasteners116can be used to secure the driver side bridging bracket92and the passenger side bridging bracket96. The multi-piece design of the exemplary brace assembly84thus facilitates assembly and decking of the brace assembly84to the frame rails100,104during vehicle assembly. With the frame rails100,104in an installed position, the cross-brace88can be moved vertically upward from beneath the frame rails100,104to an installed position. The driver side bridging bracket92and passenger side bridging bracket96can then be moved to an installed position from vertically above the frame rails100,104. A forward driver side foot140and a rear driver side foot144extend laterally outward on the driver side of the cross-brace88. The feet140,144each include an aperture that receives one of the mechanical fasteners116when secured directly to the lower surface108. The feet140,144can directly contact the lower surface108of the frame rail104when secured to the frame rail104. The passenger side of the cross-brace88includes a forward foot and a rearward foot that extend laterally outward beneath the frame rail100and directly contact the lower surface112when secured directly to the frame rail100.
The various high voltage modules, here, the converter68, the inverter system controller72, generator76, and charger80are disposed directly atop the cross-brace88when secured to the brace assembly84. The brace assembly84can further be used to support an electric machine, here the motor22, which can be located directly vertically beneath the cross-brace88of the brace assembly84. With reference now toFIG.5and continued reference toFIGS.2-4, the front driver side foot140, in the exemplary embodiment, includes a frangible feature160. In the exemplary embodiment, the frangible feature160is provided by apertures164, which can be holes, slots, or cavities. In the exemplary embodiment, the apertures164are blind apertures. For purposes of this disclosure, blind apertures refer to apertures that do not extend entirely through the front driver side foot140. That is, blind apertures open to either the first side or an opposite, second side of the foot, but not to both the first side and the second side. When a load L (FIG.2) is directed to the vehicle60, the resulting load path can extend through the brace assembly84. Due to the frangible feature160, the cross-brace88tends to fracture in the area of the frangible feature160. This is due to, among other things, a reduced thickness of the front driver side foot140in this area. Due to the frangible feature160, the front driver side foot140is configured to break away in response to the load L. The load L necessary to fracture the frangible feature160can be a relatively large load such as a load resulting from an impact event. A front passenger side foot of the cross-brace88can be similarly configured to include blind apertures that can encourage a fracture in a desired area of the brace assembly84when load is applied. Since the frangible feature fractures in response to the load L, the load L does not drive the cross-brace88and the various modules held by the brace assembly84rearward toward a passenger compartment of the vehicle60. Avoiding such movement can be desirable in some situations. The size and placement of the apertures164of the frangible feature160can be designed in such a way as to yield during the impact event while the remaining portions of the brace assembly84and modules continue to absorb load. In this example, the apertures164, which again are blind apertures, open to a vertically downward surface of the front driver side foot140. In another example, the apertures164could instead extend entirely through the front driver side foot140or could open to a top side of the driver side foot140. FIG.6shows the front driver side foot140after yielding and fracturing in response to the load L. Notably, the load L has fractured the front driver side foot140in the area of the apertures164. The frangible feature160can help to absorb energy while permitting movement of the cross-brace88relative to the driver side frame rail104. The apertures164of the exemplary embodiment can be machined into the foot140of the cross-brace88. In another example, the apertures164are formed when the foot140is cast. Casting the apertures164can eliminate extra machining operations, save time, and reduce costs. Features of the disclosed example include an efficient design solution that can facilitate a desired response to an applied load. The cross-brace can provide the support necessary for various modules while yielding in response to a load to inhibit relative movement of the modules and cross-brace toward a passenger compartment of the vehicle.
The preceding description is exemplary rather than limiting in nature. Variations and modifications to the disclosed examples may become apparent to those skilled in the art that do not necessarily depart from the essence of this disclosure. Thus, the scope of legal protection given to this disclosure can only be determined by studying the following claims. | 10,913 |
11858551 | DETAILED DESCRIPTION OF THE EMBODIMENTS The disclosed subject matter will become better understood through review of the following detailed description in conjunction with the figures. The detailed description and figures provide example embodiments of the invention described herein. Those skilled in the art will understand that the disclosed examples may be varied, modified, and altered without departing from the scope of the invention described herein. Throughout the following detailed description, various examples of the ground utility robot, or land care robot units and their configurations are provided. Related features in the examples may be identical, similar, or dissimilar in different examples. For the sake of brevity, related features will not be redundantly explained in each example. Instead, the use of related feature names will cue the reader that the feature with a related feature name may be similar to the related feature in an example explained previously. Features specific to a given example will be described in that particular example. The reader should understand that a given feature need not be the same or similar to the specific portrayal of a related feature or example. The invention will now be described in detail with reference to the attached drawings. As described above in the summary there is a need for a configurable frame body having a single panel, uni-member U-shaped bended frame that is modular, is multi-purpose, that is cost effective, has a reduced part count, that is used as a high strength chassis along with high strength parts and drive units, that features modular assembly and that is rapidly field-repairable. The present invention can be used for myriad types of devices and machines (e.g., human operated devices, autonomous devices, hybrid machines, that is, machines that are a combination of human operated and autonomous). The chassis can be used for a variety of purposes, including but not limited to land care robots. In the area of outdoor field robots, also known as land care robots, or LCRs, there are offerings that are beginning to enter the market, but they are typically large units that are extremely complicated to build and use. More importantly, they are excessively expensive to manufacture and thus purchase. These limitations curtail customer demand. The same is true for human operated machines and hybrid machines. The present invention relates to a configurable chassis for machines and more specifically to a configurable chassis for use with an LCR. This chassis can be used for a variety of other purposes and will be described in detail first. The main purpose of this chassis is to have a frame that is easily and efficiently manufactured, that is designed so that body parts and accessories can be easily added or removed, that has parts that are easily changed out and replaced, even in the field, and that is inexpensive to manufacture.FIGS.11,12and43show the basic frame and body parts and how they go together. All parts are secured one to another using fasteners that are off the shelf bolts, nuts and washers of various size. In its broadest embodiment there is a configurable frame body having a single panel, uni-member U-shaped bended frame200where the frame has a base290, a first side291, a first side upper plate251, a second side292, a second side upper plate251, a first end, a second end and a plurality of cutouts390positioned at predetermined locations in the U-shaped bended frame200. 
This is an extremely easy yet beautifully simple design. The chassis build starts with a clean, flat, rectangular piece of metal. This piece can be made from steel, stainless steel, aluminum, copper, chromium, titanium, or any other material that is strong and reliable. This clean sheet is first punched, drilled, laser cut, water cut, or any other method useable to place cutouts390about the surface at predetermined positions. This can be done using a cutter or punch and a programmed computer so that the cutouts390are consistent, exact and specific in positioning. After the cutouts are completed the sheet moves to a bending machine. This can be done manually but ideally is performed on a CNC machine or some other controlled device. Computer numerically controlled (CNC) bending is a manufacturing process that is carried out by CNC press brakes (also known as CNC brake presses). These machines can bend sheet metal work from just a few mm across to sections many meters long on the largest industrial machines. This machine bends the metal and forms the basic chassis design. This design has the base290, two sides291,292and a lip or upper plates251on each side and is clearly seen inFIGS.11and12. This upper plate251is to provide an attachment surface for other parts. Once complete, there is a configurable, single panel, uni-member U-shaped bended frame200, as inFIG.11. Next, the frame requires bracing and reinforcing. Bracing is accomplished by adding a number of cross-body connecting members270. The configurable frame body200has at least one cross-body connecting member270affixable at multiple locations about the U-shaped bended frame200at the plurality of cutouts390in the predetermined locations where the at least one cross-body connecting member270has a first end, a second end, and a length that is approximately the same length as a distance between the first side291and the second side292of the configurable frame body200. The assembly also requires fasteners252to connect the at least one cross-body connecting member270to the U-shaped bended frame200, and where the first end of the at least one cross-body connecting member270is affixed to the first side291and the second end of the at least one cross-body connecting member270is affixed to the second side292with the fasteners252. Again, this is extremely efficient in design as the same connecting member can be used at multiple locations around the U-shaped bended frame to provide support. In addition, it is preferable to use the same fastener252for all connections but it is also possible to use a limited number of sized fasteners to accomplish securement. This minimizes manufacturing expense, limits part counts and eases the building process. After the skeletal frame is complete, the body parts are added. The configurable frame body200further has matingly affixable body parts where the matingly affixable body parts are attachable to the configurable frame with the fasteners252at the cutouts390at the predetermined locations. By design, the cutouts390match the mating body parts and again, it is preferable to use the same or substantially similar fasteners252. This design makes it extremely easy to add body parts to the chassis, as is shown inFIGS.11,12and42 FIGS.1-6,15-36show a wide range of rotatable members mounts420and the rotatable members310,320. These rotatable members can be wheels, treads, tracks, balloon tires, aeration wheels, solid rubber wheels, or any other type of rotational member. 
As shown inFIGS.1-6the configurable frame body has matingly affixable body parts that are a pair of first end rotatable member mounts420affixable to the frame body200at the first end210on the first side291and the second side292, a pair of second end rotatable member mounts420affixable to the frame body at the second end on the first side291and the second side292; and where the rotatable member mounts420are affixable to the frame body with the fasteners252at the mating cutouts390at the predetermined locations. After the member mounts420are in place and secured to the U-shaped chassis200with the fasteners252the rotatable members310,320are added to the mounts420. The rotatable members310,320can be any number of a wide variety of members. As above, the type, style and design of the members are broad and varied and it is up to the user and the application to decide which is best suited for the task at hand. In order for the device to be moveable one or more of the rotatable members310,320is driven. Again, there is a wide range of opportunity with the current design. In one embodiment one set of rotatable members are freewheeling and are not powered or driven. In one embodiment these can be caster wheels, and again, there are a number of configurations possible, several of which are described herein below. They can be plate casters, stem casters, leveling casters, pneumatic or solid rubber casters, side mount casters, a seesaw caster design can also be used, as described below, or any other imaginable caster type. As shown inFIGS.1-6andFIG.23these are easily installed and removed from the chassis. The main idea behind this chassis configuration is to have an easily configurable chassis system that is simple to assemble and simple to add a mechanism or apparatus that provides power. In order to accomplish this other parts need to be added to the chassis. For the apparatus to be driven, at least one of the first end rotatable members or at least one of the second end rotatable members must be power-driven. The device would work by driving only one wheel, but it is preferred that at least one set of members is driven. In another embodiment both sets are driven so that the apparatus is now effectively a four-wheel drive device. In order to drive the rotatable members power must be added to the U-shaped chassis200. In one embodiment the LCR utilizes a differential drive. That is, it is an LCR whose movement is based on two separately driven wheels placed on either side of the robot body. It can thus change its direction by varying the relative rate of rotation of its wheels and hence does not require an additional steering motion. If both the wheels are driven in the same direction and speed, the LCR will go in a straight line. If both wheels are turned with equal speed in opposite directions then the robot will rotate about the central point of the axis, thus providing a zero-turn radius. Otherwise, depending on the speed of rotation and its direction, the center of rotation may fall anywhere on the line defined by the two contact points of the tires. While the LCR is traveling in a straight line, the center of rotation is an infinite distance from the robot. Since the direction of the robot is dependent on the rate and direction of rotation of the two driven wheels, these quantities are sensed and controlled precisely. 
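The differential-drive behaviour described above follows the standard two-wheel kinematics. The short sketch below is illustrative rather than part of the invention; the wheel radius and track width are assumed values used only to show how wheel speeds map to straight-line travel, zero-radius turns and intermediate turning radii.

```python
import math

def body_velocity(omega_right, omega_left, wheel_radius=0.20, track_width=0.60):
    """Return (forward speed, yaw rate) from the two driven wheel speeds in rad/s."""
    v_right = omega_right * wheel_radius
    v_left = omega_left * wheel_radius
    v = (v_right + v_left) / 2.0                  # same speed and direction -> straight line
    yaw_rate = (v_right - v_left) / track_width   # equal and opposite speeds -> spin in place (zero-turn)
    return v, yaw_rate

v, w = body_velocity(omega_right=5.0, omega_left=3.0)
turn_radius = math.inf if w == 0 else v / w       # centre of rotation lies on the line through the wheel contacts
```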
This differential steering LCR is similar to the differential gears used in automobiles in that both the wheels can have different rates of rotations, but unlike the differential gearing system, this differentially steered system powers both wheels. The vehicle improves on the two wheel (or tracked) differential drive by using casters on the opposite end, to reduce the energy required to turn. The casters can be powered (motors driving them) or un-powered and will self-orient to the direction of motion. Using casters on the far end of the vehicle reduces the load on each driven axle and improves the stability of the robot. Using spring mounted or cantilevered casters further improves the traction of the dual driven wheels since it conforms to the terrain and pushes the center of gravity towards the driven wheels (or tracks). This design provides motion that is easy to program and control, and the system itself is simple and relatively inexpensive. To add power to the U-shaped chassis several things must happen. First, a power source must be added to the chassis. The power source could be a fuel cell, a hydrogen fuel cell, a gas engine, a propane engine, a battery or any other power source but is preferably a rechargeable battery330. The battery330can be configured and/or manufactured to have tie downs to attach it to the chassis, but most likely will sit in a frame that will matingly fit to the chassis200where the frame will have holes or cutouts that matingly fit the cutouts390in the U-frame chassis200at the predetermined locations and then the battery330is secured at these locations using the fasteners252. Next, the rechargeable battery330must be capable of being recharged. In the most preferred embodiment the rechargeable battery330, shown inFIGS.1-3, is recharged from energy collected from at least one solar panel340. The at least one solar panel340is mounted on the chassis200and to do this there is at least one solar panel mount341. In some embodiments, these mounts341are placed and positioned at each corner of the chassis. The solar panel340can be connected directly to the solar panel mount341. Alternatively, a solar panel connector342(e.g., a pole, strut channel) is used where the riser has a first end that is connected to the solar panel mount341. The mounting pole342extends upwardly from the mount341and the solar panel340is then mounted to a second end of the mounting pole342. This system elevates the solar panel340up and away from the chassis. Next, as shown inFIGS.3,4,6,12,13and42, the system needs to provide the collected power to the rotatable members320. In a preferred embodiment the configurable frame body200further has at least one electric motor350affixable to the configurable frame body200, either directly to the frame body or with a mounting bracket of some sort, with the fasteners252at the mating cutouts390at the predetermined locations, at least one gearing mechanism360affixable to the configurable frame body200with the fasteners252at the mating cutouts390at the predetermined locations, where the at least one gearing mechanism360is connectable to the at least one set of power-driven rotatable members320, the power source drives the at least one electric motor350, the at least one electric motor350turns the at least one gearing mechanism360, and the gearing mechanism360transfers power to the at least one power-driven rotatable member320.
Here, the at least one electric motor350is affixed to the chassis body200using the fasteners252and as above, it is envisioned that a single type fastener mechanism can be used to secure all parts to the chassis however, it is also possible that a limited number of substantially similar sized and type of fasteners be used. There are many ways to provide the power to the rotatable members but in a preferred embodiment power is transferred using at least one chain361. The chain361can be metal, vinyl or some other material. It could also be a belt. As described above, the present invention is a configurable frame body, that can be used for many different types of apparatus, but here it is envisioned to be used as a ground utility robot100, having a uni-member bended frame200having a first end210, a second end220and a middle section240whereby the uni-member bended frame200is configured to easily connect and remove the following body parts: a first set of rotatable members310removably affixable at the first end; a second set of rotatable members320removably affixable at the second end; at least one battery330; at least one solar panel340; at least one electric motor350; at least one gearing mechanism360that is connectable to the at least one electric motor350and to at least one of the first and the second set of rotatable members310,320; at least one sensor370; and at least one computer system380. This is an extremely simple, yet beautiful and economical design created so that the land care robot, or LCR100, can be assembled quickly, efficiently, economically, and with a variety of off the shelf parts. This invention is also focused on the method of putting the parts together.FIGS.1-12are views of the uni-member bended frame200. In these Figs. the frame is shown after bending. This is a very economical and efficient way to create a simple yet versatile frame for the LCR100. Generally, the present invention is a method of creating a configurable, uni-member, U-shaped bended, frame chassis200and assembling a ground utility robot or land care robot (LCR)100using parts that can be removeably affixed by taking the steps of having a single sheet of material, forming holes390at predetermined locations in the single sheet of material, using the single sheet of material, to form a base290, a first side291, a second side292, a first top plate251and a second top plate251by, configuring the single sheet of material to create the base290, bending the material lengthwise and upwardly from the base290and forming the first side291, bending the material lengthwise and upwardly from the base290and forming the second side292, bending the material lengthwise at a top of the first side and forming the first top plate251, and bending the material lengthwise at a top of the second side292and forming the second top plate292, that is typically a mirror image of the first top plate291, affixing at least one cross-body member270in at least one location with fasteners252to the first side291and the second side292at the holes390at the predetermined locations, and using the fasteners252and the holes390at the predetermined locations for adding, removing or replacing the parts. More specifically, the chassis above is created as follows. The frame starts with a 4×8 foot sheet of metal. Next, the metal is cut with a laser, punch, water jet or other means, to create the variety of holes in the frame. 
These frame holes390are positioned at desired locations in the frame and are used for different purposes (e.g., attaching all other apparatus and body parts to the frame200). After the holes390are cut into the metal, the metal is bent in two or more locations to create the U-shaped frame200, as seen inFIGS.11-12. In the preferred embodiment, the single metal sheet is bent to create the frame and the frame is bent in the U-shape creating the base290and sides291,292at approximately 90 degrees to each other and then is bent 90 degrees at the top of each of the side walls to create a small top shelf251or top plate. These angles are not fixed at 90 degrees and can be altered, either greater than or less than 90 degrees, depending on the design desired. After the initial frame is cut and bent, the side walls291,292are secured one to another, for example, using at least one reinforcement member, or cross-body connecting member270. This member is typically an L-shaped member but is not limited by this shape. It could be uni-strut, or any other type of bracket as long as it provides stability to the chassis. These brackets, or cross-body connecting members270, are secured at the tops of the side walls291,292to provide stability to the walls and thus the entire frame200. They also provide additional attachment points for other body parts. The next steps, as shown inFIG.42, involve adding the body parts, specifically, affixing rotatable member mounts420at a first end210of the first side291and a first end210of the second side292, affixing the rotatable member mounts420at a second end220of the first side291and a second end220of the second side292, affixing rotatable members310,320to the rotatable member mounts420, affixing at least one power supply330to the frame, affixing at least one electric motor350to the frame200, affixing at least one gearing mechanism360to the frame200, using the fasteners252to affix the parts to the bended frame chassis200at the holes at the predetermined locations, connecting the at least one power supply330to the at least one electric motor350, connecting the at least one electric motor350to the at least one gearing mechanism360, connecting the at least one gearing mechanism360to at least one of the rotatable members320, and powering the at least one rotatable member320. The rotatable members can be any of a variety of rotatable members and are easily interchangeable and replaceable. For example, they can be solid rubber wheels, inflated wheels, studded wheels, aerating wheels with members or spikes for ground aeration, balloon wheels, large diameter or small diameter wheels or any other type of imaginable rotatable members. For example, they can be tracks such as those used on snowmobiles or on CATs and as shown inFIGS.27to29. These rotatable members are attached to the frame200using simple connection parts and the simple mounting mechanisms for the rotatable members. InFIG.1andFIG.42the LCR100has a pair of large diameter rear wheels attached to the first end210of the frame200. In certain embodiments, these two or more wheels are attached to each side and at the first end. In one embodiment an axle412is used to secure the wheels to the LCR. To change from two rear wheels to four or more rear wheels, the axle412is removed and a longer axle inserted to accommodate the additional wheels. Likewise, to have two wheels rather than four or more wheels, the longer axle is replaced with the shorter axle.
Replacing the axle412itself is a simple process and can be done in the field. The axle diameter and strength are designed and configured for use with four or more wheels and the same axle diameter is used for both the two-wheeled version and the four-or-more wheeled version to minimize the complexity of the LCR. It is entirely possible for the LCR100to have large diameter wheels on both the first end210and the second end220, but in the shown embodiment the front end210first set of rotatable members310are caster wheels410, as can be seen inFIGS.1,2,3,4,5,6,15,18and19. In a first configuration, the casters410are affixed to the mounting members400that can have a built-in suspension system420.FIG.23is a view from the front showing how the casters look when mounted to the chassis using the side mounted rotatable member mounting member brackets. The mounting member400itself is extremely simple and is affixable to the frame200using a limited number of fasteners252, which can be bolts, inserted through the frame holes390. Because the wheels and casters are identical it doesn't matter if the mounting member400is affixed to the right side or left side of the frame200. Again, the parts are high strength and are easily replaced, or repaired in the field. Another configuration usable with the present disclosure relates to an assembly for caster wheels410on each side of the LCR and that allows pivoting at a midpoint between the casters410, perpendicular to their alignment with the robot or vehicle. Included are two variations, one which mounts directly to the chassis of the LCR or vehicle, the other which is mounted to a hitch via a hitch adapter. These two configurations are the basis for Caster SeeSaws as shown inFIGS.15-36, and43-44. Aspects of the Caster SeeSaw are presented for use on a robot or LCR or other vehicle in various locations for the purpose of maintaining traction on uneven surfaces. This SeeSaw embodiment is shown in an exploded view atFIG.43. Rotation about a central axis414allows for each caster410to maintain full contact with the ground surface while the robot or vehicle navigates the environment. The casters410are mounted at opposite ends of a caster beam411. The casters410in this embodiment are passive wheels; they are not driven or powered by the machine in any way. FIGS.15-36and43show a variety of Caster SeeSaw embodiments.FIG.15specifically shows an elevated perspective view of the chassis-mounted Caster Seesaw andFIG.43is an exploded view that clearly shows all the parts of this embodiment. A second embodiment is generally the same configuration as the chassis-mounted version but uses a different apparatus to connect to the chassis, as can be seen inFIG.22. Reference is now made toFIG.43. This first embodiment has a caster beam411, or mounting member on which the Caster Seesaw is built, having a top, a bottom, a first end and a second end. Casters410are mounted on the bottom of the caster beam411using mounting holes415where fasteners are used to connect the casters to the mounting member411at the first end and the second end. A hitch630, in this case a receiver for a standard shank/receiver type hitch, is mounted at a midpoint on the top of the caster beam411and is designed to allow accessories to be connected to the front of the LCR. An axle414is inserted into and mounted to at least one, and ideally two or more, bearing brackets416at the bottom of the caster beam411. Bearings are inside the bearing brackets416and allow for the seesaw to easily pivot about the axle414.
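A rough way to see why the pivoting beam keeps both casters loaded on uneven ground (stated here as an illustrative approximation, not a dimension from the patent): if the two casters sit a distance L apart on the beam and the ground height beneath them differs by Δh, the beam only has to rotate about its central axle by an angle θ = arcsin(Δh / L) for both casters to stay in contact. For an assumed caster spacing of 0.6 m and a 5 cm height difference, that is only about 4.8 degrees of rotation, so neither caster lifts off and the driven wheels at the opposite end keep their share of the load.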
The axle414is mounted within and secured to the bearings and bearing brackets416with shaft collars641that are at the front and back of the axle414. The rear of the axle414is attached to the LCR using at least one more bearing bracket416and the other shaft collar641to secure the entire part set together. The exploded view inFIG.43allows for clarity of the components involved with the chassis-mounted configuration. The hitch-mounted configuration shows the axle412, mounted to a vertical hitch adapter, as shown inFIGS.22and35. As above, the caster beam411, on which all components are mounted, has mounting holes or cutouts415for the bearing mounting brackets, a hitch, and for either the flat-plate casters or the stem casters410. The receiver hitch provides a way to attach a variety of implements to be towed by the robot or vehicle. The two plate-mounted bearings416allow the bearing assembly to rotate about the axle412. The axle412, on which the assembly rotates, is mounted either directly to the robot or vehicle chassis200or to a hitch adapter. The two casters410used in the assembly can be mounted to the bottom of the caster beam411or they can be the stem type that uses the holes415in the caster beam411to secure them to the caster beam411. Mounted at each end of the caster beam411, the casters410allow for two points of contact with the surface and passively move with the robot or vehicle. The shaft collar641is used to maintain the position of the caster beam411and all other components in relation to the axle412. The following figures show different aspects of this unique SeeSaw Caster configuration.FIG.16is a view from the front of the LCR looking into the axle412, showing the hitch attachment mechanism and the casters410. FIG.17is a side view of the chassis-mounted Caster Seesaw. On the left between the caster and mounting plate is the axle which is inserted into the bearings mounted in the chassis of the robot or vehicle. FIG.18is a top view of the chassis-mounted Caster Seesaw. Visible are the hole patterns, or caster beam cutouts417in the caster beam411that allow for the attachment of either the flat-plate casters or stem casters. FIG.19is an opposite view showing the chassis-mounted caster seesaw from the bottom. FIG.20is another front view of the caster mounted beam configuration. FIG.21is another side view of the hitch-mounted caster seesaw. To the left is the axle that is inserted into the seesaw bearing bracket416and then to the frame of the robot or vehicle. FIG.22is another side view of the hitch-mounted caster seesaw. To the left, and mounted to the axle, is the vertical hitch adapter that is inserted into the hitch mounted to the frame of the robot or vehicle. FIG.24is a side view of the chassis-mounted Caster Seesaw implemented as an attachment to the LCR. FIG.25is a top view of the hitch-mounted Caster Seesaw implemented as an attachment to the LCR looking down from above and clearly showing the solar panels. FIG.26is a view of the chassis-mounted Caster Seesaw implemented as an attachment to the LCR, viewed from the bottom looking up. FIGS.27-29are views of the LCR from above when it has the caster seesaw design for the front rotatable members and tracks utilized as the rotatable members at the rear. The seesaw caster assembly is also not confined to one configuration, as can be seen inFIGS.30-31. Here the seesaw assembly has two wheels as the passive rotatable members310at each side of the seesaw beam411.
Also, the driven or powered wheels are in no way limited by the type of rotatable member employed, or by the number of rotatable members employed. For example, inFIGS.32and33,FIG.32shows large traction tires used as the rear, power-driven rotatable members.FIG.33also shows large traction tires but in this configuration there are two tires on each side of the LCR. As described above, this is accomplished by using a longer axle412that is easily added to the LCR. As noted above, the LCR is ideally powered by a rechargeable battery330that is typically recharged using the at least one onboard solar panel340. This at least one power supply330is in the preferred embodiment at least one rechargeable battery. After the rotatable members are affixed, the at least one battery330is secured to the frame200. The system is not limited to one battery330but could be any number of batteries and/or type(s) as long as they fit on the machine or are connectable to the machine. For example, it is possible to stack batteries one on top of the other. It is possible to mount them on top of a chassis or frame cover or under the panel; they can be pulled behind or pushed in front of the machine, for example, using an off the shelf cart or anywhere else imaginable, all interconnected using simple wire conduits to connect the packs one to the other. It is also noted that the type of battery is not limited to the existing technology. As battery technology advances, the type, number and configuration will change and the system therefore is not limited to batteries currently available. As shown inFIGS.3and14, these batteries330are placed on the base and are held side to side by the L-bracket cross-body connecting members270and are tied down to the base290by strapping. The batteries in the present configuration are simple lead-acid batteries but they could be lithium-ion or any other imaginable type of battery or storage device as noted above, including fuel cells. Next, as shown inFIGS.1,2,10, and15-38, the configurable LCR100has the at least one solar panel340and at least one solar panel mounting member341. The LCR100is fully capable of running on solar power alone so ideally all of the LCRs are equipped with solar panels340that can be directly connected at or slightly above the LCR frame. In a first embodiment the panels are one-sided and can provide power to the LCR. Alternatively, the panels can be elevated so that double-sided panels may be utilized. In order to elevate the panels the chassis200has a series of solar panel mounting receiver members341that are configured and shaped to match a solar panel mounting pole342. These are generally circular shaped receivers that are located at each corner of the LCR frame290but can be any shape or design, as long as the solar panel mounting poles342matingly fit within the receiver. Then solar panel mounting poles342are used to elevate the solar panels above the main frame. The solar panel mounting poles342are typically cylindrical, but could be any shape, and have a first end that is matingly fitted into the circular shaped panel mounting members341at the base corners. The poles342extend upwardly and have a second end that is connected to the solar panel mounting frame343, as shown inFIG.37.FIGS.4and37through41show the solar panel mounting members341at each corner of the chassis200. These are easily affixed to the chassis200using the fasteners252and the frame holes390.
The solar panel mounting pole342is used in some configurations to elevate the solar panel340above the main chassis200. When used, one end of the solar panel mounting pole342is inserted into the mounting member341and is affixed thereto. A second end of the at least one solar panel mounting pole342is then connected to the at least one solar panel so that the panel is elevated above the chassis. The distance between the panel and the chassis can be fixed or it can be variable. It is possible to have the panel340be directly connected to the chassis so that there is no space, or, as just described, it is possible to have connectors, extenders, and other mounting mechanisms to raise or elevate the solar panel above the chassis at a predetermined distance. Once in place the solar panel340is operatively, or electrically, connected to the at least one power supply330, providing power from the at least one solar panel340to charge and recharge the at least one power supply330. The frames are made from Unistrut or channel or any other framing material, including bamboo or other renewable material. The mounting poles342, which also can be made from a variety of environmentally friendly materials such as recycled metals or even bamboo, are connected to the Unistrut frame that forms an upper mount for the panels at the second end of the mounting poles342. This configuration raises the solar panels above the LCR100and thus provides space between the bottom of the solar panel340and the LCR100. This space allows for the use of bifacial solar panels. Bifacial solar panels produce power from light that hits both sides of the panel. Using dual-sided solar cells provides more surface area to absorb sunlight, and therefore, higher efficiency in the same form factor. This system also creates a variety of solar panel mounting orientation variants. For example, one panel can be mounted lengthwise. Or, a wider mounting system can be created and additional panels added side by side. In fact, the system has been tested using three panels configured side by side. In certain embodiments the panels can be mounted widthwise. In certain embodiments the panels can be moved (e.g., tilted, stacked). The more panels the more power, and so this opens up the system to longer working hours and also provides the ability to energize more power-hungry machines. Another problem with having the solar panels directly mounted to the LCR, or even slightly above the LCR, is that it restricts access to the internal parts, such as the battery, computer system, and connectors. This problem is solved by adding at least one hinge344to the panel340.FIG.37shows the solar panel of the present invention but instead of having the panel directly connected to the chassis there is another frame built below the panel that then has the at least one hinge344. This frame reinforces the solar panel so that it can be freely moved, elevated or pivoted without damaging the solar panel. The at least one hinge344is connected to the frame. This hinge can be any type of hinge as long as it provides a pivot for the solar panel. InFIG.37there are two hinges, one at each corner of the panel. In this configuration the panel can be tilted up and out of the way, exposing the body parts underneath. This tiltability provides a variety of benefits. First, as noted and shown inFIG.38, it allows easy access to the internal workings of the LCR as now it can simply be tilted up rather than having to remove the entire panel.
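As a rough energy budget for the solar charging arrangement described above (all figures here are assumptions for illustration, not specifications from the patent), a panel's daily harvest is approximately its rated wattage multiplied by the site's equivalent full-sun hours, and the extra runtime it buys is that harvest divided by the machine's average electrical load:

```python
def added_runtime_hours(panel_watts, panel_count, sun_hours, system_efficiency, avg_load_watts):
    """Rough daily energy budget for a solar-charged battery system (illustrative only).

    panel_watts: rated output of one panel in watts (assumed)
    panel_count: number of panels mounted above the frame (assumed)
    sun_hours: equivalent full-sun hours per day at the site (assumed)
    system_efficiency: combined charging, wiring, and conversion losses, 0..1 (assumed)
    avg_load_watts: average draw of motors, computer, and sensors in watts (assumed)
    """
    daily_harvest_wh = panel_watts * panel_count * sun_hours * system_efficiency
    return daily_harvest_wh / avg_load_watts

# Three 300 W panels, 4.5 sun hours, 80% system efficiency, 400 W average load
# -> roughly 8 additional hours of operation per day from solar alone.
print(added_runtime_hours(300, 3, 4.5, 0.8, 400))
```

This is why adding panels, or using bifacial panels that also harvest reflected light on their underside, directly extends working hours.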
Next, it is possible to use the tilting mechanism described above to enable greater solar production. For example, the panel could be tilted upwards in the morning and late afternoon and the LCR programmed to work with the panel facing East or West in order to collect more light and it could be programmed to lie flat during mid-day hours. This change in angle could be performed manually or it could be programmed into the system and could utilize electronic mechanisms to pivot the panels. Finally, this pivot can be used as a dumping mechanism, as described below. As noted earlier, this LCR has the ability to pull and push cargo carrying apparatus but in its basic, standard form, there are no places/ways or limited places/ways to store or carry cargo on or about the LCR itself. Another goal of the present disclosure is to overcome these and other limitations. Another unique invention related to the LCR of the present invention is its potential use as a cargo carrying apparatus, shown inFIGS.39-41. A first embodiment starts with a frame that is constructed either on or near the perimeter of the solar panel or panels. This frame can be either horizontally or vertically adjacent to the solar panel or panels. In one embodiment it is a U-channel frame that is constructed either around, on or near the perimeter of the solar panel or panels, as shown inFIG.37. This Fig. shows a base framing along with a pair of hinges344at two corners. InFIG.38at two ends of the solar panel the channels are used to sandwich the solar panel in order to create a frame upon which another platform can be built. This framing can be extremely substantial so that a platform can be built directly upon the framing and cargo can be carried thereon. In this embodiment the hinges344allow the platform to tilt to allow dumping of cargo from the platform or whatever cargo carrying apparatus is employed. Again, this dumping action can be performed manually or mechanically. This tilting ability also allows users easy access to the internal components of the LCR without having to entirely remove the solar panel. In a second embodiment, shown inFIGS.39-41, a second frame, similar to the one shown inFIG.38, is built and positioned above, around or sandwiching the solar panel frame. In this configuration a platform can be placed on this bottom framing or the area can be left open so that the panel can continue to collect unimpeded light. In this embodiment this frame is either level with or slightly above the solar panel frame and further is designed to support and erect perpendicular objects, pillars, or risers642at specified locations around the perimeter, as is shown inFIG.39. In this second embodiment risers642are either permanently or removably affixed, at various locations around the perimeter, and extend upwardly either directly from the solar panel frame or upwardly from a frame constructed next to, outside of, or directly on top of the perimeter of the solar panel frame, as is shown inFIG.39. These risers642can be made from uni-strut or any other material that can provide the required support for a top frame and a platform. They can also be actuators, spring or power pistons or any other apparatus to assist in pushing up the upper frame and platform. Mounted to the tops of these risers642is the second frame, as seen inFIG.39, that is then used as the base for an elevated platform that can carry a wide variety of carrying apparatus, including but not limited to the platform alone, a basin or bowl or any other cargo carrying apparatus.
This second frame is ideally located slightly above the solar panel340to allow at least some light to access the solar panel340to continue generating electricity. This second frame is similar in design to a car roof rack that holds skis or bicycles. This second frame, when affixed to the pillars642and in place, ideally does not interfere with the general operation and light collection of the solar panels. It is designed so that it does not cover or obstruct the solar panel or panels and so that these solar panels can still collect as much sun light as possible without interference caused by the frame. In a preferred embodiment the first frame is constructed around the outside of the solar panel frame, the pillars642extend upwardly therefrom, and the second frame is constructed at the tops of the pillars642at the varying locations around the perimeter, as seen inFIG.39. In this way the solar panels remain exposed and open to unimpededly receive sunlight. In either case it is envisioned that the rack is made from metal channel or bars and that these bars or channels still allow for sunlight to contact the solar panels with little interruption. The Figs. all show the second frame alone, without the platform attached. When in use, this second frame, either in the first embodiment that is adjacent to or just slightly above the solar panel frame, or the second embodiment that has some separation between the solar panel frame and the second frame, is used as a load bearing frame for attachments thereto. Obviously, when the platform is in use the solar panels will be covered, at least partially, and will not be as effective as when uncovered. However, it is envisioned that attachments, such as panels, platforms, etc. can be easily removed in order to free up the solar panels so that they may generate power as desired. This top frame can be seen inFIG.39. FIG.40shows one configuration where the solar panel and frame configuration has a number of hinges. One set allows the second frame to pivot and another set allows both the solar panel and the top frame to pivot. Thus, the attachments can either be removed entirely, or, they can be pivoted up and out of the way. There are a number of ways to use this first and second frame configuration. First, a platform can be built or placed directly on top of the second, elevated frame. This platform can be made from a variety of products. It can be plywood, 2×4s, metal, plastic, or any other material that can form a platform base. The frames and/or this platform can extend end to end and side to side of the existing solar panel but also can be any desired configuration and in any length. That is, it is not restricted to the width or length of the solar panels or the LCR but in fact can be longer or wider than the panel itself or longer and wider than the LCR itself. It is preferable that it not be shorter than the actual solar panels as it is designed to protect the panels from whatever cargo is loaded onto the platform, and also provides weatherproofing for the LCR's internal working components. However, if a platform is constructed on the first frame then it can be designed to cover and protect the solar panel and in this embodiment the second platform can be of any size or shape as the solar panel is protected via the first frame platform. In any embodiment, the platform or platforms then can be used to carry cargo directly thereon. 
Additionally, the platform can be used as a base for other cargo carrying apparatus, such as a large bucket or buckets, boxes, containers, bicycle racks, ski racks, yard equipment racks, or any other rack or container used to carry cargo. There are an unlimited number of cargo carrying apparatus that could be affixed or carried by one or even two platforms. In some instances, it is beneficial to raise the second rack and platform up and out of the way in order to allow more sunlight access to the solar panels or to easily dump the cargo from the platform or from attached cargo carrying members. In one embodiment this is accomplished by having the platform tilt, or pivot upwardly so as to allow more sunlight access to the solar panels, or to cause cargo to slide off the platform using gravitational assist, as shown inFIG.41. This is accomplished by having at least one pivoting edge or side. In one embodiment the panel is hinged on one side in one or more locations to allow, for example (a) for easy access to the area below the panel; and (b) to tilt the panel to increase the solar efficiency depending upon the location of the sun, as shown inFIG.40. This allows the operator to get at the internal parts of the robot. It also allows the solar panel to be adjusted so that it can collect more sunlight in differing positions. The hinged side, or pivot side, can use a variety of different methods to create this tilting or pivoting edge. It could be something like a piano hinge, a door hinge, a living hinge, a pivot member, a butt hinge, spring hinge, or basically any type of pivot or hinge that will allow the member to pivot or rotate. This is also extremely useful for having the platform tilt so that it can dump whatever it is carrying. This effectively turns the robot into a mini-dump truck and is shown inFIGS.39and40. Also as shown inFIGS.39,40and41, there can be two sets of hinges344. InFIG.39, as inFIG.38, there are hinges344connected to the solar panel itself that allows the panel to pivot in order to access the internal parts of the LCR beneath the panel. In addition, there are hinges344located on the second frame above the solar panel. This set allows the second frame to tilt independent of the solar panel itself. To assist with this tilt lift actuators620are employed. At the top of these lift actuators are strut balls that are inserted into the strut channel and that allow the strut to slide as the actuators are extended and the frame is lifted and elevated. This lifting system creates the dumping ability, as described previously.FIG.39shows the system with dual hinges in a closed position.FIG.40shows the dual hinges with the first, lower set of hinges pivoting the solar panel and the second frame up to allow access to the LCR internal components, andFIG.41shows the dual hinges but with only the second frame pivoting and lifting to perform the dumping function. There are also a number of ways to move or tilt the panel. In a very inexpensive and simple version the panel can simply be moved manually. A user would manually pull the panel upwards in order to access the internal components of the robot or to dump cargo from the platform. In another embodiment the system uses the lift actuator620such as a piston or spring to assist in lifting the panel. This embodiment requires a combination of human assistance and mechanical assistance. There is at least one piston or spring but there can be more than one and at more than one side. 
Multiple lift actuators620such as pistons or springs would make it easier for the platform to pivot as they could provide more assistance and more lift to the platform when being pivoted. In a more sophisticated embodiment the system includes powered or motorized pistons or actuators that use motors to automatically pivot, tilt or dump the platform. This system is preferred as it does not require human lifting assistance. The motors are powered by the onboard battery and solar power system. This system can be controlled by the user with push buttons on the robot, or remotely from an app on a phone, for example, or it could be programmed to operate at specific times. If the robot is used to carry dirt from one location to another, it can be programmed to automatically dump the dirt at the desired location after arrival. Obviously, there are a wide variety of applications that could be programmed into the system. In the above-described embodiment the platform system only tilts in one direction. This is limiting in actual use and therefore the system can also have multiple hinged sides. Here there are hinges on more than one side, thus providing a way for the panel to tilt or lift in multiple directions. When configured optimally, this allows the platform to tilt in an unlimited number of directions. This is accomplished with a variety of side hinges. When configured properly the sides can each pivot. This allows for dumping cargo in any number of directions, such as front, back, side and other side. It is also possible to have this same system incorporated into the solar panel framing so that the solar panel or panels can be moved in a wide variety of directions. In this way, the solar panels could actually track the sun when in use, thus collecting more light throughout the day. This multi-pivot system can be accomplished by having a central pivot point, or something like a ball joint, that allows the panel to pivot, rotate, spin, etc. in unlimited directions. This ball joint or pivot member is placed on top of the solar panel by using a series of supports for the ball and then the ball centrally located at a main axis point. In this configuration the panel could dump in any direction as long as the system is mounted far enough above the panel to allow for enough tilt to create a gravitational dump. The cargo carrying platform would also need to be slightly larger in area than the solar panel in order to allow the dump and not have material fall on the solar panel below. Because the solar panels are, by their nature, somewhat delicate, it is important to protect them from damage and harm. Obviously, when carrying cargo items on top of the solar panels it is preferable to have some sort of protection for the panels. In one alternative embodiment the solar panels are protected by placing additional bracing, struts, plywood, framing or other materials over and possibly around the outside of the solar panels. This is particularly useful and beneficial if the LCR is shipped or moved to another location. In one embodiment these panels could be taken off and actually used as a shipping crate for the LCR. Once it arrives at its destination the panels are pulled off and used on top as the platform, or even as a trailer. In some embodiments the construction around the panel is extended vertically downward by placing vertical struts and/or plywood to create a crate, such as a full crate, or skeletal crate. This crate acts as a shipping crate or packaging crate. 
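The scheduled panel tilting described earlier—facing the panel East in the morning, West in the late afternoon, and laying it flat at mid-day—reduces to a simple time-based rule on the onboard computer. The sketch below is hypothetical: the cutoff hours and tilt angles are assumptions, and the returned command is assumed to be consumed by whatever controller drives the motorized actuators; it is not an implementation from the patent.

```python
def panel_tilt_command(hour_of_day):
    """Return an illustrative (direction, tilt_degrees) command for the panel actuators.

    hour_of_day: local solar time, 0-23. The angles and cutoff hours below are
    assumptions chosen only to illustrate a morning-east / evening-west / mid-day-flat schedule.
    """
    if 6 <= hour_of_day < 10:
        return ("east", 30)   # morning: tilt toward the rising sun
    if 15 <= hour_of_day < 19:
        return ("west", 30)   # late afternoon: tilt toward the setting sun
    return ("flat", 0)        # mid-day and night: lie flat over the chassis

# At 08:00 the hypothetical actuator controller would be told to tilt 30 degrees east.
print(panel_tilt_command(8))
```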
In some embodiments a receptacle, or cargo box is created above the strut channel frame to store and/or haul cargo. Cargo is anything, such as logs, sand, gravel, rocks, debris, yard waste, compost, dirt or snow, for example, if used for land care. Obviously if not used for land care the cargo could be virtually anything. The cargo box is made from plywood or other material. It can be rails, or bars, or any material that can form a storage unit. As described above, it is possible to have the platform tilt in order to dump its contents, so if the cargo box is on top of the platform it is preferable in some instances to have a swing end or swing side that allows for easily dumping contents from the platform. This is desirable but not required. As the unit could potentially carry very heavy cargo, it is important that the strut base is assembled from strut channel of sufficient strength to withstand the potential carrying weights envisioned. The cargo box, and the platform, can be configured in a wide variety of designs. For example, it could have slatted sides, wooden sides, metal sides or plastic sides. These side panels and platform can be slatted or solid. There can be a lining used to secure cargo within the cargo box. For example, the liner could be plywood or other simple building materials. However, if it is desired that the cargo be a fluid or liquid, then it is also possible to use a tarp or plastic liner so that certain types of cargo, such as sand, gravel, water, or other fluids, may be held within the cargo box without loss. It is also important that the platform be securely attached to the strut channel. This also can be accomplished in a number of ways. One method includes using clamps, such as those that are commonly used to secure solar panels to roofs. As there are numerous ways to secure to strut channel, it is to be understood that this invention is not limited by one type of application. The LCR100uses electric power generated from the solar panels to power the LCR. The LCR100has a unique and yet simple system for installing and maintaining this system where the system includes at its most basic configuration the electric motor350, the at least one gearing mechanism360and the at least one chain361. When assembled, the electric motor350as shown inFIGS.7-9and12through14is connected to a chain reduction or gearing mechanism360. This gearing mechanism360has a first face365that is secured to the uni-member bended frame. Once in place the at least one chain361is connected between the gearing mechanism360and a rotatable member gear.FIGS.7-9show a first chain reduction system, usable with the LCR.FIG.7shows the system connected to the electric motor, ready for installation.FIG.8is an exploded view of this same chain reduction system andFIG.9shows a top view of the same chain reduction system when in place in the LCR.FIGS.7and8also show a chain tensioning system.FIGS.12-14show a second, preferred chain reduction embodiment.FIG.12shows this second chain reduction embodiment installed within the LCR chassis.FIG.13is a view from the top of this second chain reduction embodiment, andFIG.14is an exploded view of this preferred system. In this preferred system the vehicle uses a four-stage chain reduction that reduces the RPMs at a motor shaft368that is typically 4000+ RPM to the desired rotation at a driven axle.
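The arithmetic behind a multi-stage reduction of this kind is simple: the overall ratio is the product of the per-stage sprocket ratios, the axle speed is the motor speed divided by that product, and the available torque is multiplied by roughly the same factor less chain losses. The stage ratios, motor torque, and efficiency used below are assumptions chosen only to illustrate the calculation; the patent does not specify them.

```python
def chain_reduction(motor_rpm, motor_torque_nm, stage_ratios, efficiency_per_stage=0.97):
    """Illustrative multi-stage chain reduction arithmetic (assumed values, not patent data).

    stage_ratios: driven-sprocket teeth divided by driving-sprocket teeth for each stage.
    Returns (axle_rpm, axle_torque_nm).
    """
    overall_ratio = 1.0
    overall_efficiency = 1.0
    for ratio in stage_ratios:
        overall_ratio *= ratio
        overall_efficiency *= efficiency_per_stage
    axle_rpm = motor_rpm / overall_ratio
    axle_torque_nm = motor_torque_nm * overall_ratio * overall_efficiency
    return axle_rpm, axle_torque_nm

# A 4000 RPM motor producing an assumed 3 N*m through four assumed 3:1 stages (81:1 overall)
# -> roughly 49 RPM at the axle and on the order of 200 N*m of torque.
print(chain_reduction(4000, 3.0, [3, 3, 3, 3]))
```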
In this embodiment it is entirely possible to use off-the-shelf, robust but low-cost bearing blocks and pillow blocks for bearings to hold the axles of each stage, with spherical bearing joints to tolerate misalignment. This design provides an extremely high torque drive train. This design and configuration allow the vehicle chassis to flex without negatively affecting this high torque drive train. This embodiment features a composite chain sprocket that is composed of a large sprocket welded to a smaller sprocket that then provides a compact design and the ability to get a large reduction ratio on the same shaft. These composite sprocket components can either be welded together or connected with bolts. On the last stage of the reduction, it is preferable to use a double chain (ANSI #50-2, vs ANSI #50-1 in earlier stages) so that thousands of ft·lb of torque can be applied. As with the other parts of the chassis, the chain reduction system is designed in such a way that it is field repairable. The motor can easily be removed and replaced, all shafts can be removed and replaced, and all the bearing blocks can also be easily replaced. As explained above, once in place the at least one chain361is connected between the gearing mechanism360and the rotatable member gear connected to the driven shaft, connected to the wheels or tracks that enable the robot to move. The reductions have an output shaft, with a sprocket, connected with a chain, to another sprocket, which is attached to the driven shaft and/or axle. The ability to quickly detach the chain, remove and replace the reductions, and change the driven shaft and axle quickly is valuable. In addition, because the system uses chains throughout, it is easy to repair and service. In either configuration, with use and over time the chain eventually begins to stretch. This is a known problem called chain elongation and the current invention provides an easy solution.FIGS.7and8show an elevated perspective view and an exploded view of the electric motor350and the first embodiment of the chain reduction assembly360. The at least one chain361is connected to the system over the teeth and pulls during use away from the tensioning bolt362. As the chain elongates it becomes important to tighten the chain so that it continues to perform properly. This is accomplished in the present invention by simply rotating the nut on the end of the tensioning bolt362. The bolt is connected to the reduction assembly360at a cross member at a lower side of the reduction assembly and as the nut is tightened it pulls the entire gearing assembly away from the chain direction, thus tightening the chain or chains. On the side of the frame there is also a tensioning slot363as shown inFIGS.7and8that assists the bolt362in keeping the system tight and aligned. In this embodiment the tensioning system described above is entirely manual. That is, a user must periodically check the system to make sure that the chain is tight and that there is no slack present and then if there is slack the user must manually tighten the nut on the bolt362in order to tighten the chain or chains. Alternatively, it is possible to have an entirely computer-maintained system whereby a sensor is installed to monitor the chain and as it elongates and becomes loose the sensor will send a warning to the onboard computer.
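Such a computer-maintained arrangement—a tension sensor that warns the onboard computer, which then drives an electric tightening apparatus—amounts to a small monitoring loop. The sketch below is hypothetical: the sensor-reading and tensioner-stepping callables are placeholders, and the threshold and step limits are assumptions rather than values from the patent.

```python
def maintain_chain_tension(read_tension_newtons, step_tensioner,
                           min_tension=150.0, max_steps=20):
    """Tighten the chain until the sensor reads above a minimum tension (illustrative only).

    read_tension_newtons: callable returning the current tension sensor reading in newtons (placeholder)
    step_tensioner: callable that advances the electric tensioner by one small increment (placeholder)
    Returns the number of tightening steps applied, or raises if the chain cannot be tightened.
    """
    steps = 0
    while read_tension_newtons() < min_tension and steps < max_steps:
        step_tensioner()  # one small advance of the tensioning mechanism
        steps += 1
    if read_tension_newtons() < min_tension:
        raise RuntimeError("chain still loose after maximum adjustment; flag for manual service")
    return steps
```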
When slack is detected, the computer sends a command to an electronic tightening apparatus that basically performs the identical function as if done manually, but it is accomplished automatically and electronically. Tension sensors convey data to the computer in order to properly set the chain tension. A second embodiment of the chain reduction system is shown inFIGS.12,13and14. This is an entirely new chain reduction system. This system also can have issues with chain elongation but in this embodiment there is a tensioning gear that keeps the chain taut. This system as a whole is safer than using a standard farm machine or tractor because it removes the human from many common tasks. First, it does not require human interaction to perform tasks such as mowing, plowing, shoveling, tilling and many more. Next, the system is charged via solar panels and onboard batteries, thereby removing additional human interaction. Finally, the system is equipped with other safety measures, including emergency stops, or E-Stops500.FIG.1shows an E-Stop500located at the front upper right-hand corner affixed to the solar panel mounting rack. This is a basic stop that is push activated, typically by a human user, to immediately stop certain actions or all actions on the LCR100(for example, by cutting all power to the motors and/or by applying a brake). This provides a means for a human user who is monitoring the system to immediately stop the unit from certain actions or all actions. However, there may not always be a human user or operator available, so it is also possible to place E-Stops500at a variety of locations around the LCR100. For example, there could be a series of E-Stops500placed across the front of the LCR so that if the LCR does not stop when it is supposed to, and it accidentally continues forward motion, then when the E-Stop500contacts an obstacle the obstacle will trigger the E-Stop500and the LCR will stop in reaction to the trigger. Likewise, the E-Stops500can be placed across the sides and/or back of the LCR or anywhere on the LCR where there is the possibility of colliding with obstacles. The ability for any one or all of the stops to work is possible because the E-Stops are daisy-chain connected so that when any one of the E-Stops500is triggered the LCR100will cease to power motion. The LCR100is also created for and designed to pull or push any number of attachment apparatuses or accessories. These accessories could be snowplows, snow blowers, shovels or blades, rakes, mowers, trimmers, sprayers, spreaders, feeders, discs, chain harrows, or any other apparatus or accessory that can be affixed to the LCR via the hitching mechanisms. The LCR is also equipped to use receivers, hitches and other connectors (e.g., a standard receiver/shank type receiver and/or a 3-point hitch system). These apparatuses can either be connected to the LCR manually whereby the user physically connects (and disconnects) the apparatus to the LCR via either the receiver/shank system or the 3-point hitch, or the LCR can be equipped with Smart-Connect whereby the LCR100itself, using the onboard computer, can automatically engage and connect (and disconnect) an accessory to the LCR100. Any of the accessories can be configured to be either manually attached or electronically attached. Also, the accessories, when powered, can either get power mechanically or electrically.
If electrically it is preferred that they use power from the onboard battery330but it is also possible to have their own battery or power supply built into or on the accessory. In addition to being able to pull or push these accessories it is also possible as just noted to power them using the LCR100onboard batteries. Many accessories, such as snow blowers or tills or mowers, require power to operate. There are a variety of ways to connect the LCR to these accessories so that the accessories can use the LCR's power rather than use fossil fuel. In addition to having the ability to connect apparatus accessories to both the front and back of the LCR100, the LCR100is also equipped and configured to connect a standard, off the shelf front loader610apparatus, as shown inFIG.5. This front loader610is easily connectable to the LCR100and again can use mostly off the shelf parts, including the loader apparatus and the gas strut. The front loader610also requires power to lift and lower the shovel or front loader attachment and again it is possible to power the front loader610entirely by using power from the LCR100. Although the invention has been described with reference to the preferred embodiments illustrated in the attached drawing figures it is noted that equivalents may be employed and substitutions made herein without departing from the scope of the invention as recited in the claims. All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms. The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. 
“one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited. In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure. Having thus described the various embodiments of the invention, what is claimed as new and desired to be protected by letters patent includes the following. | 65,231 |
11858552 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS The present invention will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. As those skilled in the art would realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. In order to clearly explain embodiments of the present invention, parts irrelevant to the description are omitted, and the same reference numerals are assigned to the same or similar elements throughout the specification. Since the size and thickness of each component shown in the drawings are arbitrarily indicated for convenience of description, the present invention is not necessarily limited to those shown in the drawings, and the thicknesses are enlarged to clearly express various parts and regions. In addition, in the following detailed description, the reason that the names of components are divided into first, second, etc. is to classify them in the same relationship, and it is not necessarily limited to the order in the following description. Throughout the specification, when a part includes a certain element, it means that other elements may be further included, rather than excluding other elements, unless specifically stated otherwise. In addition, terms such as . . . part . . . described in the specification mean a unit of a comprehensive configuration that performs at least one function or operation. When a part, such as a layer, film, region, plate, etc., is “on” another part, this includes not only the case where it is directly above the other part, but also the case where there is another part in between. In contrast, when an element is referred to as being “directly on” another element, there are no intervening elements present. Exemplary embodiments of the present invention will hereinafter be described in detail with reference to the accompanying drawings. FIG.1is an external perspective view of a vehicle body to which a sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention may be applied. Referring toFIG.1, a vehicle body10to which a sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention may be applied includes a front door12and a sliding door14may be mounted in the opposite direction in which the front door12is mounted. The vehicle body10may be a doorless vehicle body10without a door of the front passenger seat13in front of the sliding door14. The side of the front passenger seat13has relatively high strength because there is no door, but the mount position of the sliding door14has relatively low strength. Therefore, the difference in strength between the side of the passenger seat13and the sliding door14is large, so that the damage of the sliding door14is relatively large during a side collision of the vehicle, which may be a risk to the safety of the occupant. FIG.2is a partial perspective view of the sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention viewed from the outside of the vehicle body, andFIG.3is a partial perspective view of a sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention viewed from the inside of the vehicle body. 
FIG.4is a partially exploded perspective view of a sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention,FIG.5is a perspective view of a sliding door lower reinforcement that may be applied to a sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention, andFIG.6is a cross-sectional view along the VI-VI line inFIG.2. Referring toFIG.1toFIG.6, a sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention may include a sliding door lower reinforcement20, in which a guide rail50(seeFIG.4andFIG.6) that guides movement of the sliding door14is mounted, mounted on the side of the vehicle body10along a length direction of the vehicle body10, and a center floor cross unit60mounted in the width direction of the vehicle body10. And, the sliding door lower reinforcement20and the center floor cross unit60may be combined. When the sliding door lower reinforcement20and the center floor cross unit60are combined, the strength of the part where the sliding door14is mounted may be increased, and thus the difference in strength with the part where the passenger seat13is positioned may be reduced. The guide rail50may be mounted on the sliding door lower reinforcement20with its cross-section in an inverted "U" shape to guide the movement of a lower roller18. The lower roller18may be mounted to the sliding door14via a sliding door roller bracket16. The sliding door lower reinforcement20may include a main body22on which the guide rail50is mounted, and a longitudinal direction extension30formed extending from the main body22along the length direction of the vehicle body10. The center floor cross unit60may include a first center floor cross member62coupled to the vicinity of one end of the longitudinal direction extension30. In addition, the center floor cross unit60may further include a second center floor cross member64coupled to the vicinity of the connection portion of the main body22and the longitudinal direction extension30. The sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention may further include a center pillar80disposed in the height direction of the vehicle body10and combined with the sliding door lower reinforcement20. The center pillar80may be disposed between the first center floor cross member62and the second center floor cross member64in the length direction of the vehicle body10. The sliding door lower reinforcement20may further include a center pillar upper coupling part32formed to extend in the height direction of the vehicle body10to couple with the center pillar80. The longitudinal direction extension30may include center floor cross unit coupling parts36and37formed in a concave shape so that the center floor cross unit60is inserted and coupled. The first center floor cross member62and the second center floor cross member64may be coupled to the center floor cross unit coupling parts36and37, respectively, and the center pillar upper coupling part32may be formed between the center floor cross unit coupling parts36and37. That is, the longitudinal direction extension30is formed to extend further toward the front of the vehicle body10than the center pillar80to increase the strength of the side of the vehicle body.
Therefore, the impact load transmitted from the center pillar80during a side impact of the vehicle is transmitted to the first center floor cross member62and the second center floor cross member64through the center pillar upper coupling part32and the center floor cross unit coupling parts36and37of the longitudinal direction extension30, so that the impact load may be distributed. In addition, since the first center floor cross member62and the second center floor cross member64support the longitudinal direction extension30, the difference in strength between the part where the sliding door14is mounted and the part where the passenger seat13is positioned is reduced. The longitudinal direction extension30may be formed to extend inside the vehicle body10. That is, the longitudinal direction extension30extends to the inside of the vehicle body10, and it is possible to increase the strength of the coupling portion between the center pillar80and the first center floor cross member62and the second center floor cross member64, so that it is possible to improve the performance of vehicle side collisions. The longitudinal direction extension30may include an extension upper surface38extending inward of the vehicle body10, an extension side surface40curved from the extension upper surface38and of which the center floor cross unit coupling parts36and37are formed thereto, an extension lower surface42curved from the extension side surface40, and a center pillar lower coupling part34that is curved from the extension lower surface42to the lower direction and is coupled to the center pillar80. The center pillar lower coupling part34may be formed to extend downward in the width direction of the vehicle body10. The center pillar80may include a center pillar inner panel82coupled with the center pillar upper coupling part32and a center pillar outer panel84coupled with the center pillar lower coupling part34. FIG.7is a cross-sectional view along the line VII-VII inFIG.2. Referring toFIG.2toFIG.7, the sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention further includes a side sill outer92provided in the width direction of the vehicle body10, and the side sill outer92may connect the center pillar lower coupling part34and the center pillar outer panel84. The sliding door lower reinforcement20and the center pillar80may form a center pillar closed cross-section86inside the coupling part. The sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention further includes a center floor70coupled to the upper portion of the center floor cross unit60, and the center floor70, the center floor cross unit60and the sliding door lower reinforcement20may form center closed cross-sections72and74. The sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention may further include a side sill inner90provided inside the center pillar closed cross-section86. The center pillar inner panel82and the center pillar upper coupling part32may be welded at point W1, and the welding direction is formed in the width direction of the vehicle body10to respond to the vehicle body side impact load. The center pillar outer panel84and the side sill outer92may be welded at point W2, and the welding direction is formed in the width direction of the vehicle body10to respond to the vehicle body side impact load. 
A side sill outer flange94may be formed at one end of the side sill outer92, and a center pillar lower coupling part flange35may be formed at one end of the center pillar lower coupling part34. The side sill outer flange94and the center pillar lower coupling part flange35may be welded at point W3, and the welding direction is formed in the width direction of the vehicle body10to respond to the vehicle body side impact load. The guide rail50may be welded at point W4to the extension upper surface38, and the welding direction may be formed in the height direction of the vehicle body10. The center floor70may be welded at point W5to the extension side surface40, and the welding direction is formed in the width direction of the vehicle body10to respond to the vehicle body side impact load. The second center floor cross member64may be welded at point W6to the extension side surface40, and the welding direction is formed in the width direction of the vehicle body10to respond to the vehicle body side impact load. As shown inFIG.7, the first center floor cross member62may be welded at point W7to the extension side surface40, and the welding direction is formed in the width direction of the vehicle body10to respond to the vehicle body side impact load. The sliding door lower reinforcement20and the center pillar80may form the center pillar closed cross-section86inside the coupling part, and the center floor70, the center floor cross unit60and the sliding door lower reinforcement20may form the center closed cross-sections72and74adjacent to the center pillar closed cross-section86. The double closed cross-section structure of the center pillar closed cross-section86and the center closed cross-sections72and74may respond with the vehicle body side impact load. The side sill inner90is mounted in the space formed by the extension lower surface42, the center pillar lower coupling part34and the center pillar80to increase the length direction strength of the vehicle body10and the width direction strength of the vehicle body10. The space utilization may be increased by mounting the sliding door roller bracket16and the lower roller18using the inner space of the center pillar closed cross-section86, and the sliding door roller bracket16may bear the impact load in case of a vehicle body side collision. FIG.8is a cross-section perspective view along the line VIII-VIII inFIG.2. Referring toFIG.8, the sliding door mounting reinforcement structure according to an exemplary embodiment of the present invention may further include a cross reinforcement100connecting the first center floor cross member62and the side sill inner90. The cross reinforcement100may respond to the side impact load by connecting the first center floor cross member62and the side sill inner90. A bulk head96may be mounted near the connection position of the cross reinforcement100and the side sill inner90to increase the lateral strength. As described above, even if the vehicle body to which the sliding door mounting reinforcement structure according to the embodiments of the present invention is applied is a doorless vehicle body without a door in front of the sliding door, the rigidity may be uniform, so that passengers may be protected from a side collision of the vehicle body. While this invention has been described in connection with what is presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. 
On the contrary, it is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
11858553 | It should be understood that the accompanying drawings are not necessarily to scale, but provide a somewhat simplified representation of various preferred features that exemplify the basic principles of the present disclosure. For example, specific design features of the present disclosure, including particular dimensions, directions, positions, and shapes, will be partially determined by the particularly intended application and use environment. DETAILED DESCRIPTION OF THE EMBODIMENTS The terms used in the present specification are for explaining the exemplary embodiments, not for limiting the present disclosure. The singular expressions used herein are intended to include the plural expressions unless the context clearly dictates otherwise. It is to be understood that the term “comprise (include)” and/or “comprising (including)” used in the present specification means that the features, the integers, the steps, the operations, the constituent elements, and/or component are present, but the presence or addition of one or more of other features, integers, steps, operations, constituent elements, components, and/or groups thereof is not excluded. The term “and/or” used herein includes any one or all the combinations of one or more listed related items. In the present specification, the term ‘coupled’ means a physical relationship between two components which are connected directly to each other or connected indirectly through one or more intermediate components by welding, a self-piercing rivet (SPR), a flow drill screw (FDS), a bonding agent for a structure, or the like. The terms ‘vehicle’, ‘for a vehicle’, and ‘automobile’ or the similar terms used in the present specification generally include vehicles (passenger automobiles) including passenger vehicles, sport utility vehicles (SUVs), buses, trucks, and various commercially available vehicles and include hybrid vehicles, electric vehicles, hybrid electric vehicles, hydrogen power vehicles, and other alternative fuel vehicles (e.g., fuel induced from other resources from petroleum). Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. FIGS.1and2are perspective views illustrating a rear vehicle body structure according to an embodiment of the present disclosure,FIG.3is an outer side view illustrating the rear vehicle body structure according to the embodiment of the present disclosure, andFIG.4is a cross-sectional view illustrating the rear vehicle body structure according to the embodiment of the present disclosure. Referring toFIGS.1to4, a rear vehicle body structure100according to an embodiment of the present disclosure may be applied to a rear structural body of a vehicle made by connecting various types of rear structures. In this case, a rear suspension (not illustrated) well known to those skilled in the art may be mounted on the rear vehicle body structure100. The rear suspension may support a load, which is inputted to a vehicle body, through a suspension spring1. In the present specification, ‘a forward/rearward direction’ of the vehicle body may be defined as a longitudinal direction of the vehicle body, a ‘vehicle width direction’ may be defined as a leftward/rightward direction of the vehicle body, and an ‘upward/downward direction’ may be defined as a height direction of the vehicle body. 
In addition, in the present specification, an ‘inner side in the vehicle width direction’ may be defined as an inner region (e.g., an inner surface) between components facing and spaced apart from each other, and an ‘outer side in the vehicle width direction’ may be defined as an outer region (e.g., an outer surface) between the components. In the present specification, the terms ‘upper end portion,’ ‘upper portion’, ‘upper end’ or ‘upper surface’ of a component means an end portion, a portion, an end, or a surface of the component which is disposed at a relative upper side, and the terms ‘lower end portion,’ ‘lower portion’, ‘lower end’, or ‘lower surface’ of a component means an end portion, a portion, an end, or a surface of the component which is disposed at a relatively lower side. In addition, in the present specification, an end (e.g., one end or the other end) of a component means an end of the component in any one direction, and an end portion (e.g., one end portion or the other end portion) of a component means a predetermined portion of the component that includes the end of the component. The rear vehicle body structure100according to the embodiment of the present disclosure has a structure capable of reducing the number of components used to mount the suspension spring1and reducing the cost and weight of the vehicle body. In addition, the embodiment of the present disclosure provides the rear vehicle body structure100capable of effectively supporting and dispersing upward and downward loads inputted through the suspension spring1, improving rigidity at a load input point, and improving performance in transmitting the upward and downward loads. To this end, the rear vehicle body structure100according to the embodiment of the present disclosure basically includes two opposite rear side members10, spring seats30, outer reinforcing members70, and inner reinforcing members80. In the embodiment of the present disclosure, the two opposite rear side members10extend in THE forward/rearward direction of the vehicle body from a rear portion of the vehicle body and are respectively disposed at two opposite left and right sides in the vehicle width direction. Wheelhouse panels3are respectively mounted on the two opposite rear side members10. For example, the two opposite rear side members10may each have a quadrangular box shape. The two opposite rear side members10may each include an inner surface10aand an outer surface11in the vehicle width direction, and an upper surface12and a lower surface13in the upward/downward direction. In this case, two opposite ends of a rear floor panel5are connected to inner surfaces10aof the two opposite rear side members10. As another example, the two opposite rear side members10may each be manufactured by an aluminum extrusion process method. As still another example, the two opposite rear side members10include one or more vertical ribs15formed inside the two opposite rear side members in the longitudinal direction. The one or more vertical ribs15are configured to improve structural rigidity of each of the two opposite rear side members10. The one or more vertical ribs15are vertically connected to the inner surface in the longitudinal direction of the two opposite rear side members10. In the embodiment of the present disclosure, the spring seat30may support and fix the suspension spring1in the upward/downward direction. The spring seat30is positioned at a lower side of each of the two opposite rear side members10. 
The spring seat30may extend along the lower surface13and the outer surface11of each of the corresponding two opposite rear side members10and be coupled to the lower surface13and the outer surface11. For example, the spring seat30may be an aluminum die-cast member31manufactured by an aluminum die casting process method well known to those skilled in the art. Hereinafter, the spring seat30applied to the rear vehicle body structure100according to the embodiment of the present disclosure will be described in detail with reference toFIGS.1to4and the accompanying drawings. FIGS.5and6are perspective views illustrating a coupling structure of a spring seat applied to the rear vehicle body structure according to the embodiment of the present disclosure, andFIGS.7to9are perspective views illustrating a spring seat part applied to the rear vehicle body structure according to the embodiment of the present disclosure. Referring toFIGS.1to9, the spring seat30according to the embodiment of the present disclosure includes a spring mounting part41and a rib reinforcing part51which are integrally connected to each other. The spring mounting part41supports and fixes the suspension spring1. The spring mounting part41may be coupled to the lower surface13of each of the two opposite rear side members10. The spring mounting part41includes a spring support surface43, one or more fastening ribs45, and a spring fixing boss47. The spring support surface43supports an upper end of the suspension spring1and is in close contact with the lower surface13of each of the two opposite rear side members10. The one or more fastening ribs45extend outward from an edge of the spring support surface43and are fastened to the lower surface13of each of the two opposite rear side members10. For example, the fastening rib45may be provided in plural. The plurality of fastening ribs45may be disposed radially at the edge of the spring support surface43. As another example, the plurality of fastening ribs45may be fastened to the lower surface13of each of the two opposite rear side members10by screws61. In this case, the screw61may be a flow drill screw (FDS) well known to those skilled in the art. However, the present disclosure is not limited thereto, and the plurality of fastening ribs45may be joined to the lower surface13of each of the two opposite rear side members10by welding, a self-piercing rivet (SPR), a flow drill screw (FDS), a bonding agent for a structure, or the like well known to those skilled in the art. Further, the spring fixing boss47fixes the upper end of the suspension spring1. The spring fixing boss47is formed at a lower side of the spring support surface43. For example, the spring fixing boss47may have a conical shape extending upward and downward from the inside of the edge of the spring support surface43. The rib reinforcing part51improves rigidity of the spring mounting part41. The rib reinforcing part51is integrally connected to the spring mounting part41and coupled to the outer surface of each of the two opposite rear side members10. The rib reinforcing part51may extend upward and downward from the spring mounting part41and be coupled to the outer surface11of each of the two opposite rear side members10. The rib reinforcing part51includes a fastening surface53, a box rib55, and one or more inner ribs57. 
The fastening surface53is in close contact with the outer surface11of each of the two opposite rear side members10and extends in the upward/downward direction (vertical direction) from the spring support surface43along the outer surface11of each of the two opposite rear side members10. For example, the fastening surface53may be fastened to the outer surface11of each of the two opposite rear side members10by the one or more screws61. In this case, the one or more screws61may each be a flow drill screw (FDS). However, the present disclosure is not limited thereto, and the fastening surface53may be joined to the outer surface11of each of the two opposite rear side members10by welding, a self-piercing rivet (SPR), a flow drill screw (FDS), a bonding agent for a structure, or the like well known to those skilled in the art. The box rib55is integrally connected to the fastening surface53. The box rib55extends outward in the vehicle width direction from the fastening surface53. The box rib55has a box space56opened at an upper end thereof. For example, the box space56may be a quadrangular box space. Further, the one or more inner ribs57are integrally connected to the box rib55in the box space56. For example, the inner rib57extends in a direction perpendicular to the fastening surface53and is integrally connected to the box rib55. The inner rib57may be disposed in the upward/downward direction in the box space56. Meanwhile, referring toFIGS.1to6, in the embodiment of the present disclosure, the outer reinforcing member70improves rigidity of the outer surface of each of the wheelhouse panels3. The outer reinforcing member70may be coupled to the outer surface of each of the wheelhouse panels3and the outer surface11of each of the two opposite rear side members10and connected to the spring seat30. The outer reinforcing member70may be coupled to the rib reinforcing part51and the box rib55. The outer reinforcing member70includes a rib coupling portion71and one or more reinforcing protrusion portions73which are connected to one another. The rib coupling portion71is coupled to the box rib55and has a shape protruding outward in the vehicle width direction from an outer surface of the outer reinforcing member70. The one or more reinforcing protrusion portions73transmit the upward and downward loads, which are inputted to the two opposite rear side members to the wheelhouse panel3. The one or more reinforcing protrusion portions73extend upward and downward from the rib coupling portion71. For example, the one or more reinforcing protrusion portions73may branch off from the rib coupling portion71into at least two portions and may be disposed in the upward/downward direction. Referring toFIGS.1to6, in the embodiment of the present disclosure, the inner reinforcing member80improves rigidity of the inner surface of each of the wheelhouse panels3while corresponding to the outer reinforcing member70. The inner reinforcing member80may correspond to the outer reinforcing member70and may be coupled to the inner surface of each of the wheelhouse panels3and the upper surface12of each of the two opposite rear side members10. The inner reinforcing member80may be coupled to the inner surface of each of the wheelhouse panel3and the upper surface12of each of the two opposite rear side members10by one or more joint flanges81formed at the edge portion. 
Hereinafter, an operation of the rear vehicle body structure100according to the embodiment of the present disclosure described above will be described in detail with reference toFIGS.1to9. First, the two opposite rear side members10are provided, and the spring seat30is provided as the aluminum die-cast member31corresponding to each of the two opposite rear side members10. The wheelhouse panel3is mounted on each of the two opposite rear side members10. The inner reinforcing member80is coupled to the inner surface of each of the wheelhouse panels3. The inner reinforcing member80is coupled to the upper surface12of each of the two opposite rear side members10. The spring seat30extends along the lower surface13and the outer surface11of each of the corresponding two opposite rear side members10and is coupled to the lower surface13and the outer surface11. The spring seat30includes the spring mounting part41and the rib reinforcing part51which are integrally connected to each other. The spring mounting part41includes the spring support surface43, the one or more fastening ribs45, and the spring fixing boss47. Further, the rib reinforcing part51includes the fastening surface53, the box rib55, and the one or more inner ribs57. In a state in which the spring support surface43is in close contact with the lower surface13of each of the two opposite rear side members10, the plurality of fastening ribs45are each fastened to the lower surface13of each of the two opposite rear side members10by the screws61. Further, the fastening surface53integrally connected to the spring support surface43is fastened to the outer surface11of each of the two opposite rear side members10by the one or more screws61. Further, the box rib55having the box space56is integrally provided on the fastening surface53, and the one or more inner ribs57are integrally formed in the box space56. Furthermore, the outer reinforcing member70including the rib coupling portion71and the one or more reinforcing protrusion portions73is coupled to the outer surface of each of the wheelhouse panels3and coupled to the outer surface11of each of the two opposite rear side members10. The rib coupling portion71is coupled to the box rib55of the rib reinforcing part51. The rear suspension (not illustrated) is mounted on the rear vehicle body structure100according to the embodiment of the present disclosure, which is assembled as described above, and the suspension spring1of the rear suspension is fixed to the spring fixing boss47of the spring mounting part41. Therefore, the rear vehicle body structure100according to the embodiment of the present disclosure may support and absorb the upward and downward loads, which are inputted through the suspension spring1, through the spring mounting part41and radially disperse the upward and downward loads to the two opposite rear side members10. Further, the rear vehicle body structure100according to the embodiment of the present disclosure may transmit the upward and downward loads to the wheelhouse panels3through multiple load paths LP formed by the spring mounting part41and the rib reinforcing part51. In this case, the spring mounting part41and the rib reinforcing part51may transmit the upward and downward loads to the wheelhouse panel3through the outer surface11of each of the two opposite rear side members10. 
The spring mounting part41and the rib reinforcing part51may transmit the upward and downward loads to the wheelhouse panel3through the vertical rib15of each of the two opposite rear side members10and the inner reinforcing member80. The spring mounting part41and the rib reinforcing part51may transmit the upward and downward loads to the wheelhouse panel3through the outer reinforcing member70. In this case, the rib reinforcing part51may disperse the upward and downward loads, which are transmitted to the box rib55, to the wheelhouse panel3through the outer reinforcing member70. The outer reinforcing member70may disperse the upward and downward loads to the wheelhouse panel3through the reinforcing protrusion portions73that branch off into the two portions from the rib coupling portion71coupled to the box rib55. That is, the upward and downward loads may be dispersed to the wheelhouse panel3through the load path LP formed on each of the two reinforcing protrusion portions73. Therefore, the rear vehicle body structure100according to the embodiment of the present disclosure may effectively disperse the upward and downward loads, which are inputted through the suspension spring1, through the spring seat30and easily transmit the upward and downward loads to the wheelhouse panel3through the multiple load paths LP. Further, according to the rear vehicle body structure100according to the embodiment of the present disclosure, the spring seat30is made of an aluminum die-cast material, which makes it possible to reduce the weight and cost of the vehicle. Furthermore, according to the rear vehicle body structure100according to the embodiment of the present disclosure, the spring seat30does not require separate connection components to reinforce the structure thereof, which makes it possible to reduce the number of components and reduce the weight and cost of the vehicle body. While the exemplary embodiments of the present disclosure have been described, the present disclosure is not limited to the embodiments. The present disclosure covers all modifications that can be easily made from the embodiments of the present disclosure by those skilled in the art and considered as being equivalent to the present disclosure. | 19,256 |
11858554 | DETAILED DESCRIPTION OF THE EMBODIMENTS Hereinafter, some embodiments of the present disclosure are described in detail with reference to the accompanying drawings in order for those skilled in the art to be able to readily practice the embodiments. As those skilled in the art would realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present disclosure. Further, in embodiments, since like reference numerals designate like elements having the same configuration, a first embodiment is representatively described, and in other embodiments, only different configurations from the first embodiment will be described. The drawings are schematic and are not illustrated in accordance with a scale. The relative sizes and ratios of the parts in the drawings are exaggerated or reduced for clarity and convenience in the drawings, and the arbitrary sizes are only examples and are not limiting. The same structures, elements, or parts illustrated in no less than two drawings are denoted by the same reference numerals in order to represent similar characteristics. When a part is referred to as being “on” another part, it can be directly on the other part or intervening parts may also be present. Embodiments of the present disclosure specifically show one embodiment of the present disclosure. As a result, various modifications of the drawings are anticipated. Accordingly, the embodiments are not limited to certain forms of the regions illustrated, but may include forms that are modified through manufacturing, for example. Hereinafter, with reference to the accompanying drawings, a cowl reinforcement structure of a vehicle according to an embodiment of the present disclosure will be described in detail. FIG.1is a perspective view showing a shape of a vehicle body to which a cowl reinforcement structure of a vehicle according to an embodiment of the present disclosure is applied, andFIG.2is a top plan view showing a shape of a vehicle body to which a cowl reinforcement structure of a vehicle according to an embodiment of the present disclosure is applied. Referring toFIG.1andFIG.2, the cowl reinforcement structure of the vehicle according to an embodiment of the present disclosure is applied for a reinforcement of a cowl upper panel10connected to a dash panel50. The cowl upper panel10is bent at a predetermined angle along the vehicle front and rear direction to form a curved line, and is connected to the dash panel50downwards. The cowl reinforcement structure of the vehicle includes a cowl upper side reinforcement member20and a cowl support bracket30. The cowl upper side reinforcement member20is disposed in the lower part of the cowl upper panel10in a shape corresponding to the curved line of the cowl upper panel10. In addition, the cowl support bracket30is joined in the vertical direction with the cowl upper side reinforcement member20in the lower part of the cowl upper panel10. The cowl upper side reinforcement member20and cowl support bracket30are bonded to each other as shown inFIG.2, and may be disposed on both sides (a part ‘a’) of the point where the cowl upper panel10is divided into quarters in the width direction of the vehicle. 
FIG.3is a view showing a shape of a cowl reinforcement structure of a vehicle in a vehicle lateral direction according to an embodiment of the present disclosure, andFIG.4is a view showing a shape of a cowl reinforcement structure of a vehicle in a vehicle lower direction according to an embodiment of the present disclosure. Referring toFIG.3andFIG.4, the cowl upper side reinforcement member20includes an upper flange21, a body23integrally extending from the upper flange21and having a curved shape, and a lower flange25extending integrally from the body23. The body23of the cowl upper side reinforcement member20may include an opening24in a form of a slot formed in the length direction of the cowl upper side reinforcement member20. A slot-shaped opening22may be additionally disposed at a portion of the body23adjacent to the upper flange21. In addition, the cowl upper side reinforcement member20may be formed of a soft thin plate panel. In this way, by forming the openings22and24in the form of the slot shape on the body23of the cowl upper side reinforcement member20and forming the cowl upper side reinforcement member20of the soft thin plate panel, when the vehicle collides with a pedestrian, the cowl upper side reinforcement member20may absorb the impact smoothly, thereby reducing the pedestrian's injury value. On the other hand, the upper flange21of the cowl upper side reinforcement member20and the upper surface of the lower flange25are joined to the front lower surface of the cowl upper panel10. Further, the cowl support bracket30may include a first end joined to the lower flange25of the cowl upper side reinforcement member20, and a second end joined to the cowl lower panel40disposed in the lower part facing the cowl upper panel10. The upper surface of the first end of the cowl support bracket30may be joined by welding to the lower surface of the lower flange25in the approximately central part of the cowl upper side reinforcement member20, and the cowl support bracket30may be disposed on the same line as the cowl upper panel10in the height direction of the vehicle. By the mutual connection structure of the cowl upper side reinforcement member20and the cowl support bracket30, the bonding structure in the cowl upper panel10of the cowl upper side reinforcement member20and the bonding structure in the cowl lower panel40of the cowl support bracket30, the strength of the cowl upper panel10may be reinforced and improved and the NVH performance may be improved. FIG.5is a perspective view showing a cowl reinforcement structure of a vehicle according to an embodiment of the present disclosure,FIG.6is a cross-sectional view of a cowl reinforcement structure of a vehicle taken along a line A-A ofFIG.5according to an embodiment of the present disclosure,FIG.7is a cross-sectional view of a cowl reinforcement structure of a vehicle taken along a line B-B ofFIG.5according to an embodiment of the present disclosure, andFIG.8is a perspective view showing a shape in which a cowl reinforcement structure of a vehicle according to an embodiment of the present disclosure is coupled by welding. Referring toFIG.6, the upper flange21of the cowl upper side reinforcement member20and the upper surface of the lower flange25are joined to the front lower surface of the cowl upper panel10, thereby forming a space between the cowl upper panel10and the body23. 
Then, the first end of the cowl support bracket 30 is joined to the lower surface of the lower flange 25, and the second end of the cowl support bracket 30 is joined to the upper surface of the cowl lower panel 40. The cowl support bracket 30 may have a curved shape that is convex toward the center of the vehicle. Referring to FIG. 7, the cowl upper side reinforcement member 20 may be formed in a stepped shape in which the central portion is lower than the edge portion in the width direction of the vehicle. That is, the upper surface of the edge portion of the cowl upper side reinforcement member 20 may be joined to the lower surface of the cowl upper panel 10. Therefore, a space may be formed between the central portion of the cowl upper side reinforcement member 20 and the cowl upper panel 10. By forming the space between the cowl upper panel 10 and the body 23 of the cowl upper side reinforcement member 20, and forming the space between the central part of the cowl upper side reinforcement member 20 and the cowl upper panel 10, the vibration input from the front of the vehicle may be absorbed, thereby reducing the noise inside the vehicle. In addition, the collision absorption performance may be improved by allowing the cowl upper panel 10 to be deformed smoothly when the vehicle collides with pedestrians. Referring to FIG. 8, the upper flange 21 of the cowl upper side reinforcement member 20 and the lower surface of the lower flange 25 may be joined to the lower surface of the cowl upper panel 10 by double welding at the ‘b’ part. In addition, the first end of the cowl support bracket 30 may be joined to the lower surface of the lower flange 25 of the cowl upper side reinforcement member 20 at the ‘c’ part by triple welding. That is, in the ‘c’ part, the upper surface of the first end of the cowl support bracket 30 may be welded to the lower surface of the lower flange 25 of the cowl upper side reinforcement member 20, and the upper surface of the lower flange 25 of the cowl upper side reinforcement member 20 may be welded to the lower surface of the cowl upper panel 10. In this way, by applying the cowl reinforcement structure of the vehicle according to an embodiment of the present disclosure, it is possible to satisfy both the pedestrian protection performance and the NVH performance while maintaining an elegant design extending from the front bumper of the vehicle to the roofline. While the present disclosure has been described in connection with what is presently considered to be practical embodiments, it is to be understood that the disclosure is not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
11858555 | DETAILED DESCRIPTION Embodiments will be described in detail with reference to the drawings. The following description applies embodiments to a lower body structure of a vehicle but is not limited thereto, the application thereof, or the use thereof. Example 1 Example 1 will be described with reference toFIGS.1to22. First, the overall structure of a vehicle V will be described. In the following description, it is assumed that the direction of arrow F is the front side, the direction of arrow L is the left side, and the direction of arrow U is the upper side. As illustrated inFIGS.1to4, the vehicle V is configured by a monocoque body and includes a floor panel that forms the bottom surface of a vehicle interior R, a dash panel1, formed so as to rise up from the front end portion of this floor panel, that separates an engine room E from the vehicle interior R in the vehicle width direction, and a pair of left and right front side frames2extending forward from the front surface of this dash panel1, and a pair of left and right rear side frames that extend backward from the rear end portion of the floor panel. This vehicle V further includes a cowl portion3, formed on the top of the dash panel1, that extends in the vehicle width direction and a pair of left and right strut towers4(suspension tower members) that bulge toward the inside of the engine room E. This vehicle V may be equipped with an independent strut suspension. The cowl portion3is formed in a tub shape by press-forming a steel plate. This cowl portion3mainly includes a cowl panel5, a cowl member6, projecting forward from the front end portion of the cowl panel5, that forms a tub shaped structure in cooperation with the cowl panel5, and a cowl grill7that partially covers the upper portions of the cowl panel5and the cowl member6. As illustrated inFIG.4, a pair of left and right mounting brackets8projecting forward are disposed in the left and right end portions of the front wall of the cowl member6. The pairs of mounting brackets8are formed in a substantially partial ellipse in plan view and the stud bolts9extending vertically upward from the upper surfaces thereof are provided. It should be noted that the cowl member6can be formed integrally with the dash panel1, so the dash panel1and the cowl member6correspond to the dash member. As illustrated inFIGS.1to4, the pair of the strut towers4project upward. Specifically, the strut towers4bulge into the engine room E from the wheel aprons hung between the apron reinforcements10and the front side frames2that extend forward and backward. Since the structure of the vehicle V is substantially symmetrical, the right side members and the right side structure will be mainly described below. Each of the strut towers4includes a hollow cylindrical portion4ahaving an axial center that shifts upward toward the rear side, and an annular top portion4bthat closes the upper end portion of this cylindrical portion4a. A plurality of stud bolts11extending upward are erected on the top portion4b. This strut tower4partially accommodates the upper portion of the damper mechanism (such as the damper and the spring) of the front suspension device. The spring seat coupled to the upper end portion of the damper mechanism is fastened and fixed to the top portion4bby a plurality of fastening members via a mount rubber. Next, a strut tower bar20will be described. 
As illustrated inFIGS.1to4, this vehicle V is provided with the strut tower bar20that structurally couples the pair of strut towers4to the cowl member6via a plurality of fastening members. This strut tower bar20is substantially U-shaped in plan view and can suppress the behavior modes (the vehicle body torsional mode and the membrane vibration mode) of the vehicle body that affect the riding comfort. Here, the behavior modes of the vehicle body will be described. The vehicle body torsional mode is a behavior mode used when the vehicle is turning. As illustrated by the arrow in (a) ofFIG.5, the top portions4bof the strut towers4are displaced in the vertical direction due to the expansion and contraction of the damper mechanism when the vehicle is turning. The vehicle body torsional mode about the center axis of the vehicle body occurs due to the vertical displacement of the top portions4b, causing degradation of steering stability. The membrane vibration mode is a behavior mode used when the vehicle travels on a rough road surface. As illustrated by the arrow in (b) ofFIG.5, when the vehicle travels on a rough road surface, the strut towers4fall inward in the vehicle width direction while the cowl portion3is displaced in the vertical direction like a bow. The torsional displacement between the top portions4band the cowl portion3generates the membrane vibration mode on the panel member, especially on the floor panel having a large area, and causes degradation of riding comfort. The strut tower bar20will be described again. As illustrated inFIGS.6and7, the strut tower bar20mainly includes a pair of left and right first coupling members30that shift to the inside in the vehicle width direction toward the rear side, a second coupling member40, extending in the vehicle width direction, that couples the rear end portions of the pair of first coupling members30, a pair of left and right front fixing members50that fix the front end portions of the pair of first coupling members30to the stud bolts11erected from the top portions4bof the pair of strut towers4via fastening members23, and a pair of left and right rear fixing members60, connecting the rear end portions of the pair of first coupling members30and the left and right end portions of the second coupling members40, that fasten and fix the connection portions thereof to the mounting brackets8via the stud bolts9and the tightening members. The main material of the first coupling members30and the second coupling member40is carbon fiber reinforced plastic (CFRP) in which a reinforcing material (for example, carbon fiber) is impregnated with a synthetic resin (for example, thermosetting epoxy synthetic resin). Carbon fiber includes a fiber bundle (tow) in which a predetermined number of single fibers continuously extending uniformly from one end to the other end in the longitudinal direction of the first coupling members30and the second coupling member40is bundled. The front fixing members50and the rear fixing members60are made of an aluminum alloy material. Accordingly, the front fixing members50and the rear fixing members60have bending rigidity and torsional rigidity that are larger than in the first coupling members30and the second coupling member40. The plate materials of the first coupling members30and the second coupling member40include three types of layered portions. 
As illustrated in FIG. 18, the first coupling members 30 and the second coupling member 40 have a middle layer portion L1 disposed in the middle portion in the thickness direction, main body layer portions L2 that sandwich the middle layer portion L1, and surface layer portions L3 that cover the surfaces of the main body layer portions L2 on respective sides of the middle layer portion L1. The surface layer portions L3 provide corrosion resistance (electrolytic corrosion resistance). The middle layer portion L1 is a fiber reinforced plastic layer having an orientation of 90° in which the carbon fibers extend orthogonally to the longitudinal direction. The main body layer portions L2 are each a fiber reinforced plastic layer having an orientation of 0° in which the carbon fibers described above extend in the longitudinal direction. The surface layer portions L3 are each a glass fiber reinforced plastic (GFRP) layer in which woven glass fibers are impregnated with a synthetic resin. The volume ratio (L1 to L2 to L3) of these layers is set to, for example, 7 to 80 to 13. In other words, the volume fractions of L1 and L3 may be of the same order of magnitude, while the volume fraction of L2 may be about an order of magnitude larger than that of L1. Next, the first coupling member 30 will be described. As illustrated in FIGS. 8, 9, 14, and 15, the first coupling member 30 includes a first coupling outer member 31 that has a substantially U-shaped cross section orthogonal to the longitudinal direction and a first coupling inner member 32, forming a closed cross section C1 extending in the longitudinal direction in cooperation with the first coupling outer member 31 in an intermediate portion in the longitudinal direction, that has a substantially U-shaped cross section. The first coupling outer member 31 has open cross sections in both end portions in the longitudinal direction. The closed cross section C1 is asymmetric with respect to a middle line C of the cross section orthogonal to the longitudinal direction. Accordingly, when a bending load is input to the first coupling member 30, the bending load is converted to torsional displacement of the first coupling member 30. The first coupling outer member 31 includes an upper wall portion 31s and a pair of side wall portions 31t extending downward from both end portions parallel to the longitudinal direction of the upper wall portion 31s. Of each of the side wall portions 31t, the front portion and the rear portion in the longitudinal direction are set to have larger widths (vertical dimensions) than the intermediate portion. Openings 31a and 31b are formed in the front portions of the pair of side wall portions 31t in the order from the front, and openings 31c and 31d are formed in the rear portions of the pair of side wall portions 31t in the order from the front. An opening 31p is formed at the position corresponding to the opening 31a in the front portion of the upper wall portion 31s, and an opening 31q is formed at the position corresponding to the opening 31d in the rear portion of the upper wall portion 31s. A bent portion 31x that projects upward and extends in the left-right direction is formed in an intermediate portion of the rear side of the upper wall portion 31s. The first coupling inner member 32 may be shorter in the longitudinal dimension than the first coupling outer member 31. The first coupling inner member 32 includes an upper wall portion 32s and a pair of side wall portions 32t extending downward from both end portions that are parallel to the longitudinal direction of the upper wall portion 32s.
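Returning briefly to the layered construction described above, the 7 to 80 to 13 volume ratio can be translated into nominal per-layer thicknesses for a symmetric layup. The short sketch below is only a minimal illustration under stated assumptions: a 2.0 mm total plate thickness (the specimen thickness used in the verification discussed later) and an even split of the L2 and L3 portions on both sides of the central L1 layer; the description does not specify the production thickness of the coupling members, so the numbers are purely illustrative.

```python
# Minimal sketch: convert the L1:L2:L3 volume ratio (7:80:13) into nominal
# layer thicknesses for a symmetric layup [L3 / L2 / L1 / L2 / L3].
# Assumptions (not from the description): 2.0 mm total thickness, and the
# L2 and L3 fractions split evenly on both sides of the central L1 layer.

TOTAL_THICKNESS_MM = 2.0
RATIO = {"L1_90deg_CFRP": 7, "L2_0deg_CFRP": 80, "L3_GFRP_skin": 13}

ratio_sum = sum(RATIO.values())
thickness = {name: TOTAL_THICKNESS_MM * r / ratio_sum for name, r in RATIO.items()}

print(f"L1 (90-deg CFRP core)     : {thickness['L1_90deg_CFRP']:.2f} mm")
print(f"L2 (0-deg CFRP, per side) : {thickness['L2_0deg_CFRP'] / 2:.2f} mm")
print(f"L3 (GFRP skin, per side)  : {thickness['L3_GFRP_skin'] / 2:.2f} mm")
# Expected: a 0.14 mm core, 0.80 mm of 0-degree CFRP per side, and a 0.13 mm
# GFRP skin per side, summing to the assumed 2.0 mm total thickness.
```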
Of each of the side wall portions32t, the front portion and the rear portion in the longitudinal direction are set to have larger widths than the intermediate portion. Openings32bare formed at positions corresponding to the openings31bin the front portions of the pair of side wall portions32tand openings32care formed at positions corresponding to the openings31cin the rear portions of the pair of side wall portions32t. A bent portion32xthat projects upward and extends in the left-right direction is formed at a position corresponding to the bent portion31xin an intermediate portion of the rear side of the upper wall portion32s. Adjusting members21having a substantially U-shaped cross section are disposed in both end portions in the longitudinal direction of the first coupling inner member32. Next, the second coupling member40will be described. As illustrated inFIGS.16and17, the second coupling member40may be made of the same material as the first coupling member30and may include a second coupling outer member41having a substantially U-shaped cross section orthogonal to the longitudinal direction and a second coupling inner member42, having a substantially U-shaped cross section, that forms a closed cross section C2 extending in the longitudinal direction in an intermediate portion in the longitudinal direction in cooperation with the second coupling outer member41. The second coupling outer member41may have open cross sections in both end portions in the longitudinal direction. The closed cross section C2 is symmetric with respect to the middle line of the cross section orthogonal to the longitudinal direction in the cross section and has a substantially trapezoidal shape. This increases the bending rigidity of the second coupling member40. The second coupling outer member41may include an upper wall portion41sand a pair of side wall portions41textending downward from both end portions parallel to the longitudinal direction of the upper wall portion41s. Of each of the side wall portions41t, the left portion and the right portion in the longitudinal direction are set to have larger widths than an intermediate portion. An opening41aon the outer side in the vehicle width direction and an opening41bon the inner side in the vehicle width direction are formed in the right portion and the left portion of the pair of side wall portions41t, respectively, and openings41pare formed at positions corresponding to the opening41ain the right end portion and the left end portion of the upper wall portion41s(seeFIG.6). As illustrated inFIG.17, the second coupling inner member42may be shorter in the longitudinal direction than the second coupling outer member41. The second coupling inner member42may have an upper wall portion42sand a pair of side wall portions42textending downward from both end portions parallel to the longitudinal direction of the upper wall portion42s. Of the side wall portions, the front portion and the rear portion in the longitudinal direction may have larger widths than an intermediate portion. In the front portions of the pair of the side wall portions, openings42bmay correspond to the opening41b(seeFIG.6). The adjusting members22having a substantially U-shaped cross section may be disposed in both end portions in the longitudinal direction of the second coupling inner member42. Next, the front fixing member50will be described. 
As illustrated inFIGS.10,11, and15, the front fixing member50may include a front fixing outer member51that is substantially hat-shaped in a cross section orthogonal to the longitudinal direction and a front fixing inner member52, having substantially U-shaped cross section, that forms a closed cross section C3 extending in the longitudinal direction in the front portion in the longitudinal direction in cooperation with the front fixing outer member51. The front fixing outer member51may be shorter in the longitudinal direction than the front fixing inner member52. The closed cross section C3 of the front fixing member50may extend in the longitudinal direction and correspond to an open cross section region S1b of the first coupling member30. An open cross section of the front fixing member50may continue to the rear side of the closed cross section C3 and correspond to a closed cross section region S1a of the first coupling member30. The front fixing outer member51may include an upper wall portion51sand a pair of side wall portions51textending downward from both end portions parallel to the axis of the upper wall portion51sand then extending away from the axis. In the vertical portions of the side wall portions51tcorresponding to wall portions that form the closed cross section, openings51mand openings51amay be formed in the order from the front. In the horizontal portions that correspond to the flange portions, openings51eare formed. In the upper wall portion51s, an opening51nand an opening51pare formed at positions corresponding to the openings51mand the openings51a, respectively. It should be noted that stud bolts11are inserted into the openings51e. The front fixing inner member52may have an upper wall portion52sand a pair of side wall portions52textending downward from both end portions of the upper wall portion52sthat are parallel to the longitudinal direction. Each of the side wall portions52tmay have an opening52mthat corresponds to the opening51m, an opening52athat corresponds to the opening51a, and an opening52bthat corresponds to the opening31b(opening32b) behind the opening52a. An opening52nmay correspond to the opening51nin front of the upper wall portion52s. Next, the rear fixing member60will be described. As illustrated inFIGS.12,13,15, and17, the rear fixing member60may include a rear fixing outer member61that has a substantially U-shaped cross section orthogonal to the longitudinal direction and a rear fixing inner member62having a substantially U-shaped cross section that forms a closed cross section C4 extending in the longitudinal direction in cooperation with the rear fixing outer member61. The rear fixing outer member61may be shorter in the longitudinal direction than the rear fixing inner member62. The outer portion in the vehicle width direction of the closed cross section C4 of the rear fixing member60may correspond to the open cross section region S1b of the first coupling member30and the inner portion in the vehicle width direction of the closed cross section C4 of the rear fixing member60may correspond to an open cross section region S2b of the second coupling member40. In addition, the open cross section outside in the vehicle width direction of the rear fixing member60may correspond to the closed cross section region S1a of the first coupling member30and the open cross section inside in the vehicle width direction of the rear fixing member60may correspond to a closed cross section region S2a of the second coupling member40. 
The rear fixing outer member61has an upper wall portion61sand a pair of side wall portions61textending downward from both end portions parallel to the longitudinal direction of the upper wall portion61s. Each of the side wall portions61tmay have an opening61dcorresponding to the opening31doutside in the vehicle width direction and an opening61acorresponding to the opening41ainside in the vehicle width direction. The upper wall portion61smay have an opening61qcorresponding to the opening31q, an opening61pcorresponding to the opening41p, and a stud hole61rdisposed in an intermediate portion. The rear fixing inner member62may have an upper wall portion62sand a pair of side wall portions62textending downward from both end portions parallel to the longitudinal direction of the upper wall portion62s. In each of the side wall portions62t, an opening62ccorresponding to the opening31c, an opening62dcorresponding to the opening31d, an opening62acorresponding to the opening41a, and an opening62bcorresponding to the opening41bmay be formed in the order from the outside in the vehicle width direction. A stud hole62rcorresponding to the stud hole61ris provided in the upper wall portion62s. The stud bolts9are inserted into the openings61rand62r. As illustrated inFIGS.12and13, the end portion inside in the vehicle width direction of the rear fixing outer member61and the end portion inside in the vehicle width direction of the rear fixing inner member62may be substantially parallel to each other in plan view. As illustrated inFIG.17, boundary portions B1 between the end portions inside in the vehicle width direction of the rear fixing outer member61and the second coupling outer member41is disposed outside in the vehicle width direction of boundary portions B2 between the end portions inside in the vehicle width direction of the rear fixing inner member62and the second coupling inner member42in plan view. As illustrated inFIG.7, the pair of left and right boundary portions B1 (B2) are inclined. When the cowl portion3is displaced in the vertical direction like a bow with the strut tower bar20attached to the vehicle body, the boundary portion B1 (B2) intersects a neutral axis A at a predetermined angle θ. The distance between the pair of boundary portions B1 (B2) is formed so that a distance D1 between the boundary portions on the rear side closest to the cowl portion3is the shortest and a distance D2 between the boundary portions on the front side farthest from the cowl portion3is the longest. It should be noted that the neutral axis A is a line in which the neutral plane of the second coupling member40intersects the cross section orthogonal to the longitudinal direction. Next, the assembly process of the strut tower bar20will be described. As illustrated inFIG.19, in the first coupling member30, after the openings31band31care aligned with the opening32band32c, respectively, and then the first coupling inner member32is fitted and fixed to the first coupling outer member31with an adhesive so as to form the closed cross section C1. A pair of adjusting members21are disposed on one end side and the other end side in the longitudinal direction of the first coupling inner member32, respectively. An opening21pmay be formed in the upper wall portion of each of the adjusting members21and openings21aare formed in the side wall portions of each of the adjusting members21. 
The pair of adjusting members21may be positioned so that the openings21acorrespond to the openings31aand31dof the first coupling outer member31and the openings21pcorresponds to the openings31pand31q. The second coupling outer member41may be fitted and fixed to the second coupling inner member42to form the closed cross section C2 in substantially the same procedure. The pair of the adjusting members22may disposed on a first end side and a second end side in the longitudinal direction of the second coupling inner member42, respectively. The opening22pmay be formed in the upper wall portion of each of the adjusting members22and openings22amay be formed in the side wall portions of each of the adjusting members22. The pair of adjusting members22may be positioned so that the openings22acorrespond to the openings41aof the second coupling outer member41and the openings22pcorrespond to the openings41p. The front fixing outer member51may cover the end portion of the first coupling outer member31from above and the front fixing inner member52may cover the adjusting members21and the first coupling inner member32from below, so that the front fixing outer member51and the front fixing inner member52form the closed cross section C3 and the open cross section. Since the front fixing member50surrounds the first coupling member30from the outer circumference, the front fixing member50has a larger cross-sectional area than the first coupling member30and has a larger moment of inertia of area and a larger polar moment of inertia of area than the first coupling member30. The opening51nmay coincide with the opening52nand the opening51mmay coincide with the opening52m. Since the openings51aand51pof the front fixing outer member51and the openings52aand52bof the front fixing inner member52are fixed to the first coupling member30via screws or the like, the openings are equivalent to fixing portions. The openings51a,51p, and52amay correspond to the open cross section region S1b of the first coupling member30and the opening52bmay correspond to the closed cross section region S1a of the first coupling member30. As illustrated inFIG.15, the strut tower bar20has a thickness corresponding to the two plates (the first coupling outer member31and the first coupling inner member32) in an intermediate portion in the longitudinal direction of first coupling member30, has a thickness corresponding to the three plates (the first coupling outer member31, the first coupling inner member32, and the front fixing inner member52) in front of this portion, and has a thickness corresponding to the four plates (the front fixing outer member51, the first coupling outer member31, the first coupling inner member32, and the front fixing inner member52) further in front of this portion. In addition, the strut tower bar20has a thickness corresponding to the four plates (the front fixing outer member51, the first coupling outer member31, the adjusting member21, and the front fixing inner member52) further in front of this portion, has a thickness corresponding to the three plates (the front fixing outer member51, the first coupling outer member31, and the front fixing inner member52) further in front of this portion, and has a thickness corresponding to the two plates (the front fixing outer member51and the front fixing inner member52) further in front of this portion. 
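Because the front fixing member 50 is described as having a larger cross-sectional area, a larger moment of inertia of area, and a larger polar moment of inertia of area than the first coupling member 30 it surrounds (and the same point recurs for the rear fixing member 60 below), a rough numerical illustration may help. The sketch below compares a thin-walled closed rectangular section with a slightly larger section that encloses it, using the hollow-rectangle formula for the second moment of area and Bredt's thin-walled formula for the torsion constant; every dimension is invented for illustration and none is taken from the description.

```python
# Minimal sketch: a member that encloses another from the outer circumference
# has a larger second moment of area (bending) and torsion constant (twist).
# All dimensions are illustrative assumptions only.

def hollow_rect_I(b, h, t):
    """Second moment of area of a closed rectangular tube about its horizontal axis."""
    return (b * h**3 - (b - 2 * t) * (h - 2 * t)**3) / 12.0

def bredt_J(b, h, t):
    """Torsion constant of a thin-walled closed rectangle (Bredt's formula)."""
    bm, hm = b - t, h - t                  # wall-midline dimensions
    area_m = bm * hm                       # area enclosed by the wall midline
    perimeter_m = 2.0 * (bm + hm)
    return 4.0 * area_m**2 * t / perimeter_m

inner = dict(b=40.0, h=30.0, t=2.0)        # mm, stands in for a coupling member
outer = dict(b=48.0, h=38.0, t=2.0)        # mm, slightly larger enclosing section

for name, s in (("inner", inner), ("outer", outer)):
    print(f"{name}: I = {hollow_rect_I(**s):,.0f} mm^4, J = {bredt_J(**s):,.0f} mm^4")
# The enclosing section shows larger I and J, i.e., higher bending and
# torsional rigidity for the same wall thickness.
```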
The rear fixing outer member61may cover the end portion of the first coupling outer member31from above and the rear fixing inner member62may cover the adjusting member25and the first coupling inner member32from below, so that the rear fixing outer member61and the rear fixing inner member62form the closed cross section C4 and the open cross section. Since the rear fixing member60surrounds the first coupling member30from the outer circumference, the rear fixing member60has a larger cross-sectional area than the first coupling member30and has a larger moment of inertia of area and a larger polar moment of inertia of area than the first coupling member30. Since the openings61dand61qof the rear fixing outer member61and the openings62cand62dof the rear fixing inner member62are fixed to the first coupling member30via screws or the like, the openings are equivalent to fixing portions. As illustrated inFIG.15, the strut tower bar20has a thickness corresponding to the two plates (the first coupling outer member31and the first coupling inner member32) in an intermediate portion in the longitudinal direction of first coupling member30, has a thickness corresponding to the three plates (the first coupling outer member31, the first coupling inner member32, and the rear fixing inner member62) further behind this portion, and has a thickness corresponding to the four plates (the rear fixing outer member61, the first coupling outer member31, the adjusting member25, and the rear fixing inner member62) further behind this portion. The strut tower bar20has a thickness corresponding to the three plates (the rear fixing outer member61, the first coupling outer member31, and the rear fixing inner member62) further behind this portion and has a thickness corresponding to the two plates (the rear fixing outer member61and the rear fixing inner member62) further behind this portion. In addition, the rear fixing outer member61covers the end portion of the second coupling outer member41from above and the rear fixing inner member62covers the adjusting member22and the second coupling inner member42from below, so that the rear fixing outer member61and the rear fixing inner member62form the closed cross section C4 and the open cross section. Since the rear fixing member60surrounds the second coupling member40from the outer circumference, the rear fixing member60has a larger cross-sectional area than the second coupling member40and has a larger moment of inertia of area and a larger polar moment of inertia of area than the second coupling member40. Since the openings61aand61qof the rear fixing outer member61and the openings62aand62bof the rear fixing inner member62are fixed to the second coupling member40via screws or the like, the openings are equivalent to fixing portions. As illustrated inFIG.17, the strut tower bar20has a thickness corresponding to the two plates (the second coupling outer member41and the second coupling inner member42) in an intermediate portion in the longitudinal direction of the second coupling member40, has a thickness corresponding to the three plates (the second coupling outer member41, the second coupling inner member42, and the rear fixing inner member62) further outside in the vehicle width direction, and has a thickness corresponding to the four plates (the rear fixing outer member61, the first coupling outer member31, the second coupling inner member42, and the rear fixing inner member62) further outside in the vehicle width direction. 
The strut tower bar20has a thickness corresponding to the four plates (the rear fixing outer member61, the first coupling outer member31, the adjusting member27, and the rear fixing inner member62) further outside in the vehicle width direction, has a thickness corresponding to the three plates (the rear fixing outer member61, the first coupling outer member31, and the rear fixing inner member62) further outside in the vehicle width direction, and has a thickness corresponding to the two plates (the rear fixing outer member61and the rear fixing inner member62) further outside in the vehicle width direction. Accordingly, the thickness of the strut tower bar20only changes within two plates when viewed in the longitudinal direction. In other words, since change in the member rigidity in the longitudinal direction is suppressed in the strut tower bar20, the occurrence of local displacement due to the vehicle body behavior mode can be suppressed. Next, the operation and effect of the front body structure of the vehicle V according to the embodiment will be described. In describing the operation and effect, the deformation behavior of the vehicle V in the membrane vibration mode has been analyzed by computer aided engineering (CAE). First, the basic concept of this analysis will be described. Three types of structural analysis models of carbon fiber reinforced plastic plates with a size of 25 mm×250 mm×2.0 mm were set, and the comparative verification of the bending rigidity and the vibration damping performance was performed. Model M1 is configured entirely of a carbon fiber reinforced plastic layer having an orientation of 0°. In addition, 87% of model M2 is a carbon fiber reinforced plastic layer having an orientation of 0°, and 13% of model M2 is glass fiber reinforced plastic layers having an orientation of 0° arranged on both surfaces of this carbon fiber reinforced plastic layer. As in the embodiment, 7% of model M3 is a carbon fiber reinforced plastic layer having an orientation of 90° arranged in the center, 80% of model M3 is carbon fiber reinforced plastic layers having an orientation of 0° arranged on both surfaces of the carbon fiber reinforced plastic layer having an orientation of 90°, and 13% of model M3 is glass fiber reinforced plastic layers arranged on the surface of each carbon fiber reinforced plastic layer having an orientation of 0° opposite the carbon fiber reinforced plastic layer having an orientation of 90°. FIG.20illustrates the verification results of bending rigidity. It should be noted that the higher the resonance frequency, the higher the bending rigidity. Model M2 has a lower bending rigidity than model M1 as illustrated inFIG.20, but measures against electrolytic corrosion are essential to ensure reliability at the time of implementation. Model M3 has a bending rigidity that is substantially the same as model M2 and can also ensure large load/fatigue durability. FIG.21illustrates the verification results of the vibration damping performance. It should be noted that the higher the modal damping ratio, the larger the amount of accumulated strain energy and the higher the vibration damping effect. As illustrated inFIG.21, model M2 has lower vibration damping performance than model M1, but measures against electrolytic corrosion are essential.
Model M3 has vibration damping performance that is substantially the same as model M2 and can also ensure large loads/fatigue durability. As described above, the compatibility between the practical utility and the suppressive effects of the body torsional mode and the membrane vibration mode was confirmed. Since the front body structure of the vehicle V is provided with the first coupling members30that couple the cowl member6to the strut towers4, the vertical displacement of the top portions4bof the strut towers4can be reduced and the body torsional mode can be suppressed by using the bending rigidity of the first coupling members30. Since each of the first coupling members30has the reinforced layer portion made of the fiber reinforced plastic in which fibers are impregnated with the synthetic resin material and the fibers of the reinforced layer portion are oriented so that the fibers extending in a longitudinal direction are more than the fibers extending in directions other than the longitudinal direction, the torsional displacement between the top portions of the strut towers4and the cowl member6can be converted to the torsional displacement of the first coupling members30and the membrane vibration mode is suppressed by increasing the vibration damping capacity of the vehicle V. The torsional displacement of the first coupling member30is converted to strain energy and kinetic energy and this strain energy is temporarily stored in the synthetic resin material of the first coupling members30as shear strain. After that, the stored strain energy (shear strain) is converted to kinetic energy again and part thereof is dissipated as thermal energy. The front body structure includes the pair of left and right first coupling members30that couple the cowl member6to the pair of strut towers4, the second coupling member40that couples the rear end portions of the pair of first coupling members30to each other, the pair of left and right front fixing members50that fix the front end portions of the pair of first coupling members30to the pair of strut towers4, and the pair of left and right rear fixing members that connect, at connection portions, the rear end portions of the pair of first coupling members30to the side end portions of the second coupling member40and fixes the connecting portions to the cowl member6, in which the pair of first coupling members30and the second coupling member40are substantially U-shaped in plan view. Accordingly, the pair of left and right first coupling members30can be configured as a single component and the ease of handling can be improved. In addition, the coupling members21and22can be formed long linearly and the anisotropy tendency of the coupling members21and22can be increased. Since the first coupling member30has the bent portion31x(32x) that projects upward and extends in the vehicle width direction as illustrated by the double chain line inFIG.22, the broken portion of the first coupling member30can be guided to the upper rear at the time of a front collision of the vehicle V, whereby the interference between the components (such as the fuel pipe) disposed around the engine and the broken portion can be avoided. It should be noted that, in the drawing, the solid line illustrates the state before the collision and the double chain line illustrates the state after the collision. 
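For reference, the CAE verification described above (FIGS.20and21) reads the resonance frequency as a proxy for bending rigidity and the modal damping ratio as a proxy for the share of stored strain energy dissipated per cycle. The relations and the numeric value below are a general structural-dynamics sketch added for clarity, written in standard textbook notation; they are not taken from the patent disclosure:

\[ f_n \propto \frac{1}{L^2}\sqrt{\frac{EI}{\rho A}}, \qquad \zeta \approx \frac{\Delta W}{4\pi W}, \]

where EI is the bending rigidity of a beam-like specimen, \rho A its mass per unit length, L its length, \Delta W the energy dissipated in one vibration cycle, and W the peak strain energy stored in that cycle. For the 25 mm×250 mm×2.0 mm test plates, the geometric second moment of area about the weak axis is the same for all three models, I = b t^3/12 = 25×2.0^3/12 ≈ 16.7 mm^4, so the frequency differences reported inFIG.20reflect the effective longitudinal modulus of each layup rather than the specimen geometry.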
Since the first coupling members30and the second coupling member40form the closed cross section C1 extending in the longitudinal direction by fitting the inner members21band22bwith the substantially U-shaped cross sections to the outer members21aand22awith the substantially U-shaped cross sections, the bending rigidity of the first coupling members30and the second coupling member40is increased by the closed cross section C1 formed in cooperation by the outer members and the inner members, and the torsional rigidity of the first coupling members30and the second coupling member40can be controlled by the open cross section formed by one of the outer members21aand22a, and the inner members21band22b. Since the closed cross section C1 is asymmetric with respect to the middle line C of the cross section orthogonal to the longitudinal direction in the cross section, even when a bending load is input to the first coupling members30, the bending load can be easily converted to torsional displacement of the first coupling members30. Since the ratio of the volume of the reinforced layer portion to the volumes of the first coupling members30and the second coupling member40is set to 80% or more, e.g., between 80% and 90%, i.e., the volume of the carbon fiber reinforced plastic layer having an orientation of 0° is set to 80% or more, the compatibility between the practical utility and the suppressive effects of the body torsional mode and the membrane vibration mode can be achieved. The suppressive effects of the body torsional mode and the membrane vibration mode may not be sufficient when the ratio of the volume of the reinforced layer portion is less than 80%, but the vehicle body behavior modes can be suppressed while measures against electrolytic corrosion of the first coupling member and the second coupling member are taken when the ratio of the volume of the reinforced layer portion is 80% or more.
Next, a modification in which the embodiment is partially changed will be described.
1) Although an example of the first coupling members30and the second coupling member40made of carbon fiber reinforced plastic has been described in the embodiment, the fibers extending in the longitudinal direction only need to be more than the fibers extending in directions other than the longitudinal direction and the material of the first coupling members30and the second coupling member40is not limited to carbon fiber. In addition, an example in which the ratio of the volume of the reinforced layer portion of the fibers extending in the longitudinal direction is 80% has been described, but the ratio of the volume of the reinforced layer portion may be more than 80%.
2) Although an example in which the cross section (closed cross section C2) orthogonal to the longitudinal direction of the second coupling member40is symmetric with respect to the middle line of the cross section and substantially trapezoidal has been described, the vibration damping performance can be further increased by forming the cross section asymmetrically with respect to the middle line of the cross section.
3) Although an example of a strut suspension has been described in the embodiment, at least a cylindrical tower member projecting upward only needs to be provided and the present invention may be applied to a vehicle having a swing arm or multilink suspension.
4) Other than the above, those skilled in the art can practice the present invention as an embodiment in which various changes are made to the embodiment described above or an embodiment in which individual embodiments are combined with each other without departing from the scope of the present invention, and the present invention also includes such changed embodiments.
DESCRIPTION OF REFERENCE SIGNS AND NUMERALS
1: dash panel
2: front side frame
4: strut tower
6: cowl member
20: strut tower bar
30: first coupling member
31: first coupling outer member
31x: bent portion
32: first coupling inner member
32x: bent portion
40: second coupling member
41: second coupling outer member
42: second coupling inner member
50: front fixing member
60: rear fixing member
V: vehicle
R: vehicle interior
E: engine room
L1: (orientation 90°) fiber reinforced plastic layer
L2: (orientation 0°) fiber reinforced plastic layer | 38,719
11858556 | DETAILED DESCRIPTION In the following, references to directions such as forward, reverse, left, right, up, and down are from the point of view of a driver in the cab of the vehicle described, and driving or looking in a forward direction. FIG.1shows a representation of an agricultural machine, in the form of an agricultural or farm tractor10. The tractor10comprises a user cab12to house an operator of the machine, an engine housing (identified generally at14and described further below), a chassis16on which the cab12and engine housing14are mounted, a front axle18carrying front vehicle wheels18A, and a rear axle20carrying rear vehicle wheels20A. Typically, the tractor will be provided with a rear three-point linkage system22, and optionally also a front linkage24, for the attachment of implements. The engine housing14includes a hood assembly14A,14B, also shown inFIG.2. The engine housing14shrouds an engine26, which may be an internal combustion unit, an electric drive, or a hybrid arrangement. The engine26is mounted on the vehicle chassis16, driving the rear axle20(and optionally also the front axle18) via a transmission and driveline. The engine housing hood assembly comprises a first hood panel14A and a second hood panel14B, which is pivotably attached to the first hood panel14A at pivoting mounts28L,28R along the side of the first hood panel14A at a distance D back from the front end of the first hood panel14A. The distance D may suitably be from about 20 to about 50% of the length of the first hood panel14A. The second hood panel14B is movable from a first (closed) position14B1, shown in solid outline inFIG.1, to a second (open) position, shown by dashed outline14B2inFIG.1, relative to the first hood panel14A. As shown inFIG.2andFIG.3, the first hood panel14A is a generally elongate and approximately rectangular (when viewed from above) body having a front end14AF and a rear or back end14AB joined by two (left and right) side portions14AL,14AR. Of course, aerodynamic and/or styling additions may be added to the general shaping of the first hood panel14A, but in general terms it remains a top plate mounted in fixed arrangement over the engine26. The first hood portion14A is mounted in a generally fixed orientation extending across the top of the engine26and a cooling package30(described below) forward of the engine, such that the first hood portion14A extends substantially the whole of the length of the engine housing from the cab12to the front end of the vehicle. The second hood panel14B is generally U-shaped when viewed from above, having a supporting framework supporting a forward radiator guard portion140F including an aperture closed by a dust-resistant mesh or grid140G. At the side of the forward portion140F in the U-shape, side arm portions140L,140R extend rearward towards the cab12up to the attached pivotal mounts28L,28R. As shown inFIG.2andFIG.3, the vertical depth of the second hood panel14B may reduce as the arm portions140L,140R approach the pivotal connections28L,28R. With this arrangement, the second hood panel14B effectively wraps around the front end of the first hood panel14A and extends along the two sides thereof to the left and right pivotal attachments28L,28R. The pivotal attachments28L,28R have a common pivot axis represented by dashed line28X inFIG.3, which common pivot axis traverses the first hood panel14A in spaced-apart relation (i.e., by distance D shown inFIG.2) to the first hood panel front end14AF. 
The hood assembly includes a first air seal (sealing portion)38disposed between the upper surface of the first hood portion14A and the underside of the second hood panel14B in the vicinity of the front end14AF of the first hood panel14A. The sealing portion38may be in the form of a rubber or plastic grommet or other compressible material configured to prevent ingress of air when the second hood panel is in the first (closed) position relative to the first hood panel14A. The sealing portion may be attached to either of the hood portions, or both of them may comprise sealing strips or bodies to cooperate with the other when the hood assembly is closed. With reference toFIG.3, the second hood portion14B may include a flange48that extends laterally to overlap a peripheral portion at the front end14AF of the first hood portion14A when in the closed position, with the sealing portion38between the flange48and first hood portion14A surfaces as a rubber (or other material) seal attached to either one or each of the first and second hood portions14A,14B. Referring again to the sectional view ofFIG.2, the agricultural vehicle further comprises a bulkhead32mounted to the chassis16between the cab12and the engine26. The rear edge14AB of the first hood panel14A is attached to an upper edge of the bulkhead32, as shown in further detail inFIG.4. The bulkhead32, which conventionally links the engine compartment and cab (although they may be physically separated), serves to reduce heat and/or noise transmission from the engine26to the cab12. The bulkhead32as disclosed may be simpler and lighter than bulkheads for conventional tractors having full-length hinged hoods, for which the full weight of the hood is required to be supported on a hinged mount on the bulkhead. Forward of the engine26and mounted on the chassis16is a vehicle cooling package indicated generally at40, which package includes a fan42and one or more radiators43arranged for the cooling of fluids on the vehicle (e.g., radiator coolant, hydraulic system fluid, brake fluid cooling, and so forth) by airflow driven by the fan42in known fashion. As indicated at arrow A inFIG.2, movement of the second hood panel14B from the first (closed) position to the second (open) position provides external user access to the cooling package40, for example for cleaning and general maintenance purposes. Suitably, the fan42includes a fan shroud44, and a further (second) sealing portion46in the form of a rubber or silicone bead, for example, is disposed between the second hood panel14B and the fan shroud44when the second hood portion14B is in the first position14B1. As noted above, the second hood panel14B is movable from a first (closed) position to a second (open) position, and may be formed as a relatively light weight construction. This movement enables access to the engine bay. The second hood panel14B need not include structural bracing to support the remainder of the hood assembly. As shown inFIG.3, a portion50of the first hood panel14A may have an aperture covered by a dust screen, which aperture is adjacent to the front end14AF and forward of the line28X where the second hood panel14B pivotably attaches at28L,28R. The first hood panel14A suitably includes one or more air guidance channels extending rearward, either from the aperture portion50or from a further point, for example downstream of the cooling fan arrangement40, to carry or direct cooling air to components within the engine housing14.
In like manner to aperture50, the second hood panel14B has one or more apertures52covered by respective dust screens. In addition to providing a screened inlet to the vehicle cooling package40, as previously indicated, having a portion or portions of the second hood panel14B formed from mesh or other screening material helps to limit the weight and make it still easier for a user to raise the second hood panel14B from the first (closed) to the second (open) position without undue effort and without additional raising mechanisms. With reference toFIGS.1,3, and4, in order to provide a more complete enclosure, the hood assembly may have one or more removable side panels60attached to each of the side portions14AL,14AR of the first hood panel14A. Such side panels60may be attached by bolts, screws, releasable clamps, or any other attachment mechanism, and their purpose is to protect the engine bay from dust and dirt ingress, while still permitting access for maintenance purposes. Because these panels60are attached to the first (fixed) hood portion14A, they do not add to the weight required to be lifted to access the cooling package40through opening the second hood panel14B. Thus, a hood assembly for an agricultural vehicle such as a farm tractor10may include a first hood panel14A and a second hood panel14B pivotably attached to the first hood panel14A and movable from a first closed position to a second open position relative to the first hood panel14A. The first hood panel14A is an elongate body mounted above an engine26of the vehicle and having a front end14AF and a rear end14AB joined by two side portions14AL,14AR. The second hood panel14B wraps around the front end14AF of the first hood panel14A and extends along the two sides thereof to respective pivotal attachments28L,28R. The attachments have a common pivot axis28X which traverses the first hood panel14A in spaced-apart relation to the first hood panel front end14AF. Suitably, opening the second hood panel14B provides maintenance access to a cooling package40of the vehicle without opening or disassembling the remainder of the hood assembly. All references cited herein are incorporated herein in their entireties. If there is a conflict between definitions herein and in an incorporated reference, the definition herein shall control. While the present disclosure has been described herein with respect to certain illustrated embodiments, those of ordinary skill in the art will recognize and appreciate that it is not so limited. Rather, many additions, deletions, and modifications to the illustrated embodiments may be made without departing from the scope of the disclosure as hereinafter claimed, including legal equivalents thereof. In addition, features from one embodiment may be combined with features of another embodiment while still being encompassed within the scope as contemplated by the inventors. Further, embodiments of the disclosure have utility with different and various machine types and configurations. It is also to be understood that the components disclosed here can consist of one part or multiple parts. When two parts are connected fixedly to each other, this can mean that the two parts are, for example, welded together, connected in any known way, or created via cast molding as one piece.
11858557 | DETAILED DESCRIPTION FIG.1shows a detail of a support structure1for an instrument panel support of a motor vehicle. In the detail shown, the support structure1comprises two support parts2,3arranged offset to one another. The subject matter of this embodiment concerns the connection of these two support parts2,3, which is why only a detail of the entire support structure is shown for this embodiment and also for the other embodiments in which the support parts are arranged offset to one another. In the embodiment ofFIGS.1and2, the support part3is manufactured from steel, while the support part2is produced from an aluminum alloy. The two support parts2,3are offset in the direction of their longitudinal extension at least in the area of their connection to one another. The offset of the two support parts2,3is transverse to their longitudinal extension. The two support parts2,3are connected by a metal connector4. The metal connector4in the illustrated embodiment consists of a base part5, produced as a stamped and bent part from a steel plate, and a bracket part6. The U-shaped bracket part6is also produced from a steel plate. In the embodiment shown inFIG.1, the end sections of the support parts2,3, which are connected to one another via the metal connector4, are arranged with no or only a slight overlap with one another. On its side facing toward the support part2, the base part5has a curved support part connection surface7. The curvature of the support part connection surface7essentially corresponds to the curvature of the outer lateral surface of the support part2, in such a way that in the arrangement shown inFIG.1a small gap remains to accommodate adhesive. In the embodiment shown, the base part5produced from a steel plate is designed to be closed in its middle section, as can be seen from the closing plate9angled from the leg8. This part can also be designed to be open on one side. The rear side of the base part5of the metal connector4, which cannot be seen inFIGS.1and2, has the same design. The U-shaped bracket part6, whose two legs10,11abut on the outside of the opposite outer sides of the legs8,12of the base part5, is welded to the base part5with the ends of its legs8,12. The bracket part6forms, with the support part connection surface7, a support part enclosure. The end section of the support part2engages in the support part enclosure. A special feature is that the outer lateral surface of the support part2is held therein with the interposition of an adhesive layer13that can be cured in an accelerated manner when heat is supplied. The support structure1is mounted in the detail shown inFIGS.1and2by placing the end of the support part2, which is peripherally coated with adhesive, on the support part connection surface7of the base part5, and then the bracket part6is placed on the support part2over the side opposite to the support part connection surface7to complete the support part enclosure. The two legs10,11of the bracket part6then abut the outside of the legs8,12and are welded thereto. This welding process takes place immediately after the parts are positioned relative to one another, while the adhesive has not yet cured. The heat supply caused by the welding promotes rapid curing of the adhesive. In addition, a certain warping is induced by the welding process, due to which the end section of the support part2protruding into the support part enclosure is additionally tensioned therein.
The result is a friction-locked and materially bonded connection between the support part2and the metal connector4, which withstands high loads. Since, in the illustrated embodiment, the metal connector4is made of the same material as the second support part3, these two parts3,5are connected to one another by a welded bond. The weld seam is carried out along the lower ends of the legs8,12in the transition to the lateral surface of the support part3. FIG.3shows a further support part14which, like the support part2of the embodiment inFIGS.1and2, is produced from an aluminum alloy. The metal connector15is connected to the support part14in the same way as described for the embodiment ofFIGS.1and2. The embodiment ofFIG.3differs from that ofFIGS.1and2in that the support part14has an opening17in its end face16, into which a tab18of the closing plate19of the base part15engages. The tab18provides an end stop for the support part14as well as a twist-lock device, so that the support part14cannot be rotated around its longitudinal axis in relation to the metal connector15during the curing process. In the embodiment shown inFIG.4, the metal connector20for connecting two support parts21,22as part of a support structure for an instrument panel carrier is a section of an aluminum extruded profile. In this embodiment, one support part21, which engages with its end section in the support part enclosure, is a steel component, while the other support part22is produced from an aluminum alloy. The bracket part23of this embodiment is also manufactured from an aluminum alloy. Therefore, the bracket part23can be welded to the base part24of the metal connector20, and the base part24can be welded to the second support part22. FIG.5shows still another embodiment, in which the base part25of the metal connector26is produced from a steel plate in the manner of a shell. In this embodiment, the bracket part27is also produced from a steel plate. The support part28engaging with its end section in the support part enclosure is manufactured from an aluminum alloy, while the other support part29is made of the same material as the base part25of the metal connector26. In this embodiment, the support part29is welded to a corresponding contact surface of the base part25of the metal connector26at the end face. FIG.6shows an embodiment of a support structure as explained forFIG.5, but with an axial lock with respect to the support part30engaging in the support part receptacle. As better seen from the exploded illustration inFIG.7, the support part30has a bracket part leg opening31into which a leg32of the bracket part33engages so that the lower section of the leg32and also the parallel leg in turn come into contact on the outer wall of the base part in order to be joined thereto. The second support part29is not shown in the embodiment ofFIGS.6and7. At the same time, this measure provides a twist lock. In a further embodiment, not shown in the figures, it is provided that a cutout is introduced into the apex side of the support part facing away from the support part connection surface in its section with which it engages in the support part enclosure, into which an embossing introduced in the apex area of a bracket part engages. This measure also provides a form fit in the longitudinal direction and a twist lock. Still another embodiment of the support structure is shown inFIG.8. 
In this embodiment, the support part34is made of a different material than the metal connector35, which in turn is made of the same material as the second support part36. The embodiment ofFIG.8makes it clear that the metal connector35having its base part37and its bracket part38can also be designed to connect a support part34which has a cross-sectional geometry that differs from the round shape. In this embodiment, the support part connection surface39of the base part37is designed to be complementary to the side of the support part34facing towards this surface, namely straight. The U-shaped bracket part38is designed to correspond to the rest of the outline geometry of the support part34. Like the base part35of the previous embodiments, the base part37is designed as a half-shell, wherein its open side is visible in the perspective ofFIG.8. Like the support part29of the embodiment shown inFIG.5, the support part36is connected in a materially bonded manner to the base part37with its end face. FIG.9shows a refinement of the embodiment ofFIG.5. Therefore, the statements made regarding the embodiment ofFIG.5apply similarly to the embodiment ofFIG.9. The embodiment ofFIG.9differs from that ofFIG.5in that the base part40of the metal connector41is part of a component of the support structure for an instrument panel support, which has an additional functionality. In this embodiment, the base part40is the upper section of a floor support42, using which the support structure of the instrument panel support is fastened to the floor of a motor vehicle, for example on the tunnel. FIG.10again shows a detail from a support structure for an instrument panel support of a motor vehicle. The metal connector43of the support structure is constructed in two shells. The metal connector43connects the two support parts44,45, wherein the support part45is made of the same material as the metal connector43and the support part44is made of a different material. The two shells of the metal connector43are identified inFIG.10by the reference numerals46,46.1. The connector shell46is described hereinafter. The same explanations also apply to the connector shell46.1, which is arranged mirror-symmetrically to the joining plane to the connector shell46(seeFIG.11). The connector shell46is a component formed from a steel plate. The open side of the connector shell46faces away from the connector shell46.1. The connector shell46is shaped to provide an approximately U-shaped support part connection surface47. The support part connection surface47transitions into bracket part connection surfaces48,49which are angled in relation thereto. The bracket part connection surfaces48,49therefore face in the same direction as the support part connection surface47. Before the two connector shells46,46.1are joined, a small gap is left between the bracket part connection surfaces48,49shown abutting inFIG.11, which is closed by the welding process. In this way, a special compression of the connector shells46,46.1on the lateral surface of the support part44is achieved. In this embodiment, it is provided that measures are taken in order to nevertheless leave an adhesive gap. In the embodiment shown, an adhesive is used which contains glass beads having a diameter which corresponds to the dimension of the adhesive gap provided. The bracket part B in this metal connector43is provided by the upper section of the connector shell46.1inFIG.10. This completes the support part enclosure, as shown in the side view ofFIG.11. 
The connector shell46.1also has, in addition to its upper section which represents the bracket part B to the support part connection surface47of the connector shell46, a base part section50formed thereon that together with the base part section51of the connector shell46forms the base part of the metal connector43. The metal connector43is designed as forked with respect to its base part sections50,51so that, as in the embodiment ofFIG.1, the legs created in this way can be welded to the lateral surface of the support part45. The statements regarding the carrier shell46having its support part connection surface47apply similarly to the shell46.1, so that the section having the support part connection surface47then represents the bracket part B′ for the relevant section of the connector shell46.1. As in the other embodiments, the support part44is adhesively bonded to the support part connection surface47and the inside of the bracket part B, B′. The adhesive layer is identified by reference numeral52inFIG.11. FIGS.12and13show a refinement of a metal connector53formed from two half-shells54,54.1. The metal connector53is constructed like the metal connector43described forFIGS.10and11. Each of the two connector shells54,54.1of the metal connector53is part of a component having further component parts. Thus, the connector shell54is part of a floor support55, while the connector shell54.1is part of a bracket56. FIG.14shows another support structure57in detail. The end section of a support part58is shown, to which a metal connector59is connected. The metal connector59is constructed in principle like that described forFIG.5, so that the relevant statements apply similarly to the metal connector59. In the case of the support structure57, the support part58is deformed with its end section engaging in the support part enclosure in order to provide a twist lock between the two parts during the curing of the adhesive. An indentation60is introduced into the lateral surface of this section of the support part58so that the leg61of the bracket part62located on this side contacts the support part58at two points spaced apart in the direction of the longitudinal extension of the leg61. These are the marginal limits of the indentation60. The space created by the indentation60can also be used, for example, as a cable feedthrough. FIG.15shows a further support structure63. The support structure63comprises a continuous support part64. In the embodiment shown, two metal connectors65are connected thereto at a distance from one another in the longitudinal extension of the support part64. As in the embodiment ofFIGS.12and13, the metal connectors65are part of a structure, specifically supports66for providing a tunnel support in the embodiment shown inFIG.15. The two supports66are connected to one another by a cross strut67. The metal connectors65correspond, with respect to their section for the connection thereof to the support part64, to the metal connector26of the embodiment shown inFIG.5. The relevant statements therefore also apply to the support structure65. A design as shown in principle for the embodiment inFIG.15can also be used to connect two support parts to one another, for example because these support parts are made of different materials and are arranged aligned with one another. In such a case, the two metal connectors are always connected to one another by a cross strut. In such a design, the cross strut can be part of the two metal connectors. 
It is also conceivable that with such a design a holder or support is connected to a double metal connector conceived in this way. In the embodiments shown in the figures, although this is not shown in detail, care is taken to ensure that a sufficient adhesive gap remains between the lateral surface of the section of the respective support part that engages in the support part enclosure and the surfaces of the support part enclosure. This can be achieved, for example, by the adhesive containing glass beads having a diameter corresponding to the gap width. These ensure that the adhesive gap is maintained so that it remains constant during curing and the desired tension, induced by the joint bond between the bracket part and the base part produced by the supply of heat, is achieved. Additionally or alternatively to such a measure, bracket parts can be used which have multiple embossings directed in the direction of the lateral surface of the respective support part. Such a bracket part68is shown by way of example inFIG.16, namely in a perspective view and an enlarged detail of a section in the area of the apex. In the case of the bracket part68, the embossings are provided by quasi-punctiform pressing in of the outside of the bracket part68, so that small protruding spacer knobs69arise on the inside of the bracket part68, as is clear from the detail view. The extent to which these protrude from the inside of the bracket part corresponds to the gap dimension. If a galvanic isolation is provided between the metal connector and the support part, an adhesive having electrically non-conductive particles having a diameter corresponding to the gap dimension, such as glass beads, is preferable. The invention has been described on the basis of numerous exemplary embodiments. The relevant design options for implementing the teaching of the claims are not restrictive. Without departing from the scope of the claims, numerous further design options result for a person skilled in the art, without having to describe or show them in greater detail in the context of this disclosure.
List of reference numerals
1 support structure
2 support part
3 support part
4 metal connector
5 base part
6 bracket part
7 support part connection surface
8 leg
9 closing plate
10 leg
11 leg
12 leg
13 adhesive layer
14 support part
15 metal connector
16 end face
17 opening
18 tab, end face stop
19 closing plate
20 metal connector
21 support part
22 support part
23 bracket part
24 base part
25 base part
26 metal connector
27 bracket part
28 support part
29 support part
30 support part
31 bracket part leg opening
32 leg
33 bracket part
34 support part
35 metal connector
36 support part
37 base part
38 bracket part
39 support part connection surface
40 base part
41 metal connector
42 floor support
43 metal connector
44 support part
45 support part
46, 46.1 connector shell
47 support part connection surface
48 bracket part connection surface
49 bracket part connection surface
50 base part section
51 base part section
52 adhesive layer
53 metal connector
54, 54.1 connector shell
55 floor support
56 bracket
57 support structure
58 support part
59 metal connector
60 indentation
61 leg
62 bracket part
63 support structure
64 support part
65 metal connector
66 support
67 cross strut
68 bracket part
69 spacer knob
B, B′ bracket part | 17,013
11858558 | In the drawings: Vehicle1, Body frame10, Transverse beam20, Transverse sliding groove21, Longitudinal beam30, Longitudinal sliding groove31, Joint40, Transverse rivet41, Longitudinal rivet42, All-cover joint100, Side top beam101, Side vertical beam102, Plug beam103, First sub-joint110, Second sub-joint120, Transverse beam connection groove130, First transverse beam connecting plate111, First longitudinal beam connecting plate112, First side plate113, Second transverse beam connecting plate121, Second longitudinal beam connecting plate122, Second side plate123, Countersunk screw152, First transverse sliding groove161, Second transverse sliding groove162, First longitudinal sliding groove163, Second longitudinal sliding groove164, First transverse rivet171, Second transverse rivet172, First longitudinal rivet173, Second longitudinal rivet174, First transverse screw rod182, First transverse collar183, Second transverse screw rod185, Second transverse collar186, First longitudinal screw rod192, First longitudinal collar193, Second longitudinal screw rod195, Second longitudinal collar196, First transverse gasket160, Second transverse gasket170, First longitudinal gasket180, Second longitudinal gasket190, Semi-cover joint200, Side waist beam201, Transverse beam connecting plate210, Longitudinal beam connecting plate220, Third transverse rivet231, Fourth transverse rivet232, Third longitudinal rivet233, Fourth longitudinal rivet234, Third transverse sliding groove261, Fourth transverse sliding groove262, Third longitudinal sliding groove263, Fourth longitudinal sliding groove264, Third transverse screw rod hole281, Third transverse screw rod282, Third transverse collar283, Fourth transverse screw rod hole284, Fourth transverse screw rod285, Fourth transverse collar286, Third longitudinal screw rod hole291, Third longitudinal screw rod292, Third longitudinal collar293, Fourth longitudinal screw rod hole294, Fourth longitudinal screw rod295, Fourth longitudinal collar296, Third transverse gasket260, Fourth transverse gasket270, Third longitudinal gasket280, Fourth longitudinal gasket290, Side skin310, Integrated transverse beam320, Top longitudinal beam410, Support profile420, Battery pack sliding groove430, Slot431, Battery pack mounting member440, Bolt441, Battery pack450, Top transverse beam510, Top edge beam520, Outer connecting edge521, Inner connecting edge523, Rack610, Rack edge beam611, Corbel mounting plate620, Mounting surface621, Adjustment gap622, Corbel connecting plate623, Corbel side plate624, Side frame630, Door pillar710, Rack doorframe beam720. DETAILED DESCRIPTION Embodiments of the disclosure are described in detail below, and examples of the embodiments are shown in the accompanying drawings. Wherein the same or similar reference numerals indicate the same or similar elements or elements having the same or a similar function throughout. The embodiments described below with reference to the accompanying drawings are exemplary, and are intended to explain the disclosure and cannot be construed as a limitation to the disclosure. 
In the description of the disclosure, it should be understood that orientation or position relationships indicated by the terms such as “center”, “longitudinal”, “transverse”, “length”, “width”, “thickness”, “on”, “below”, “front”, “back”, “left”, “right”, “vertical”, “horizontal”, “top”, “bottom”, “inside”, “outside”, “clockwise”, “anticlockwise”, “axial direction”, “radial direction”, and “circumferential direction” are based on orientation or position relationships shown in the accompanying drawings, and are used only for ease and brevity of illustration and description, rather than indicating or implying that the mentioned apparatus or component must have a particular orientation or must be constructed and operated in a particular orientation. Therefore, such terms should not be construed as limiting of the disclosure. In the description of the disclosure, “first feature” and “second feature” may include one feature or a plurality of features. In addition, “a plurality of” refers to two or more than two, and “several” refers to one or more. A body frame10according to the embodiments of the disclosure is described below with reference to the accompanying drawings. As shown inFIG.1toFIG.7, the body frame10according to this embodiment of the disclosure includes a transverse beam20, a longitudinal beam30, and a joint40. As shown inFIG.3, the transverse beam20may extend in a length direction of the body frame10, the longitudinal beam30may extend in a height direction of the body frame10, and the longitudinal beam30is connected with the transverse beam20. The transverse beam20is provided with a transverse sliding groove21extending in a length direction of the transverse beam, and the longitudinal beam30is provided with a longitudinal sliding groove31extending in a length direction of the longitudinal beam. The joint40is disposed at a junction of the transverse beam20and the longitudinal beam30. The joint40is mounted to the transverse beam20by a transverse rivet41and to the longitudinal beam30by a longitudinal rivet42. The transverse rivet41is slidably mated with the transverse sliding groove21, and the longitudinal rivet42is slidably mated with the longitudinal sliding groove31. According to the body frame10in this embodiment of the disclosure, the joint40is mounted to the transverse beam20by the transverse rivet41and to the longitudinal beam30by the longitudinal rivet42. Therefore, using the rivet instead of the bolt connection in related arts avoids loosening caused by an insufficient tightening torque, and effectively improves the structural strength. In addition, the transverse beam20is provided with the transverse sliding groove21, the longitudinal beam30is provided with the longitudinal sliding groove31, the transverse rivet41is slidably mated with the transverse sliding groove21, and the longitudinal rivet42is slidably mated with the longitudinal sliding groove31. By means of the mated connection between the rivets and the sliding grooves, the connection strength can be improved, and the assembly is more convenient, improving the assembly efficiency. Therefore, the body frame10according to this embodiment of the disclosure has a reliable structure without loosening, and can be conveniently assembled. 
In some specific embodiments of the disclosure, as shown inFIG.1toFIG.5, the transverse beam20includes a side top beam101, the longitudinal beam30includes a side vertical beam102, and the joint40includes an all-cover joint100disposed at a junction of the side top beam101and the side vertical beam102. As shown inFIG.4, the all-cover joint100includes a first sub-joint110and a second sub joint120. A transverse beam connection groove130and a longitudinal beam connection groove (not shown in the figure) are defined by engaging the first sub-joint110and the second sub joint120. The transverse beam connection groove130is adapted to accommodate the side top beam101, and the longitudinal beam connection groove is adapted to accommodate the side vertical beam102. For example, the first sub-joint110and the second sub-joint120are arranged along a width direction of the body frame10. The first sub-joint110and the second sub-joint120are respectively connected with the side top beam101and the side vertical beam102, and the first sub-joint110and the second sub-joint120are in mirror symmetry with respect to a central plane of the body frame10. The transverse beam connection groove130and the longitudinal beam connection groove are in communication with each other and may be disposed vertically. The transverse beam connection groove130extends in a length direction of the body frame10, and has an opening facing the side top beam101. The longitudinal beam connection groove extends in a height direction of the body frame10, and has an opening facing the side vertical beam102. The first sub-joint110and the second sub-joint120are disposed separately. Therefore, the all-cover joint100can have a relatively simple structure and can be conveniently mounted, and die sinking of the single casting (the first sub-joint110and the second sub-joint120) can be easily performed, improving the production efficiency. In addition, the transverse beam connection groove130and the longitudinal beam connection groove are defined by engaging the first sub-joint110and the second sub-joint120. Therefore, the transverse beam connection groove130can be used to accommodate the side top beam101, and the longitudinal beam connection groove can be used to accommodate the side vertical beam102. In this way, the side top beam101and the side vertical beam102can be positioned by the all-cover joint100along a plurality of directions, improving the fatigue endurance, thereby improving the connection stiffness, and reducing the deformation. In addition, since the connection stiffness is ensured, and the all-cover joint100is a separated structure, the first sub joint110and the second sub joint120can be mounted separately and finally engaged and sandwiched at a junction of the side top beam101and the side vertical beam102during assembling. The assembly manner has lower requirements for the vehicle assembly accuracy, is more convenient to operate, and can greatly improve the assembly efficiency. According to some specific embodiments of the disclosure, as shown inFIG.2toFIG.5, the first sub joint110includes a first transverse beam connecting plate111, a first longitudinal beam connecting plate112, and a first side plate113. The first transverse beam connecting plate111is connected with the first longitudinal beam connecting plate112, and the first side plate113is connected with a side of the first transverse beam connecting plate111and the first longitudinal beam connecting plate112away from the second sub-joint120. 
The second sub joint120includes a second transverse beam connecting plate121, a second longitudinal beam connecting plate122, and a second side plate123. The second transverse beam connecting plate121is connected with the second longitudinal beam connecting plate122, and the second side plate123is connected with a side of the second transverse beam connecting plate121and the second longitudinal beam connecting plate122away from the first sub-joint110. The first transverse beam connecting plate111and the second transverse beam connecting plate121are engaged, the transverse beam connection groove130is defined by the first transverse beam connecting plate and the second transverse beam connecting plate and the first side plate113and the second side plate123. The first longitudinal beam connecting plate112and the second longitudinal beam connecting plate122are engaged, the longitudinal beam connection groove is defined by the first longitudinal beam connecting plate and the second longitudinal beam connecting plate and the first side plate113and the second side plate123. The side top beam101and the side vertical beam102are sandwiched between the first side plate113and the second side plate123. For example, the first transverse beam connecting plate111, the first longitudinal beam connecting plate112, and the first side plate113may be integrally formed. The second transverse beam connecting plate121, the second longitudinal beam connecting plate122, and the second side plate123may be integrally formed. The first sub joint110and the second sub joint120may be made of an aluminum alloy material. The first transverse beam connecting plate111and the first longitudinal beam connecting plate112may be perpendicular to the first side plate113, the second transverse beam connecting plate121and the second longitudinal beam connecting plate122may be perpendicular to the second side plate123, and the first side plate113may be parallel to the second side plate123. In this way, the all-cover joint100is connected with inner and outer surfaces and lower surfaces of the side top beam101and inner and outer surfaces and one side surface of the side vertical beam102, which is stable and reliable, so that the stiffness and strength are improved, and the connection strength of the body frame10is improved. The first sub-joint110and the second sub-joint120are made of an aluminum alloy material, so that the overall weight of a vehicle can be reduced. According to some specific embodiments of the disclosure, as shown inFIG.2andFIG.4, the first side plate113and the second side plate123are provided with countersunk screw holes. The first side plate113is mounted to the side top beam101and the side vertical beam102by countersunk screws152mated with the countersunk head screw holes of the first side plate, and the second side plate123is mounted to the side top beam101and the side vertical beam102by countersunk screws152mated with the countersunk head screw holes of the second side plate. Specifically, the countersunk screw holes are configured with slots, outer surfaces of the countersunk screws152on the first side plate113are flush with an outer surface of the first side plate113, and outer surfaces of the countersunk screws152on the second side plate123are flush with an outer surface of the second side plate123. 
For example, a side surface of the first side plate113facing the second side plate123and a side surface of the second side plate123facing the first side plate113are respectively connected with two opposite sides of the side top beam101and the side vertical beam102along the width direction of the body frame10. For example, the side surface of the first side plate113facing the second side plate123is connected with a side of the side top beam101and the side vertical beam102facing an outer side of the vehicle, and the side surface of the second side plate123facing the first side plate113is connected with a side of the side top beam101and the side vertical beam102facing an inner side of the vehicle. In this way, the connection area of the all-cover joint100is increased, and the connection strength is improved. According to some embodiments of the disclosure, as shown inFIG.2toFIG.5, the transverse sliding groove21includes a first transverse sliding groove161and a second transverse sliding groove162, and the longitudinal sliding groove31is configured with a first longitudinal sliding groove163and a second longitudinal sliding groove164. The first transverse sliding groove161and the second transverse sliding groove162extend in the length direction of the body frame10, and the first longitudinal sliding groove163and the second longitudinal sliding groove164extend in the height direction of the body frame10. The transverse rivet41includes a first transverse rivet171and a second transverse rivet172, and the longitudinal rivet42includes a first longitudinal rivet173and a second longitudinal rivet174. The first transverse beam connecting plate111is provided with a first transverse screw rod hole. The first transverse rivet171includes a first transverse screw rod182mated with the first transverse screw rod hole and a first transverse collar183riveted on the first transverse screw rod182. The first transverse screw rod182is slidably mated with the first transverse sliding groove161. The second transverse beam connecting plate121is provided with a second transverse screw rod hole. The second transverse rivet172includes a second transverse screw rod185mated with the second transverse screw rod hole and a second transverse collar186riveted on the second transverse screw rod185. The second transverse screw rod185is slidably mated with the second transverse sliding groove162. The first longitudinal beam connecting plate112is provided with a first longitudinal screw rod hole, and the first longitudinal rivet173includes a first longitudinal screw rod192mated with the first longitudinal screw rod hole and a first longitudinal collar193riveted on the first longitudinal screw rod192. The first longitudinal screw rod192is slidably mated with the first longitudinal sliding groove163. The second longitudinal beam connecting plate122is provided with a second longitudinal screw rod hole, and the second longitudinal rivet174includes a second longitudinal screw rod195mated with the second longitudinal screw rod hole and a second longitudinal collar196riveted on the second longitudinal screw rod195. The second longitudinal screw rod195is slidably mated with the second longitudinal sliding groove164. In this way, by means of the connection by using double sliding grooves, not only the connection strength can be improved, but also the deformation and the vibration of the vehicle can be reduced, thereby improving the durability of the vehicle. 
Moreover, it is convenient to adjust the position of the all-cover joint100during the mounting, and the assembly operation is also more convenient. Further, as shown inFIG.3andFIG.4, a first transverse gasket160between the first transverse screw rod182and the first transverse beam connecting plate111is sleeved on the first transverse screw rod182. The first transverse gasket160is slidably mated with the first transverse sliding groove161. A second transverse gasket170between the second transverse screw rod185and the second transverse beam connecting plate121is sleeved on the second transverse screw rod185. The second transverse gasket170is slidably mated with the second transverse sliding groove162. The first transverse gasket160and the second transverse gasket170may be made of an aluminum alloy material. A first longitudinal gasket180between the first longitudinal screw rod192and the first longitudinal beam connecting plate112is sleeved on the first longitudinal screw rod192. The first longitudinal gasket180is slidably mated with the first longitudinal sliding groove163. A second longitudinal gasket190between the second longitudinal screw rod195and the second longitudinal beam connecting plate122is sleeved on the second longitudinal screw rod195. The second longitudinal gasket190is slidably mated with the second longitudinal sliding groove164. The first longitudinal gasket180and the second longitudinal gasket190may be made of an aluminum alloy material. For example, the first transverse gasket160, the second transverse gasket170, the first longitudinal gasket180, and the second longitudinal gasket190each may be an aluminum alloy sheet. The first transverse gasket, the second transverse gasket, the first longitudinal gasket, and the second longitudinal gasket are provided with a through hole on a center line for mounting the first transverse screw rod182, the second transverse screw rod185, the first longitudinal screw rod192, and the second longitudinal screw rod195. Sizes of the through holes may be increased or decreased according to actual conditions. After the mounting is finished, the first transverse gasket160, the second transverse gasket170, the first longitudinal gasket180, and the second longitudinal gasket190are respectively attached to bottoms of the first transverse sliding groove161, the second transverse sliding groove162, the first longitudinal sliding groove163, and the second longitudinal sliding groove164. The first transverse gasket160, the second transverse gasket170, the first longitudinal gasket180, and the second longitudinal gasket190are made of the aluminum alloy material, so that the weight of the vehicle can be effectively reduced. In addition, when the side top beam101and the side vertical beam102are stressed, loads are applied to contact surfaces of the first transverse gasket160, the second transverse gasket170, the first longitudinal gasket180, and the second longitudinal gasket190with the first transverse sliding groove161, the second transverse sliding groove162, the first longitudinal sliding groove163, and the second longitudinal sliding groove164. Therefore, the stress concentration of the first transverse sliding groove161, the second transverse sliding groove162, the first longitudinal sliding groove163, and the second longitudinal sliding groove164can be reduced, so that the requirements for the strength and the deformation of the body frame can be satisfied. 
In some specific examples of the disclosure, as shown inFIG.5, the side top beam101is provided with a plug beam103. The plug beam103is inserted into the side vertical beam102, and the plug beam103is fixed to the side vertical beam102by riveting. Therefore, the high stress requirements for a door corner position can be satisfied. Those skilled in the art can understand that, some side vertical beams102are located at a door structure and used as door pillars. Sizes of the all-cover joints100between the side vertical beams102and the side top beams101may be relatively large to satisfy the stress requirements. Some side vertical beams102are located at a window structure and used as window pillars. Sizes of the all-cover joints100between the side vertical beams102and the side top beam101may be relatively small to dispose the all-cover joints100properly according to different stress requirements. In some specific embodiments of the disclosure, as shown inFIG.1,FIG.6, andFIG.7, the transverse beam20includes a side waist beam201, the longitudinal beam30includes a side vertical beam102, and the joint40includes a semi-cover joint200disposed at a junction of the side waist beam201and the side vertical beam102. The semi-cover joint200includes a transverse beam connecting plate210and a longitudinal beam connecting plate220. The transverse rivet41includes a third transverse rivet231and a fourth transverse rivet232disposed on the transverse beam connecting plate210, and the transverse beam connecting plate210is mounted to the side waist beam201by the third transverse rivet231and the fourth transverse rivet232. The longitudinal beam connecting plate220is connected with the transverse beam connecting plate210. The longitudinal rivet42includes a third longitudinal rivet233and a fourth longitudinal rivet234disposed on the longitudinal beam connecting plate220, and the longitudinal beam connecting plate220is mounted to the side vertical beam102by the third longitudinal rivet233and the fourth longitudinal rivet234. A central axis of the third transverse rivet231and a central axis of the third longitudinal rivet233are located in a first plane, and a central axis of the fourth transverse rivet232and a central axis of the fourth longitudinal rivet234are located in a second plane. The first plane and the second plane are disposed in parallel and perpendicular to the width direction of the body frame10. The third transverse rivet231and the fourth transverse rivet232are disposed on the transverse beam connecting plate210, and the third longitudinal rivet233and the fourth longitudinal rivet234are disposed on the longitudinal beam connecting plate220. Therefore, the transverse beam connecting plate210can be mounted to the side waist beam201by the third transverse rivet231and the fourth transverse rivet232, and the longitudinal beam connecting plate220can be mounted to the side vertical beam102by the third longitudinal rivet233and the fourth longitudinal rivet234. In addition, the first plane where the third transverse rivet231and the third longitudinal rivet233are located and the second plane where the fourth transverse rivet232and the fourth longitudinal rivet234are located are spaced apart from each other and disposed in parallel. Therefore, the semi-cover joint200and a mounting point of the body frame10connect the two planes, so that the stiffness and the strength of the semi-cover joint200are improved, and the torsion resistance is optimized. 
The third transverse rivet231includes a third transverse screw rod282and a third transverse collar283riveted on the third transverse screw rod282, and the fourth transverse rivet232includes a fourth transverse screw rod285and a fourth transverse collar286riveted on the fourth transverse screw rod285. The third longitudinal rivet233includes a third longitudinal screw rod292and a third longitudinal collar293riveted on the third longitudinal screw rod292, and the fourth longitudinal rivet234includes a fourth longitudinal screw rod295and a fourth longitudinal collar296riveted on the fourth longitudinal screw rod295. In some specific examples of the disclosure, as shown inFIG.6andFIG.7, the transverse sliding groove21includes a third transverse sliding groove261and a fourth transverse sliding groove262, and the longitudinal sliding groove31includes a third longitudinal sliding groove263and a fourth longitudinal sliding groove264. The transverse beam connecting plate210is provided with a third transverse screw rod hole281and a fourth transverse screw rod hole284. The third transverse screw rod282passes through the third transverse screw rod hole281and is slidably mated with the third transverse sliding groove261. The fourth transverse screw rod285passes through the fourth transverse screw rod hole284and is slidably mated with the fourth transverse sliding groove262. The longitudinal beam connecting plate220is provided with a third longitudinal screw rod hole291and a fourth longitudinal screw rod hole294. The third longitudinal screw rod292passes through the third longitudinal screw rod hole291and is slidably mated with the third longitudinal sliding groove263. The fourth longitudinal screw rod295passes through the fourth longitudinal screw rod hole294and is slidably mated with the fourth longitudinal sliding groove264. The parts of the third transverse screw rod282, the fourth transverse screw rod285, the third longitudinal screw rod292, and the fourth longitudinal screw rod295respectively exposed from the third transverse collar283, the fourth transverse collar286, the third longitudinal collar293, and the fourth longitudinal collar296are required to be minimized to reduce the operation space and facilitate the designing of interior trim of the vehicle. The third transverse sliding groove261and the fourth transverse sliding groove262extend in the length direction of the body frame10and are spaced apart from each other along the width direction of the body frame10. The third longitudinal sliding groove263and the fourth longitudinal sliding groove264extend in the height direction of the body frame10and are spaced apart from each other along the width direction of the body frame10. In some specific examples of the disclosure, as shown inFIG.5andFIG.6, a third transverse gasket260between the third transverse screw rod282and the transverse beam connecting plate210is sleeved on the third transverse screw rod282. The third transverse gasket260is slidably mated with the third transverse sliding groove261. A fourth transverse gasket270between the fourth transverse screw rod285and the transverse beam connecting plate210is sleeved on the fourth transverse screw rod285. The fourth transverse gasket270is slidably mated with the fourth transverse sliding groove262. A third longitudinal gasket280between the third longitudinal screw rod292and the longitudinal beam connecting plate220is sleeved on the third longitudinal screw rod292. 
The third longitudinal gasket280is slidably mated with the third longitudinal sliding groove263. A fourth longitudinal gasket290between the fourth longitudinal screw rod295and the longitudinal beam connecting plate220is sleeved on the fourth longitudinal screw rod295. The fourth longitudinal gasket290is slidably mated with the fourth longitudinal sliding groove264. For example, the third transverse gasket260, the fourth transverse gasket270, the third longitudinal gasket280, and the fourth longitudinal gasket290each may be a metal sheet, such as an aluminum alloy sheet. The third transverse gasket, the fourth transverse gasket, the third longitudinal gasket, and the fourth longitudinal gasket each may be provided with a through hole on a center line for mounting the third transverse screw rod282, the fourth transverse screw rod285, the third longitudinal screw rod292, and the fourth longitudinal screw rod295. Sizes of the through holes may be increased or decreased according to actual conditions. After the mounting is finished, the third transverse gasket260, the fourth transverse gasket270, the third longitudinal gasket280, and the fourth longitudinal gasket290are respectively attached to bottoms of the third transverse sliding groove261, the fourth transverse sliding groove262, the third longitudinal sliding groove263, and the fourth longitudinal sliding groove264. The third transverse gasket260, the fourth transverse gasket270, the third longitudinal gasket280, and the fourth longitudinal gasket290use the aluminum sheets, so that the weight of the vehicle can be effectively reduced. In addition, when the side waist beam201and the side vertical beam102are stressed, loads are applied to contact surfaces of the third transverse gasket260, the fourth transverse gasket270, the third longitudinal gasket280, and the fourth longitudinal gasket290with the third transverse sliding groove261, the fourth transverse sliding groove262, the third longitudinal sliding groove263, and the fourth longitudinal sliding groove264. Therefore, the stress concentration of the third transverse sliding groove261, the fourth transverse sliding groove262, the third longitudinal sliding groove263, and the fourth longitudinal sliding groove264can be reduced, so that the requirements for the strength and the deformation of the body frame10can be satisfied. Further, the third transverse rivet231is closer to the longitudinal beam connecting plate220than the fourth transverse rivet232, and the fourth longitudinal rivet234is closer to the transverse beam connecting plate210than the third longitudinal rivet233. The third transverse rivet231and the fourth transverse rivet232may be staggered from each other, and the third longitudinal rivet233and the fourth longitudinal rivet234may be staggered from each other. By virtue of different riveting sequences, the semi-cover joint200can be of a smaller size, the designing of interior trim of the vehicle can be more convenient, and the space and the time required for mounting can be reduced. According to the above embodiments of the disclosure, the all-cover joint100and the semi-cover joint200are disposed. Therefore, corresponding adjustments may be performed according to different stress positions and the strength of the riveting structure to achieve the optimal cost performance. In some specific embodiments of the disclosure, as shown inFIG.1, the body frame10further includes a side skin310and an integrated transverse beam320. 
The transverse beam under the skin, a transverse beam for mounting a seat, and a transverse beam of an in-vehicle sealing plate are integrated as a whole by the integrated transverse beam320, and the integrated transverse beam320is connected with a lower edge of the side skin310. That is to say, by increasing a size in a height direction of the side skin310, the transverse beam under the skin, the transverse beam for mounting the seat, and the transverse beam of the in-vehicle sealing plate are integrated as a whole to form the integrated transverse beam320. Therefore, the structure is simplified, the rigidity of the vehicle is improved, and the internal sealing of the vehicle is enhanced. The side skin310not only provides an exterior decoration, but also shares the overall stress of the frame, effectively improving the rigidity of the vehicle. In some specific embodiments of the disclosure, as shown inFIG.8andFIG.9, the body frame10further includes a plurality of top longitudinal beams410and a plurality of support profiles420. The plurality of top longitudinal beams410are disposed along the length direction of the body frame10and spaced apart from each other along the width direction of the body frame10. Each support profile420is disposed on the corresponding top longitudinal beam410. The support profile420is configured with a battery pack sliding groove430. A battery pack mounting member440is slidably mated with the battery pack sliding groove430. A battery pack450is mounted to the support profile420of the top longitudinal beam410by the battery pack mounting member440. Therefore, the support profile420of the top longitudinal beam410is configured with the battery pack sliding groove430according to an expected mounting position of the battery pack450, the mounted structure of the battery pack450is integrated with the top longitudinal beam410, so that the structure and the space of the top longitudinal beam410are properly utilized. Therefore, the battery pack mounting bracket in related arts can be omitted, simplifying the structure, reducing the number of connecting components and the process steps, improving the production efficiency, and reducing the manufacturing costs and the weight of the body frame10. In addition, the battery pack mounting bracket and the top frame in related arts are connected by a bolt, a torque of the bolt is easy to attenuate, and stress is easy to concentrate at a fixing point. The risk is reduced for the body frame10according to this embodiment of the disclosure. Moreover, the battery pack450is mounted by the battery pack sliding groove430, and the battery pack mounting member440is slidable in the battery pack sliding groove430, reducing the operation difficulty, thereby facilitating the control of the assembly accuracy and the mounting point accuracy. As shown inFIG.8andFIG.9, at least one end of the battery pack sliding groove430is open, and the battery pack sliding groove430has a slot431. A width of the slot431is less than a width of the battery pack sliding groove430. Specifically, at least one of two length ends of the battery pack sliding groove430is unclosed, that is to say, one end in a length direction of the battery pack sliding groove430is closed and the other end is open, or both ends in the length direction of the battery pack sliding groove430are open. 
An opening of the slot431is disposed upward, a length of the slot431is the same as the length of the battery pack sliding groove430, and the width of the battery pack sliding groove430is greater than the width of the slot431. The battery pack mounting member440includes a bolt441and a nut (not shown in the figure). A head of the bolt441is mated with the battery pack sliding groove430, a stud of the bolt441extends from the slot431into the mounted structure of the battery pack450, and the nut is screwed to the stud of the bolt441for locking. Specifically, the bolt441may enter the battery pack sliding groove430from the open end of the battery pack sliding groove430. After entering the battery pack sliding groove430, the stud of the bolt441extends upward from the slot431. Since the width of the slot431is less than the width of the battery pack sliding groove430, the head of the bolt441is blocked from passing through the slot431. The bolt441can only be slid along the battery pack sliding groove430. When the bolt441reaches a predetermined position, the nut may be screwed on to fasten the battery pack450. Further, the head of the bolt441is rectangular or parallelogram-shaped. Therefore, after entering the battery pack sliding groove430, the head of the bolt441can only be slid along the length of the battery pack sliding groove430, and cannot be rotated, facilitating screwing or unscrewing of the nut. In some specific embodiments of the disclosure, as shown inFIG.1andFIG.10, the body frame10further includes a plurality of top transverse beams510, a top edge beam520, a plurality of side vertical beams102, and a side top beam101. The top edge beam520extends in the length direction of the body frame10and is connected with the plurality of top transverse beams510. The side top beam101extends in the length direction of the body frame10and is connected with the plurality of side vertical beams102. The top edge beam520and the side top beam101are riveted. Specifically, the top edge beam520is configured with an outer connecting edge521and an inner slot located inside the outer connecting edge521, and the side top beam101is configured with an inner connecting edge523and an outer slot located outside the inner connecting edge523. The outer connecting edge521is inserted into the outer slot, and the inner connecting edge523is inserted into the inner slot. That is to say, the outer connecting edge521is located outside the inner connecting edge523, and the outer connecting edge521and the inner connecting edge523are riveted. The top edge beam520and the top transverse beam510are fixed to ensure the accuracy, and the side top beam101and the side vertical beam102are fixed to ensure the accuracy, and then the top edge beam520and the side top beam101are sandwiched and adjusted to an assembly position by tooling and riveted and fixed by blind studs, so as to achieve the purpose of simple assembly operations and controllable accuracy. In some specific embodiments of the disclosure, as shown inFIG.1andFIG.11, the body frame10further includes a rack610, a corbel mounting plate620, and a side frame630. The corbel mounting plate620is mounted to the rack610and has a mounting surface621. An adjustment gap622exists between the mounting surface621and the rack610. The side frame630is mounted to the mounting surface621of the corbel mounting plate620. The mounting surface621is disposed on the corbel mounting plate620, and the adjustment gap622exists between the mounting surface621and the rack610.
Therefore, a distance between the mounting surface621and the rack610may be adjusted by adjusting a size of the gap622. In this way, it can be ensured that the mounting surface621of the corbel mounting plate620is located on a same plane, thereby eliminating a gap between the side frame630and the corbel mounting plate620, and ensuring the flatness of the assembly. For example, the corbel mounting plate620is mounted to a side surface along a width direction of the rack610, and the mounting surface621and the side surface of the rack610are spaced apart from each other along the width direction of the rack610to form the adjustment gap622. Therefore, the corbel mounting plate620can be adjusted along the width direction of the rack610, the manufacturing error can be removed during mounting of the corbel mounting plate620, and the mounting surface621of the corbel mounting plate620can be ensured to be flat, thereby ensuring the accuracy of the assembly of the side frame630and the rack610. In addition, since the flatness of the assembly of the side frame630and the rack610has been improved, gaskets and bolt connection are no longer required, reducing the assembly time and the material costs, avoiding subsequent torque attenuation, and significantly improving the strength of the integrated structure and the driving safety. Specifically, the rack610has a rack edge beam611, and the side frame630has a side vertical beam102. The corbel mounting plate620includes a corbel connecting plate623and two corbel side plates624. The two corbel side plates624are respectively connected with two opposite sides of the corbel connecting plate623. The two corbel side plates624are disposed in parallel and perpendicular to the corbel connecting plate623, and a cross section of the corbel mounting plate620is configured as a U shape. The two corbel side plates624are mounted to the rack edge beam611, the mounting surface621is formed on a surface of the corbel connecting plate623facing away from the rack edge beam611, the adjustment gap622is formed between the corbel connecting plate623and the rack edge beam611, and the side vertical beam102is mounted to the mounting surface621of the corbel connecting plate623, for example, the side vertical beam102is riveted on the mounting surface621of the corbel connecting plate623. Therefore, by means of the corbel mounting plate620, the body frame10can be adjusted along the width direction. In a low-floor area in the middle of the vehicle, a relatively small corbel mounting plate620is welded to the rack edge beam611, not only ensuring the structural strength, but also ensuring the mounting flatness of the floor surface inside the vehicle. In some specific embodiments of the disclosure, as shown inFIG.1andFIG.12, the body frame10further includes door pillars710, a rack edge beam611, and a rack doorframe beam720. The door pillars710and the rack edge beam611are riveted. The rack doorframe beam720is connected with the rack edge beam611and located between the door pillars710. Specifically, the rack doorframe beam720is first welded and fixed to the rack edge beam611, and then the door pillars710and the rack edge beam611are riveted and fixed by a blind stud. The structure not only reduces the occupied space under a doorframe, but also ensures the connection strength of the doorframe. A vehicle1according to an embodiment of the disclosure is described below. The vehicle1may be a large bus. 
As shown inFIG.13, the vehicle1according to this embodiment of the disclosure includes the body frame10according to the above embodiments of the disclosure. The body frame10may be an aluminum alloy part to achieve a light weight. By means of the body frame10according to the above embodiments of the disclosure, the vehicle1according to this embodiment of the disclosure achieves a stable and reliable structure and high production efficiency. Other configurations and operations of the vehicle1according to the embodiments of the disclosure are known to those of ordinary skill in the art and will not be described in detail herein. In the description of this specification, description with reference to terms such as “a specific embodiment” or “a specific example” means that specific features, structures, materials, or characteristics described in the embodiment or example are included in at least one embodiment or example of the disclosure. In this specification, exemplary descriptions of the aforementioned terms do not necessarily refer to the same embodiment or example. Although the embodiments of the disclosure have been shown and described, a person of ordinary skill in the art should understand that various changes, modifications, replacements and variations may be made to the embodiments without departing from the principles and spirit of the disclosure, and the scope of the disclosure is as defined by the appended claims and their equivalents. | 42,434 |
11858559 | The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way. DETAILED DESCRIPTION The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. With reference toFIGS.1and2, a vehicle10such as a pick-up truck, for example, is illustrated. The vehicle10includes a cab12, a plurality of wheels102,104,106,108, and a cargo bed or component14. The cargo bed14extends from the cab12and includes a plurality of side walls16, a tailgate18, a panel, which in this form is a floor panel or bottom surface20, and a plurality of ridges22. The plurality of side walls16extend from an aft end of the cab12. The tailgate18is coupled to the side walls16and is pivotable about a horizontal axis (not shown) between a closed position and an open position. When the tailgate18is in the closed position, the tailgate18cooperates with the side walls16to define a partially enclosed cargo compartment24. When the tailgate18is in the open position, the side walls16define an opening to the cargo compartment24. Cargo26such as 2×4s, piping, tubing and other materials to be transported from a facility to a jobsite or dwelling, for example, may be stored and transported in the cargo compartment24. In some forms, the vehicle10may be a cargo van (not shown), among other types of vehicles, by way of example. With additional reference toFIGS.3and4a, in the example illustrated, the ridges22extend in a longitudinal direction relative to a length of the floor panel20. In another example, one or more of the ridges22may extend in a lateral direction (not shown) relative to the length of the floor panel20. In yet another example, one or more of the ridges22extend at an oblique angle relative to the length of the floor panel20. The ridges22are configured to provide rigidity to the floor panel20. In one form, the ridges22are stamped into the floor panel20of the cargo bed14. In the example illustrated, one or more of the ridges22extend substantially an entire length of the floor panel20, two or more of the ridges22are spaced apart laterally from each other, and two or more of the ridges22are spaced apart longitudinally from each other. In the example illustrated, the ridges22are integral with and extend above the floor panel20. In another example, the ridges22may be formed as depressions in the floor panel20and therefore extend below the floor panel20rather than extending above the floor panel20as illustrated herein. As shown inFIG.4a, each ridge22includes opposing sides28and an upper profile surface30. In one form, the opposing sides28of one or more of the ridges22are straight, or perpendicular/normal to the floor panel20, and the upper profile surface30is arcuate and convex as shown. In some forms, the upper profile surface30may be flat, concave, or any other suitable shape without departing from the scope of the present disclosure. A plurality of measurement gradations32are formed in and along one or more ridges22of the plurality of ridges22. The plurality of measurement gradations32are configured to facilitate quick and accurate measurement of cargo26located on the floor panel20of the cargo compartment24. 
For example, the measurement gradations32are spaced apart from each other along a respective ridge22in predetermined increments such that when cargo26is positioned along the respective ridge22, the cargo26can be quickly and accurately measured. In the example illustrated, the measurement gradations32of ridge22aare spaced one (1) foot, or twelve (12) inches, apart. In this way, a user can quickly and accurately determine that a plank31(e.g., 2×4) positioned along the ridge22ais approximately five (5) feet long without the need for additional/separate measuring devices. The measurement gradations32also assist the user in quickly being able to cut cargo located on the floor panel20of the cargo bed14to a predetermined dimension without the need for additional measuring devices (e.g., a tape measure). The opposing sides28of the ridge22being straight also facilitates alignment and cutting of the cargo especially when the cargo includes bends (e.g., flexible piping). It should be understood that different ridges22may include measurement gradations32that are spaced apart from each other in different predetermined increments. For example, the measurement gradations32of ridge22aare spaced one (1) foot apart while the measurement gradations32of another ridge22may be spaced one-half a foot (six (6) inches) apart. In this way, accurate and efficient measurement of different sized cargo can be provided for. In one example, when the tailgate18is in the closed position, the tailgate18is used as a starting point for measuring cargo located along the ridge22in the cargo compartment24. That is, in the example illustrated, an inner surface34of the tailgate18is used as the starting point for measuring the plank31located along the ridge22a(i.e., the plank31is five (5) feet as measured from the inner surface of the tailgate18). In this example, the inner surface34of the tailgate18is located one (1) foot apart from a first measurement gradation32aof the plurality of measurement gradations32of the ridge22a. In another example, at least one T-shaped bead38(FIGS.2and4b) located on the floor panel20(i.e., integral with the floor panel20) opposite an end of the tailgate18is used as a starting point for measuring cargo located along the ridge22in the cargo compartment24. The bead38extends upwardly from the floor panel20and is positioned a predetermined distance from a first measurement gradation32bof the plurality of measurement gradations32of a respective ridge22. It should be understood that in some forms the bead38may be L-shaped or any other suitable shape for acting as a starting point for measuring cargo located along a respective ridge22in the cargo compartment24. In other examples, an inner surface of a vertical wall (i.e., vertical wall located at an end of the cargo bed14opposite the tailgate18) of the cargo bed14is used as a starting point for measuring cargo located along the ridge22in the cargo compartment24. In the example illustrated, the plurality of measurement gradations32are longitudinally aligned with each other and are grooves extending in a lateral direction relative to the ridge22. In another example, the measurement gradations32may be apertures extending through and along the ridge22, for example. In another form, the measurement gradations32may be ribs or bumps extending outwardly from the ridge22, for example. 
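For illustration only, and not as part of the patent disclosure, the measurement arithmetic enabled by the gradations32can be expressed as a short sketch. The assumptions follow the example above: a one-foot spacing between gradations and a one-foot offset between the tailgate inner surface34and the first gradation32a; the function and variable names are hypothetical.

```python
# Illustrative sketch only; values follow the example above and names are hypothetical.
GRADATION_SPACING_IN = 12.0   # assumed spacing between measurement gradations 32 (one foot)
TAILGATE_TO_FIRST_IN = 12.0   # assumed offset from tailgate inner surface 34 to first gradation 32a

def cargo_length_inches(last_gradation_reached: int) -> float:
    """Length of cargo butted against the tailgate, measured to the last
    gradation (counted from 1) that its far end reaches."""
    return TAILGATE_TO_FIRST_IN + (last_gradation_reached - 1) * GRADATION_SPACING_IN

# A plank whose far end reaches the fifth gradation measures 60 inches (five feet),
# matching the plank 31 in the example above.
assert cargo_length_inches(5) == 60.0
```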
Indicia50(FIG.2) such as numbering or scales may also be located on the floor panel20of the cargo bed14, for example, to further facilitate a user in measuring cargo along the respective ridge22. Such indicia can be printed, stamped, or molded into the floor panel20. In some forms, the indicia50may be located on the ridges22of the cargo bed14, instead of, or in addition to, the floor panel20. With reference toFIG.5, a modular bed liner114is illustrated. The modular bed liner114may be incorporated into the cargo bed14of the vehicle10above. The modular bed liner114includes a plurality of side walls116(only one shown in the figure), a floor panel or bottom surface120, and a plurality of ridges122. A measurement grid124is formed on or in the floor panel120and is configured to measure an area of cargo located in the bed liner114, for example. For example, the grid124allows the square footage of cargo such as carpet located in the bed liner114to be quickly and accurately measured without the need for additional measuring devices. The grid124includes grid lines128that are spaced at predetermined increments from each other. In the example illustrated, the grid lines128are spaced one (1) foot, or twelve (12) inches, apart from each other. In this way, the grid124may also be used to measure cargo such as a plank (e.g., 2×4), for example, oriented laterally or longitudinally in the bed liner114without the need for additional measuring devices. Indicia such as numbering or scales may also be associated with the grid124, for example, to further facilitate a user in measuring cargo in the bed liner114. With reference toFIGS.6and7, cargo components214a,214bare illustrated. The cargo components214a,214bin this form are floor mats or bedliners. In one example, the cargo components214a,214bare located on a floor panel of a cargo bed. In another example, the cargo components214a,214bare located on the floor panel of an occupant cabin of a vehicle. In the example illustrated inFIG.6, the cargo component214acomprises a decorative structure216including a plurality of triangles that are configured to measure angles of objects located thereon. For example, the decorative structure216measures 30-degree angles, 45-degree angles, 60-degree angles, and/or 90-degree angles. In the example illustrated inFIG.7, the cargo component214bcomprises measurement gradations218formed therein and aligned along a length of the cargo component214b. In other examples, the measurement gradations218are formed in the cargo component214band aligned along a width of the cargo component214b. In the example illustrated, the measurement gradations218are lines formed on the cargo component214b. In other examples, the measurement gradations218may be ridges, grooves, or other markings formed in or on the cargo component214b. Indicia such as numbering or scales, in some forms, are associated with the cargo components214a,214b, for example, to further facilitate a user in measuring cargo located thereon.
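Similarly, for illustration only and not as part of the patent disclosure, the area measurement provided by the grid124described above can be sketched under the assumption of one-foot grid spacing; the names are hypothetical.

```python
# Illustrative sketch only; assumes the one-foot grid spacing described above.
GRID_SPACING_FT = 1.0  # assumed spacing of grid lines 128

def area_square_feet(cells_covered: int) -> float:
    """Approximate area of cargo (e.g., carpet) from the number of whole
    one-foot grid cells it covers on the floor panel 120 of the bed liner 114."""
    return cells_covered * GRID_SPACING_FT * GRID_SPACING_FT

# A carpet covering an 8 x 5 block of grid cells measures roughly 40 square feet.
assert area_square_feet(8 * 5) == 40.0
```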
As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.” The description of the disclosure is merely exemplary in nature and, thus, variations that do not depart from the substance of the disclosure are intended to be within the scope of the disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure. | 10,778 |
11858560 | DETAILED DESCRIPTION FIGS.1and2illustrate an example trailer20located on a drivable surface22, such as a road or a piece of ground, having a railing assembly24A or24B, respectively. The trailer20, such as a flatbed trailer, includes a support surface26for supporting goods thereon. The support surface26is supported by a frame27on an underside of the support surface26and a plurality of wheels28are rotatable attached to the frame27adjacent an aft portion of the support surface26. The trailer20also includes a trailer attachment point30for attaching the trailer20to a vehicle, such as a semi-truck, for transporting the trailer20between multiple locations. When the trailer20is not attached to the vehicle, at least one support31is lowered to engage the drivable surface22to maintain the support surface26in a suitable orientation for walking on, such horizontal or level with respect to the ground. In the illustrated example, an uppermost one of the straps56is between approximately 48-52 inches (1.2-1.3 meters) from the support surface26. The railing assemblies24A and24B include a plurality of posts, such as at least one ratcheting post32, at least one intermediate post33, at least one connecting post34, and/or at least one ratcheting/connecting post44. In the illustrated example, the plurality of posts32,33,34, and44are interconnected by sets of three horizontally extending straps56arranged vertically from each other. However, more or less than three straps56extending between adjacent posts32,33,34, and44could be used. The straps56can be made from a natural or synthetic fibers to allow the straps to be flexible, durable, and easy to store when not in use. The straps56are lightweight and easier to maneuver compared to metal bars. Additionally, the uppermost strap56is arranged to provide support similar to a hand rail. The straps56can also be surrounded by a brace strap63extending vertically and fastened on opposing ends, such as by a hook and loop closure, to prevent the straps56from separating from each other. Each of the plurality of posts32,33,34, and44attach along edges36of the support surface26. An outer rail38is located along opposing longitudinal edges36of the support surface26. Additionally, it is possible that the outer rail38could wrap around a front side and/or a rear side of the support surface26for accepting a railing assembly24C as shown inFIG.9. As shown inFIG.3, a plurality of post receiving pockets40are defined at least partially by the edge36of the support surface26, the outer rail38, and a plurality of connecting members42. A shim41having a varying thickness can be located in the pocket40to reduce clearance with the posts32,33,34, and44. The plurality of connecting members42extend between the edge36and the outer rail38to secure the outer rail38to the edge36of the support surface26. The outer rail38also protects attachment devices, such as straps or chains, that extend around the edge36of the support surface26for securing goods to the support surface26. As shown inFIGS.1-3, there are multiple post receiving pockets40located around the edge36of the support surface26to provide a variety of attachment locations for the ratcheting posts32, the intermediate posts33, the connecting posts34, and/or the ratcheting/connecting posts44. FIGS.4A-4Cschematically illustrate an interior view of the railing assembly24A shown inFIG.1and provide further detail regarding the straps56extending between the posts32,34, and44. 
Additionally,FIG.4Dschematically illustrates a side view of the intermediate post33supporting the straps56. The railing assembly24A includes a pair of ratcheting posts32(FIGS.4A and4C) located at a front and rear of the support surface26and a single connecting post34(FIG.4B) located between the pair of ratcheting posts32. In the illustrated example, the ratcheting post32includes a C-shaped channel52that extends vertically from the support surface26. The C-shaped channel52defines a central body portion50. A plurality of strap channels54extend along one edge of the central body portion50. The strap channels54accept at least one of a strap56and/or a ratchet assembly58for tensioning between the ratcheting post32and the connecting post34. In the illustrated example, the ratchet assembly58is rotary operated by a lever handle. The ratchet assembly58allows the strap56to be tensioned such that the uppermost strap56will not deflect by more than approximately 12 inches (0.3 meters). The strap channels54include a C-shaped channel at least partially defining a central body portion55that extends in a generally perpendicular direction relative to a longitudinal direction of the central body portion50. The central body portion55of the strap channel54is in the same plane as the central body portion50(FIGS.4A and7). The ratcheting posts32shown inFIGS.4A and4Care also mirror images of each other such that they can be located at opposing ends of the support surface26. In another example, the ratcheting posts32could be identical and rotated relative to each other such that one has the C-shaped channel52facing inward and the other has the C-shaped channel facing outward. A pocket projection59also extends from a proximal end of the C-shaped channel52and is in an overlapping relationship with the C-shaped channel52to be received in the pockets40adjacent the support surface26of the trailer20. In the illustrated examples shown inFIGS.4A,4C, and6, the pocket projection59is fixed relative to the C-shaped channel52and includes a square or rectangular cross-section with rounded corners to facilitate insertion into the pockets40. The pocket projection59can also include a retainer opening59A for accepting a retainer57, such as a pin or fastener, when the pocket projection59is placed within one of the pockets40. One feature of the retainer57is to prevent the post32from being removed from the pocket40. As shown inFIG.4B, the connecting post34includes three pairs of grommets66with one grommet66of each pair located on opposite sides of a C-shaped channel64. In the illustrated example, distal ends of the straps56opposite the ratchet assembly58include hooks60that engage a corresponding one of the grommets66. The connecting post34includes a central body portion62between opposing sides of the C-shaped channel64. The grommets66are aligned with a corresponding one of the ratchet assemblies58on one of the ratcheting posts32. The grommets66can be bolted or welded to the C-shaped channel64. A pocket projection70extends from a proximal end of the C-shaped channel64to be received in the pockets40adjacent the support surface26, similar to the pocket projections59described above. In the illustrated example shown inFIG.4B, the pocket projection70is a sleeve fixed relative to the C-shaped channel64and includes a square or rectangular cross-section with rounded corners to facilitate insertion into the pockets40.
The pocket projection70can also include a retainer opening70A for accepting one of the retainers57when the pocket projection70is placed within one of the pockets40. One feature of the retainer57is to prevent the post34from being removed from the pocket40. As shown inFIG.4D, when a distance between the ratcheting posts32and the connecting posts34exceeds a predetermined threshold, such as 20 feet, one of the intermediate posts33may be located between the ratcheting post32and the connecting post34. The intermediate posts33include a plurality of slots35in opposing edges of the C-shaped channel that extend downward at approximately 45 degrees relative to outer edges of the C-shaped channel. The slots35accommodate the straps56spanning a C-shaped channel37(FIG.10B) between opposing edges of the post33. The slots35provide vertical support to the straps56while allowing the straps56to move freely in a lateral or lengthwise direction. One feature of utilizing the ratcheting posts32with the connecting post34is the ability to selectively position the railing assembly24A along the support surface26. For example, the ratcheting posts32could be located at the longitudinal ends of the support surface26or spaced inward from the longitudinal ends of the support surface26depending on the application. Additionally, the straps56between the forward most ratcheting post32and an adjacent connecting post34could be removed to provide access to the support surface26while the remaining straps56are left in place. Furthermore, the connecting post34could be located in any of the pockets40between the ratcheting posts32depending on the application. Moreover, a tie-off lanyard61could be attached to any one of the straps56to be used by a worker on the support surface26. FIGS.5A-5Dillustrate an interior view of the example railing assembly24B shown inFIG.2without illustrating the intermediate posts33described above. The railing assembly24B is similar to the railing assembly24A except where described below or shown in the Figures. Similar numbers will be used between the railing assembly24A and the railing assembly24B to identify the same or similar components. In addition to having the pair of ratcheting posts32, the at least one intermediate post33, and the at least one connecting post34as shown inFIGS.1and4A-4D, the railing assembly24B includes a ratcheting/connecting post44located inward from the pair of ratcheting posts32and adjacent the connecting post34. The ratcheting/connecting post44includes a central body portion46forming a C-shaped channel48with grommets66located adjacent one side similar to the connecting post34and strap channels68located adjacent a second side opposite the first side similar to the ratcheting posts32. The strap channels68are at least partially defined by a central body portion69that extends in a generally perpendicular direction relative to the longitudinal direction of the central body portion46. The ratcheting/connecting posts44provide further flexibility in the number and size of opening formed along the edge36of the support surface26when removing straps56between adjacent posts32,34, or44. Additionally, ratcheting/connecting posts44include a pocket projection72with a retainer opening72A for accepting the retainer57when the pocket projection70is placed within one of the pockets40. One feature of the retainer57is to prevent the post34from being removed from the pocket40. 
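For illustration only, and not as part of the patent disclosure, the spacing rule described above — adding an intermediate post33when the span between a ratcheting post32and a connecting post34exceeds a predetermined threshold such as 20 feet — can be sketched as a simple check; the names and the default value are hypothetical.

```python
# Illustrative sketch only; the 20-foot threshold is the example given above.
def needs_intermediate_post(span_ft: float, threshold_ft: float = 20.0) -> bool:
    """True when the distance between a ratcheting post 32 and a connecting
    post 34 exceeds the predetermined threshold, suggesting that an
    intermediate post 33 be located between them to support the straps 56."""
    return span_ft > threshold_ft

# A 24-foot span calls for an intermediate post; an 18-foot span does not.
assert needs_intermediate_post(24.0)
assert not needs_intermediate_post(18.0)
```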
FIG.9illustrates the railing assembly24C along the rear of the trailer20, but the railing assembly24C could also be located along a front of the trailer20as well. The railing assembly24C is similar to the railing assemblies24A and24B except where described below or shown in the Figures. As shown inFIGS.10A-10C, the railing assembly24C includes the ratcheting post32with straps56that extend to a single sided connecting post34A. The single sided connecting post34A is similar to the connecting post34except that the grommets66are only located along a single side of the post34A. However, the connecting post34could still be used with the railing assembly24C but with only one set of the grommets66engaging the straps56. The intermediate post33is located between the ratcheting post32and the connecting post34A and supports the straps56in the slots35. The intermediate post33also includes a pocket projection80having a retainer opening80A for accepting one of the retainers57as described above with respect to the pocket projection70. Additionally, the railing assembly24C could eliminate the intermediate post33depending on a width of the railing assembly24C. One feature of the above-described railing systems24is the ability to store the railing system24on the vehicle or trailer20without requiring significant storage space or adding significant weight, unlike other prior art systems that can weigh over a thousand pounds, which decreases the load carrying capacity of the trailer20. The railing systems24are also customizable in railing length, as the user can select the pockets40best suited to provide access to the support surface26. The components in the railing system24are also easily interchangeable and replaceable. Although the different non-limiting examples are illustrated as having specific components, the examples of this disclosure are not limited to those particular combinations. It is possible to use some of the components or features from any of the non-limiting examples in combination with features or components from any of the other non-limiting examples. It should be understood that like reference numerals identify corresponding or similar elements throughout the several drawings. It should also be understood that although a particular component arrangement is disclosed and illustrated in these exemplary embodiments, other arrangements could also benefit from the teachings of this disclosure. The foregoing description shall be interpreted as illustrative and not in any limiting sense. A worker of ordinary skill in the art would understand that certain modifications could come within the scope of this disclosure. For these reasons, the following claims should be studied to determine the true scope and content of this disclosure. | 13,127 |
11858561 | DESCRIPTION OF THE PREFERRED EMBODIMENT OF THE INVENTION This description of a preferred embodiment of the invention is intended to be read in connection with the accompanying drawings, which are to be considered part of the entire written description of this invention. The drawing figures are not necessarily to scale, and certain features of the invention may be shown exaggerated in scale or in somewhat schematic form in the interest of clarity and conciseness. As shown inFIG.1, a conventional self-propelled material transfer vehicle11includes a frame12that is supported on the roadway surface by front and rear ground-engaging drive assemblies including right front drive wheel14and right rear drive wheel16. Material transfer vehicle11also includes a left front drive wheel (not shown but substantially similar to right front drive wheel14) and a left rear drive wheel (not shown but substantially similar to right rear drive wheel16). Each of the drive wheels is driven by a hydraulic motor (not shown) that is supplied with fluid under pressure by one or more hydraulic pumps (also not shown). In the alternative, the frame of the vehicle may be supported on the roadway surface by ground-engaging drive assemblies comprising one or more left side track-drive assemblies (not shown), and one or more right side track-drive assemblies (also not shown), as is known to those having ordinary skill in the art to which the invention relates. Vehicle11includes an asphalt paving material receiving device comprising a truck-receiving hopper18. Truck-receiving hopper18is adapted to receive asphalt paving material from a delivery truck (not shown). In the alternative, vehicle11could be equipped with an asphalt paving material receiving device comprising a windrow pick-up head (not shown). An auger (not shown) is mounted in truck-receiving hopper18and is adapted to assist in conveying asphalt paving material from truck-receiving hopper18into loading conveyor20, which in turn conveys the asphalt paving material off of its output end22and into surge bin24. The surge bin includes transverse auger26that is employed to mix the asphalt paving material in the surge bin in order to minimize segregation or separation of the aggregate portion of the asphalt paving material by size. Also located in the surge bin is surge conveyor28, which is adapted to convey asphalt paving material upwardly out of the surge bin so that it may fall through chute30which is located over input end32of discharge conveyor34. Asphalt paving material conveyed out of the surge bin by surge conveyor28falls through chute30and onto input end32of discharge conveyor34. Discharge conveyor34is mounted for vertical pivotal movement about a substantially horizontal pivot axis at its input end that is perpendicular to the page ofFIG.1, as raised and lowered by a linear actuator (not shown). Discharge conveyor34is also adapted for side-to-side movement about a substantially vertical axis by operation of one or more additional actuators (also not shown). Asphalt paving material that falls through chute30onto discharge conveyor34is discharged through chute36at conveyor output end38into an asphalt receiving hopper of an asphalt paving machine (not shown). Hydraulic drive systems including hydraulic pumps and hydraulic motors are provided to drive the various augers and conveyors. 
An engine (not shown) is located within engine compartment40adjacent to operator's station42and provides the motive force for the hydraulic pumps that drive the hydraulic motors for the drive wheels, the augers and the various conveyors and other components of the vehicle. Operator's station42includes an operator's seat44that is mounted on pedestal46so that the operator may turn the seat between a position that allows the operator to face forwardly and a position that allows him to face rearwardly. In many such material transfer vehicles, two operator's stations are provided, one on the left side of the vehicle and another on the right side. This allows an operator to move to the side of the machine that provides the best view, depending on the side of the roadway on which the machine is operating. In some embodiments of this material transfer vehicle, operator's seat44is fixed in place within operator's station42. In other embodiments, operator's seat44may slide transversely by a limited amount; however, any such transverse movement is constrained by rail assembly48so that the operator's seat cannot extend outside the outer periphery of the material transfer vehicle. Consequently, the operator's view from either side of the machine is partially obstructed, regardless of the position of operator's seat44. FIG.2illustrates an alternative conventional material transfer vehicle50which includes a frame that is supported on the roadway surface by front and rear ground-engaging drive assemblies comprising left front drive wheel52and left rear drive wheel54. Material transfer vehicle50also includes right front drive wheel55and a right rear drive wheel (not shown but substantially similar to left rear drive wheel54). Each of the drive wheels is driven by a hydraulic motor (not shown) that is supplied with fluid under pressure by one or more hydraulic pumps (also not shown). In the alternative, the frame of the vehicle may be supported on the roadway surface by ground-engaging drive assemblies comprising one or more left side track-drive assemblies (not shown), and one or more right side track-drive assemblies (also not shown). Vehicle50includes an asphalt paving material receiving device comprising a truck-receiving hopper56. Truck-receiving hopper56is adapted to receive asphalt paving material from a delivery truck (not shown). In the alternative, vehicle50could be equipped with an asphalt paving material receiving device comprising a windrow pick-up head (not shown). Auger58in truck-receiving hopper56is adapted to urge asphalt paving material into loading conveyor60. Loading conveyor60is operatively attached to the truck-receiving hopper and is adapted to convey asphalt paving material from truck-receiving hopper56upwardly to its output end62, from which it will fall through chute64onto the lower input end of a discharge conveyor (not shown, but substantially similar to discharge conveyor34). Material transfer vehicle50also includes operator's station66from which all operating functions of the vehicle may be controlled via control panel68. Operator's station66includes left operator's seat70, which is adapted to rotate on pedestal72, and right operator's seat74, which is adapted to rotate on pedestal76. Control panel68is mounted on pedestal78and is adapted to rotate between a left-facing position and a right-facing position. 
However, since operator's station66is fixed within the outer periphery of the material transfer vehicle, and since the pedestals72and76do not move laterally, the operator's view from either side of the vehicle is nevertheless partially obstructed. Material transfer vehicle50includes various hydraulic pumps and hydraulic motors, which are provided to drive the various augers and conveyors. An engine (not shown, but located in engine compartment80) provides the motive force for the hydraulic pumps that drive the hydraulic motors for the drive wheels, the augers and conveyors and other components of the vehicle. Material transfer vehicle50has a longitudinal centerline “C” and a width “W” of operator's station66that is measured transverse to the longitudinal centerline “C”. Thus, width “W” defines the extent of the “outer periphery” of the vehicle. In other words, the outer periphery on one side of the vehicle is measured to be (0.5)W from the centerline. FIGS.3-10illustrate a preferred embodiment of the invention. As shown therein, material transfer vehicle100includes a frame102that is supported on the roadway surface by front and rear ground-engaging drive assemblies comprising right front drive wheel104and right rear drive wheel106. Material transfer vehicle100also includes a left front drive wheel (not shown, but substantially similar to right front drive wheel104) and a left rear drive wheel (not shown, but substantially similar to right rear drive wheel106). Each of the drive wheels is driven by one or more conventional hydraulic motors (not shown) that are supplied with fluid under pressure by one or more conventional hydraulic pumps (also not shown). In the alternative, the frame of the vehicle may be supported on the roadway surface by ground-engaging drive assemblies comprising left and right side track-drive assemblies (not shown). Material transfer vehicle100includes an asphalt paving material receiving device comprising a truck-receiving hopper108. Truck-receiving hopper108is adapted to receive asphalt paving material from a delivery truck (not shown). In the alternative, vehicle100could be equipped with an asphalt paving material receiving device comprising a windrow pick-up head (not shown). An auger (not shown) is mounted in the truck-receiving hopper and is adapted to assist in conveying asphalt paving material from the truck-receiving hopper into loading conveyor110, which in turn conveys the asphalt paving material off its output end112and into surge bin114. The surge bin includes transverse auger116that is employed to mix the asphalt paving material in the surge bin in order to minimize segregation or separation of the aggregate portion of the asphalt paving material by size. Also located in the surge bin is surge conveyor118, which is adapted to convey asphalt paving material upwardly out of the surge bin so that it may fall through chute120which is located over the input end of discharge conveyor122. Discharge conveyor122is mounted for vertical pivotal movement about a substantially horizontal pivot axis at its input end that is perpendicular to the plane of the page ofFIG.3, as raised and lowered by linear actuator124. Discharge conveyor122is also adapted for side-to-side movement about a substantially vertical axis by operation of one or more additional actuators (also not shown). 
Asphalt paving material that falls through chute120onto discharge conveyor122is discharged through chute126at conveyor output end128into an asphalt receiving hopper of an asphalt paving machine (not shown). Hydraulic drive systems including hydraulic pumps and hydraulic motors are provided to drive the various augers and conveyors. An engine (not shown but contained within engine compartment130) provides the motive force for the hydraulic pumps that drive the hydraulic motors for the drive wheels, the augers and the various conveyors and other components of the vehicle. Operator's station132is accessible by means of ladder134, and includes right operator's platform136and a left operator's platform that is a mirror image of right operator's platform136. Right operator's platform136includes right operator's seat138, which is mounted for rotational movement about right rotational axis A138on right seat base140(best shown inFIGS.5and6), first control panel142and second control panel144. Left operator's platform also contains an operator's seat (not shown, but substantially identical to right operator's seat138) that is mounted for rotational movement about a left rotational axis, a first control panel (not shown, but substantially identical to first control panel142) and a second control panel (not shown, but substantially identical to second control panel144). Both right operator's platform136and the left operator's platform are moveable between a travel position that is entirely within the outer periphery of material transfer vehicle100and an operating position that locates the operator's seat at least partially (and preferably substantially) outside the outer periphery of the vehicle. With reference toFIG.2, it can be appreciated that material transfer vehicle100has a longitudinal centerline (not shown, but substantially the same as longitudinal centerline “C” of material transfer vehicle50) and a width (also not shown, but substantially similar to width “W” of material transfer vehicle50), which width is measured transverse to the longitudinal centerline. Thus, the width of material transfer vehicle100defines the extent of the “outer periphery”. Furthermore, the outer periphery on one side of the vehicle is measured to be one-half of the width from the centerline. Consequently, movement of right operator's platform136to the operating position according to the invention locates right operator's seat138a distance that is greater than one-half of the width of the vehicle to the right from the centerline. More particularly, movement of right operator's platform136to the operating position locates right rotational axis A138a distance from the centerline that is greater than one-half of the width. Similarly, movement of the left operator's platform to the operating position according to the invention locates the left operator's seat a distance that is greater than one-half of the width of the vehicle to the left from the centerline, and more particularly, it locates the left rotational axis a distance from the centerline that is greater than one-half of the width. FIGS.4-10provide a detailed view of right operator's platform136. Right operator's platform136is mounted in operator's station132by means of a right platform moving assembly including slewing bearing146. As shown inFIGS.7and10, linear actuator148is attached between bracket150on the bottom of the slewing bearing and fixed point152on the bottom of operator's station132. 
In the embodiment of the invention shown in the drawings, linear actuator148comprises a hydraulic or pneumatic cylinder having a rod154that may be moved between the extended position shown inFIG.7, which locates right operator's platform136in the travel position that is entirely within the outer periphery of the material transfer vehicle100, and the retracted position shown inFIG.10, which locates right operator's platform136in the operating position that places right operator's seat138substantially outside the outer periphery of the material transfer vehicle. Extension and retraction of rod154moves bracket150to rotate slewing bearing146about substantially vertical axis156(that is perpendicular to the plane of the pages on whichFIGS.7and10are shown). The invention thus provides a material transfer vehicle, such as vehicle100, which is provided with an operator's station comprising a right operator's platform136which is moveable between a travel position (shown inFIGS.4and7) that is entirely within the outer periphery of the material transfer vehicle and an operating position (shown inFIGS.8-10) that locates right operator's seat138substantially outside the periphery of the material transfer vehicle. In this embodiment of the invention, right operator's station platform136is mounted on a platform moving assembly comprising slewing bearing146, which is moveable by the action of linear actuator148. A similar arrangement is provided to move the left operator's platform between a travel position that is entirely within the outer periphery of the material transfer vehicle and an operating position that locates the left operator's seat substantially outside the periphery of the material transfer vehicle. Although this description contains many specifics, these should not be construed as limiting the scope of the invention but as merely providing illustrations of the presently preferred embodiment thereof, as well as the best mode contemplated by the inventor of carrying out the invention. The invention, as described herein, is susceptible to various modifications and adaptations, as would be understood by those having ordinary skill in the art to which the invention relates. | 15,818 |
11858562 | DETAILED DESCRIPTION Examples of the subject disclosure relate to devices and systems configured to improve vehicle aerodynamics by reducing drag, and thus improving fuel efficiency. Reference will now be made in detail to examples of the present disclosure described above and illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Referring toFIG.1, a transport vehicle, sometimes referred to as a land vehicle20, and in particular here shown as a semi-tractor-trailer truck (i.e., a semi or semi-tractor), may include a cab unit22and a trailer unit24. The cab unit22(sometimes alternatively referred to as a tractor unit22) may be a tractor-trailer type cab unit, which may be powered by a diesel engine, electric engine, hybrid engine, or any other power source. The cab unit22typically includes a passenger compartment26positioned atop a cab frame28that includes a plurality of wheels30rotatably coupled to the cab frame28and positioned along the outer periphery of the cab frame28. The trailer unit24(sometimes alternatively referred to as a semi-trailer unit24) may be any appropriate trailer known in the trucking industry and may be integral with the cab unit22or separately coupled to the cab unit22(such as shown inFIG.1). The cab unit22may also include any appropriate coupling to and electrical connection with the trailer unit24such that electrical signals or other types of signals may be transmitted between the cab unit22and the trailer unit24. The trailer unit24includes a container, shown in the Figures as a generally box-shaped container40that is positioned on a trailer frame42that includes a plurality of wheels44rotatably coupled to the trailer frame42and positioned along the outer periphery of the trailer frame42. In the embodiments shown, the box-shaped container40includes a pair of side walls50, a top wall52, a front wall54, a rear wall56, and a bottom wall58that collectively define an interior storage compartment60that is used to store items for transport. As illustrated, the front wall54is positioned adjacent to the cab unit22when the trailer unit24is coupled to the cab unit22, and in the coupled position is positioned between the passenger compartment26and the rear wall56. At least a portion of one of the walls50,52,54,56,58includes at least one door or access feature that allows for access to the interior storage compartment60. InFIG.1, a roll-up door62is provided within a portion of the rear wall56to allow access to the interior storage department60, although in alternative embodiments different types of doors could be provided within a portion of the rear wall, such as side-by-side doors. Still further, additional doors or access features could be provided in one of the side walls50, or in the top wall52, or front wall54, or bottom wall58(in addition to or replacing the roll-up door62) to provide access to the interior storage compartment. As also shown inFIG.1, the land vehicle20includes one or more drag reduction devices100that are positioned along the trailer unit24, and in particular in the exemplary embodiments provided herein are positioned near the edge of the trailer unit24at the intersection of the top wall52and rear wall56. In alternative embodiments (not shown), such drag reduction devices100may be provided near the edge of the trailer unit at the intersection of one of the side walls50and the top wall52, or the front wall54, or the bottom wall58. 
The drag reduction devices100, in accordance with each of the exemplary embodiments provided herein, include one or more fan assemblies110each contained within a respective housing102. The fan assemblies110provided herein all include, in general, cross-flow fans (i.e., tangential fans) and air foils that are configured to adjust the movement of air over and around the trailer unit24as the land vehicle20is being driven along a surface during normal use. Accordingly, the one or more drag reduction devices100described in the representative embodiments herein provide reduced resistance from air friction and pressure friction, and thereby contribute significantly to reduced fuel or other energy consumption as the land vehicle20is being driven along a surface during normal use. The present disclosure provides one or more drag reduction devices100, in certain embodiments such as provided inFIGS.2-16, that are coupled to, and extend outwardly from, the exterior of the trailer unit24at a desired location. In other exemplary embodiments, as illustrated inFIGS.19and20, the one or more drag reduction devices100are located internally to a portion of the trailer unit24, as will be described in further detail below. The representative embodiments of the drag reduction devices100as illustrated are conceptual in nature and are not intended to be limited to the embodiments as illustrated. Referring now toFIGS.2and3, one exemplary embodiment of a pair of drag reduction devices100a,100bcoupled externally to the trailer unit24of the land vehicle20is provided. In particular, each one of the pair of drag reduction devices100a,100bincludes a fan assembly110contained within a respective housing102. Each housing102has an inner housing portion103which is respectively coupled to, and extends away from, an exterior surface66of a top portion68of the rear wall56adjacent to an edge70defining the intersection between the rear wall56and the top wall52. Each housing102also includes a pair of opposing side housing portions104a,104bcoupled to, and extending transverse from, the inner housing portion103and from a lower housing portion126, with the lower housing portion126also coupled to, and extending transverse from, the inner housing portion103. Still further, each housing102also includes an outer housing portion105coupled to, and extending transverse from, each of the side housing portions104a,104band lower housing portion126and spaced from the inner housing portion103. Further, the housing102also defines a first opening111, or air intake opening111, contained within a top edge103aof the inner housing portion103, a top edge105bof the outer housing portion105, and an opposing top portion of the pair of opposing side housing portions104a,104b. The housing102also defines a second opening115, or air outlet opening115, contained between the lower edge105aof the outer housing portion105and the lower housing portion126(sometimes referred to hereinafter as lower surface126) and between the pair of opposing side housing portions104a,104b. A mesh screen130may optionally be seated onto the surface of the top edge103aof the inner housing portion103, the top edge105bof the outer housing portion105, and the opposing top portion of the pair of opposing side housing portions104a,104bcorresponding to the air intake opening111to partially cover the air intake opening111and protect the fan assembly100from debris entering as air is drawn into the air intake opening111when the land vehicle20is being driven.
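The drag-reduction rationale above follows from the standard aerodynamic drag relation, in which drag force scales as F_d = 0.5*rho*C_d*A*v^2 and the power consumed against drag scales with the cube of speed. The short sketch below is purely illustrative and is not part of the disclosure; the drag coefficients, frontal area, and speed are assumed placeholder values chosen only to show how even a modest reduction in C_d translates into a measurable reduction in power demand at highway speed.

    # Illustrative only: estimates the aerodynamic power saved by a small
    # reduction in drag coefficient. All numbers are assumptions, not values
    # taken from the disclosure.
    RHO_AIR = 1.225          # air density at sea level, kg/m^3
    FRONTAL_AREA = 10.0      # assumed tractor-trailer frontal area, m^2
    SPEED = 29.1             # assumed highway speed, m/s (about 65 mph)

    def drag_power(c_d: float, area: float = FRONTAL_AREA,
                   speed: float = SPEED, rho: float = RHO_AIR) -> float:
        """Power (W) consumed against aerodynamic drag: P = 0.5*rho*Cd*A*v^3."""
        return 0.5 * rho * c_d * area * speed ** 3

    baseline_cd = 0.62       # assumed drag coefficient without the devices
    reduced_cd = 0.59        # assumed drag coefficient with the devices fitted

    saved_kw = (drag_power(baseline_cd) - drag_power(reduced_cd)) / 1000.0
    print(f"Estimated aerodynamic power saved: {saved_kw:.1f} kW")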
Each of the fan assemblies100a,100bincludes a plurality of fan blades122extending radially outwardly from a central rotatable shaft120defining an axis of rotation AR. A pair of opposing end cover members116a,116bmay be coupled to the rotatable shaft120that are positioned respectively between and spaced from one of the pair of opposing side housing portions104a,104b. The pair of end cover members116a,116bprovide the coupling points at either end for each of the fan blades122. As illustrated inFIG.3, a first end cover member116ais positioned between a first side housing portion104aand the fan blades122, while a second end cover member116bis positioned between and spaced from a second side housing portion104band the fan blades122, in each of the drag reduction devices100a,100b. The shaft120is rotatably supported at either end by the opposing side housing portions104a,104b. In general, the fan blades122are provided with a desired shape, extension length and pitch angle PA that are collectively configured to maximize the relative amount of air (shown by arrow A1in one exemplary embodiment inFIG.5) being drawn into the air input opening111during operation of the land vehicle20at the particular vehicle speed, with the air causing the fan blades122, shaft120and end cover members116a,116bto rotate about the axis of rotation AR in a rotational direction (clockwise or counterclockwise about the axis of rotation AR) as the air (shown by arrow A2in the one exemplary embodiment inFIG.5) moves around and in the fan assembly100within the housing102. In the embodiments shown below, the fan blades122are curved in shape, as will be described in detail below with respect toFIGS.4-7. However, in other embodiments, the fan blades122may be flat (i.e., not curved). As noted above, each of the fan blades122each have a known extension length, measured from an inner radial end122cto an outer radial end122d(seeFIG.6), and the same pitch angle PA. The pitch angle PA refers to the angular measurement between a normal line NL (i.e., a line extending normal to the outer surface of the shaft120) and a tangent line TL (i.e., a line drawn from the inner radial end122cand the outer radial end122dof one fan blade122). The pitch angle PA can vary anywhere between and including 0 and 90 degrees, more preferably between 30 and 60 degrees. In alternative embodiments, as opposed to being the same pitch angle, the pitch angle PA of the fan blades122may be variable. Each one of the land reduction devices100a,100balso includes one or more air foils112that are coupled to, and extend between, each of the pair of opposing side housing portions104a,104b. The one or more air foils112are spaced from the respective fan assembly100a,100bwithin the respective housing102a,102band also extend at least partially within the second opening115. The one or more air foils112function to redirect the flow of air exiting through the second opening115at a controlled outflow angle. The air foils112can be thin flat plates or can have a predefined outer profile, such as the curved outer profile shown below in the exemplary embodiments ofFIGS.3-13and16-17, as described below. 
The number, relative positioning, and shape of the one or more air foils112contained within the housing102, working in conjunction with the shape and size of the housing102, are collectively configured to redirect the air flow exiting out the air outlet opening115(shown by arrow A3in the one exemplary embodiment inFIG.3) at a controlled outflow angle during operation of the land vehicle20at the particular vehicle speed. In this regard, the number, relative positioning, and shape of the one or more air foils112works in conjunction with the fan blades122having the desired shape, extension length and pitch angle PA as described above to maximize air flow through the fan assembly100as the land vehicle20is driven at the particular vehicle speed (particularly a particular vehicle speed in a forward direction) to provide a controlled outflow angle that maximizes the drag reduction of the land vehicle20at that particular vehicle speed. Accordingly, as air flows into the air intake opening111and through the fan assembly100aor100bto the air outlet opening115(such as when the transport vehicle20is being driven), the fan blades122, end cover members116a,116b, and shaft120rotate in coordination about the axis of rotation AR in response and relative to the stationary housing102a,102b. Further, the air exiting the housing102a,102bis redirected within the air outlet opening115to the controlled outflow angle upon exiting by the one or more air foils112partially contained within the air outlet opening115. As best illustrated inFIGS.2and3, each of the drag reduction devices100a,100boptionally includes a motor housing114housing a motor (shown in phantom as125inFIG.2) that is respectively coupled to a corresponding one of the rotatable shafts120. The motor125is preferably an electric motor that is electrically coupled to a controller119and battery121via a connecting wire117. The battery121is preferably solar charged and is a standalone battery utilized exclusively for the drag reduction device or devices100,100a,100b, although in alternative embodiments the same battery could be utilized to power the components of the cab unit22or trailer unit24of the transport vehicle20or could otherwise be electrically connected with a charger/alternator contained on the land vehicle20. A bearing housing133houses a bearing135which rotatably supports the shaft120extending from the motor housing114to the stationary housing102,102a. Accordingly, when actuated by the controller119, the motor125can rotate the shaft120about the axis of rotation AR relative to the stationary housing102a,102b, which in turn also rotates the fan blades122and end cover members116a,116bin conjunction therewith. This motor125rotation can be utilized to adjust the rotational speed of the shaft120that naturally occurs due to air flowing through the fan assembly100,100a,100bas the land vehicle is being driven at a particular speed so as to maintain the airflow flowing through the fan assembly100,100a,100bin a manner that minimizes the amount of drag on the land vehicle20(i.e., maximizes drag reduction). In this regard, the controller119may be coupled to one or more sensors (not shown) in the land vehicle20, such as a speedometer, a temperature gauge, or one or more wind measurement gauges located on the cab unit22or trailer unit24,
that measure a particular vehicle parameter (such as vehicle speed, wind shear, etc.), with the controller119including a processor (not shown) having an algorithm that determines the optimal rotational speed and rotational direction of the shaft120when the land vehicle is being driven at a particular speed and has particular measured vehicle parameters, and directs the motor125to adjust the rotational speed and rotational direction in response to maximize air flow through the fan assembly100,100a,100band maximize drag reduction. Referring next toFIGS.4-14, multiple exemplary embodiments are illustrated in which the design of one or more of the fan assembly110; housing102a,102b; and/or the air foil112is varied on the exterior mounted drag reduction devices100,100a,100b. Where appropriate, similar or corresponding portions or components of the drag reduction devices100in each of the exemplary embodiments ofFIGS.4-14, that have similar functions or purpose to corresponding portions or components of the drag reduction devices100a,100bofFIGS.1and2, have been identified with like reference numerals (i.e., the fan blades are identified by reference numeral122in each of the embodiments), even where such portions or components have a slightly different shape, for ease of description. Referring first toFIGS.4-7, one exemplary embodiment of a portion of one of the pair of drag reduction devices100that could be utilized in the embodiment ofFIGS.1and2is provided. In the embodiment ofFIGS.4-7, the plurality of fan blades122are curved in shape, and thus include a convex first surface122aand a concave second surface122b. In these embodiments, the convex first surface122ais configured to receive air (shown by arrow A1) being drawn into the air input opening111during operation, with the air flowing through the fan blades122, as the fan blades122, shaft120and end cover members116a,116brotate about the axis of rotation AR in a first rotational direction R1(shown as clockwise inFIG.4) as the air (shown by arrow A2) moves around and in the fan assembly100within the housing102. The air then exits through the three air foils112a,112b,112cand out the air outlet opening115(shown by arrow A3). The curvature of the convex first surface122a, and the corresponding opposing curvature of the concave second surface122b, is designed in a manner that provides a maximum airflow (i.e., increases the draw of air A1being drawn into the air input opening111and correspondingly increases the exit of air A3out the air outlet opening115at a predetermined land vehicle20speed). As also illustrated inFIGS.4-7, the drag reduction device100includes three air foils112a,112b,112chaving a similar outer profile and coupled in a stacked arrangement. As shown inFIGS.5-7, the outer profile of each of the air foils112a,112b,112cincludes a curved inner surface212that serves to smoothly deflect the air flow A2exiting from the fan blades122. The curved inner surface212transitions into an upper surface213and lower surface215that are angled towards one another and collectively terminate into an outer termination edge214. The upper surface213and lower surface215between an adjacent pair of the air foils112a,112b,112care separated by a gap g1that is predefined (i.e., there is a predefined distance corresponding to the gap g1between the respective air foils112aand112b, and a predefined distance corresponding to the gap g1between the respective air foils112band112c).
The stacked arrangement, as illustrated inFIGS.4-7, refers to an arrangement wherein the curved inner surface212of each of the air foils112a,112b,112c(as well as each outer termination edge214) is equally spaced from the inner housing portion103(and the air foils are thus vertically stacked relative to one another as shown inFIGS.3-6). In alternative embodiments (not shown), the spacing may be unequal. In the embodiment ofFIGS.4-7, each of the air foils112a,112b,112cis pivotally connected to the pair of opposing side housing portions104a,104babout pivoting points AR2. Further, while not shown, each of the air foils112a,112b,112cmay also be connected to the controller119, which controls the movement of the air foils112a,112b,112cbetween a non-clocked position (FIG.6) and a clocked position (FIG.7), and any point in between. Depending upon the combination of vehicle parameters (such as temperature, wind shear, etc.) sensed by the sensors of the land vehicle20at a determined speed and sent to the controller119, the controller119can determine an optimum position of the air foils112a,112b,112cto provide the least drag, and pivot the air foils to any position between and including the non-clocked position (FIG.6) and the clocked position (FIG.7) to provide the least drag on the land vehicle20at the particular vehicle speed. Referring now toFIG.8, another alternative embodiment of the drag reduction devices100that could be utilized in the embodiment ofFIGS.1and2is provided. In this embodiment, each of the air foils112a,112b,112cis provided in the same stacked arrangement and with the same pivotal coupling as the embodiment ofFIGS.3-6, but wherein each of the air foils112a,112b,112cis spaced further from the inner housing portion103by an additional distance Z1. By virtue of this increased distance Z1, the air flow A3extending through the gap g1in the non-clocked position and exiting through the air outlet115is slightly different than the air flow A3in the embodiment ofFIGS.4-7in the non-clocked position (as shown inFIG.6). Accordingly, the associated drag of the land vehicle20in the embodiment ofFIG.8at a given vehicle speed, under the same vehicle parameters as sensed by the sensors and sent to the controller119, is slightly different than the embodiment ofFIGS.4-7, which may be desirable depending upon other parameters for the land vehicle20on which it is used. Referring now toFIG.9, another alternative embodiment of the drag reduction devices100that could be utilized in the embodiment ofFIGS.1-3is provided. In this embodiment, the curvature of the fan blades222is the opposite of the curvature of the fan blades122inFIGS.3-6. In particular, as shown inFIG.9, the plurality of fan blades222also includes a convex first surface222aand a concave second surface222b. However, inFIG.9, the concave second surface222bis configured to receive air (shown by arrow A11) being drawn into the air input opening111during operation, with the air flowing through the fan blades222as the fan blades222, shaft120and end cover members116a,116brotate about the axis of rotation AR in a first rotational direction R1(shown as clockwise inFIG.8) as the air (shown by arrow A12) moves around and in the fan assembly100within the housing102. The air then exits through the three air foils112a,112b,112cand out the air outlet opening115(shown by arrow A13).
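Because the disclosure describes the controller119functionally rather than algorithmically, the following is only a minimal sketch of how a controller of the kind described above might map sensed vehicle parameters to a commanded shaft speed and an air foil position between the non-clocked and clocked limits. The calibration table, the linear interpolation, and all names and numeric values are assumptions made for illustration; they are not the algorithm of the patent.

    # Minimal sketch of a drag-minimizing controller of the general kind
    # described above. The mapping is a placeholder (interpolation against an
    # assumed calibration table), not the algorithm of the disclosure.
    from dataclasses import dataclass

    @dataclass
    class VehicleState:
        speed_mps: float      # from the speedometer
        wind_mps: float       # from a wind measurement gauge
        air_temp_c: float     # carried for completeness; could correct air density

    # Assumed calibration: vehicle speed (m/s) -> (shaft rpm, foil angle deg),
    # where 0 deg is the non-clocked position and 30 deg is fully clocked.
    CALIBRATION = [(0.0, (0.0, 0.0)), (15.0, (600.0, 10.0)), (30.0, (1400.0, 30.0))]

    def interpolate(speed: float) -> tuple[float, float]:
        """Linearly interpolate shaft rpm and foil angle from the table."""
        pts = CALIBRATION
        if speed <= pts[0][0]:
            return pts[0][1]
        for (s0, (r0, a0)), (s1, (r1, a1)) in zip(pts, pts[1:]):
            if speed <= s1:
                t = (speed - s0) / (s1 - s0)
                return r0 + t * (r1 - r0), a0 + t * (a1 - a0)
        return pts[-1][1]

    def control_step(state: VehicleState) -> tuple[float, float]:
        """Return (target shaft rpm, target air foil angle) for one control cycle."""
        # Effective airspeed combines vehicle speed and measured headwind.
        effective_speed = max(0.0, state.speed_mps + state.wind_mps)
        return interpolate(effective_speed)

    print(control_step(VehicleState(speed_mps=25.0, wind_mps=2.0, air_temp_c=15.0)))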
In the embodiment shown inFIG.9, and similar to the embodiments ofFIGS.3-6, each of the air foils112a,112b,112cis provided in the same stacked arrangement and with the same pivotal coupling and with the same spacing as the embodiment ofFIGS.4-7. In the embodiment illustrated, the air foils112a,112b,112care provided in the clocked position, similar to the embodiment ofFIG.7above, but are also moveable to the non-clocked position similar toFIG.6. Accordingly, the drag at a given vehicle speed and set of vehicle parameters may thus differ as compared to the land vehicle20including the fan assembly100as inFIGS.4-7, by virtue of the altered fan blade configuration. Of course, in other alternative embodiments, the air foils112a,112b,112ccould also be spaced in a manner similar to that inFIG.8above. Still other alternative embodiments of the drag reduction device100in accordance with the subject disclosure are provided inFIGS.10and11, in which the number of air foils112is different than the three air foils112a,112b,112cas provided inFIGS.4-7, but wherein the design of the fan assembly100is otherwise the same. InFIG.10, a single air foil112ais included, whereas inFIG.11two air foils112a,112bare included. As also illustrated inFIG.10, the positioning of the single air foil112ais shown in a position not centered relative to the air outlet opening115between the lower surface126and the lower edge105aof the outer housing portion105, but in other embodiments may be centered relative to the air outlet opening115between the lower surface126and the lower edge105aof the outer housing portion105. As also illustrated inFIG.11, the positioning of the pair of air foils112a,112bis shown in a position centered relative to the air outlet opening115between the lower surface126and the lower edge105aof the outer housing portion105. Referring next toFIGS.12and13, in yet another alternative embodiment of the drag reduction device100, the one or more air foils (shown inFIGS.12and13as one air foil) is a plasma-controlled air foil312. The plasma-controlled air foil312generates high voltage pulses along the outer surface of the respective air foil312that can generate plasma fields in proximity to the air foil312to alter the flow of air passing in close proximity thereto, which can further enhance the ability of the air foil312to control the outflow angle AR3(seeFIG.13) that maximizes the drag reduction of the land vehicle20. The plasma-controlled air foil312includes a base substrate material314that corresponds to the material utilized to form the air foils112ofFIGS.4-11. The air foil312includes an embedded electrode316and a surface electrode320coupled to an AC voltage source322(or DC voltage source), which is electrically coupled to the controller119. A dielectric material318is disposed between the embedded electrode316and surface electrode320. In operation, as the fan blades122are rotating at the desired speed corresponding to the land vehicle speed and other vehicle operating parameters, the controller119can direct the AC voltage source322(or DC voltage source) to generate a high voltage pulse through each of the embedded electrode316and the surface electrode320on the surface of the air foil312, and a plasma field is generated in proximity to the surface of the air foil312.
The plasma field acts on the air flowing in proximity to the air foil312to generate an induced air flow (InAF; seeFIG.13), which can further enhance the ability of the air foil312to control the outflow angle AR3(seeFIG.13) that maximizes the drag reduction of the land vehicle20. In yet another alternative embodiment of the drag reduction device100, as illustrated inFIG.14, the mesh screen130is a plasma actuated mesh screen130athat provides a positive charge for the air flow A1that is entering through the air inlet opening111while still assisting in the prevention of FOD (“Foreign Object Damage”) ejection during operation of the land vehicle20. To provide the plasma actuation, the mesh screen130is electrically connected to a positive electrode via a wire (not shown) and may also be connected to the controller119to generate a plasma field in proximity to the mesh screen130. Still further, the surfaces450of the housing102that define the air outlet opening115, and/or the lower surface126of the housing102, optionally including portions of the one or more air foils112, would be negatively charged in a manner similar to the charge created on the air foil312inFIG.13(i.e., wherein an embedded electrode and a surface electrode similar to the embedded electrode316and the surface electrode320ofFIG.13are coupled to an AC voltage source (or DC voltage source) which is electrically coupled to the controller119) and thus generate an induced air flow InAF within the air flow A3exiting through the air outlet opening115. In alternative embodiments (not shown), these surfaces could be positively charged. In the design ofFIG.14, the design of the housing102, fan blades122and/or the air foils112and motor125may be as described in any one of the embodiments ofFIGS.4-13. In yet another alternative embodiment of the drag reduction device100, as illustrated inFIG.15, the fan blades122are plasma actuated fan blades122P that generate plasma fields in proximity to the fan blades122and thus provide a positive charge for the air flow A1that is entering through the air inlet opening111and to the air flow A2that is progressing around the shaft120. Non-limiting examples of providing the positive charge include a slip ring (not shown) coupled to the fan blades122or wherein the bearing coupled to the shaft120is provided with the positive charge. Still further, the surfaces450of the housing102that define the air outlet opening115, and/or the lower surface126of the housing102, optionally including portions of the one or more air foils112, would be negatively charged in a manner similar to the charge created on the air foil312inFIG.13and as described above with respect toFIG.14, and thus generate an induced air flow InAF within the air flow A3exiting through the air outlet opening115. In alternative embodiments (not shown), these surfaces could be positively charged.
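The plasma-actuated variants ofFIGS.12-15are likewise described at the level of electrodes and fields rather than control logic. The fragment below sketches one plausible way a controller could gate the high-voltage pulses on measured fan speed, as suggested by the statement that actuation occurs while the fan blades are rotating at the desired speed; the speed tolerance, the interface names, and the stub voltage source are hypothetical and are not taken from the disclosure.

    # Hypothetical sketch: enable the plasma actuator only when the measured fan
    # speed is close to the target for the current vehicle speed. The interface
    # (start_pulses/stop_pulses) and the tolerance are illustrative assumptions.
    def plasma_enable(measured_rpm: float, target_rpm: float,
                      tolerance_rpm: float = 50.0) -> bool:
        """Return True when the fan is close enough to its target speed."""
        return abs(measured_rpm - target_rpm) <= tolerance_rpm

    def actuate(voltage_source, measured_rpm: float, target_rpm: float) -> None:
        # voltage_source is assumed to expose start_pulses() / stop_pulses().
        if plasma_enable(measured_rpm, target_rpm):
            voltage_source.start_pulses()   # high-voltage pulses generate the plasma field
        else:
            voltage_source.stop_pulses()

    class StubVoltageSource:
        def start_pulses(self) -> None:
            print("plasma actuation ON")

        def stop_pulses(self) -> None:
            print("plasma actuation OFF")

    actuate(StubVoltageSource(), measured_rpm=1180.0, target_rpm=1200.0)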
In any of the embodiments described above inFIGS.1-16, the inclusion of the one or more drag reduction devices100,100a,100bcoupled to the exterior of the trailer unit24of the land vehicle20(represented generically byFIG.18) provided reduced drag as compared with a land vehicle20including the same cab unit22and trailer unit24but without the drag reduction devices at the same vehicle speed and operating conditions. As shown inFIG.17, air flowing over the top of the trailer unit24(represented by arrow AFTU) simply continues to flow beyond the end of the trailer unit24(represented by arrow AFTU′) with a portion of the air flowing in circular swirls (represented by arrows CAF). The presence of the circular swirls CAF adds drag to the land vehicle20, resulting in lower fuel economy, increased battery usage or energy usage, and higher emissions associated with increased fuel usage. However, when the one or more drag reduction devices100,100a,100bare included, the air exiting the one or more drag reduction devices100,100a,100bflows along air flow path AR3, thus reducing resistance from air friction and pressure friction and thereby contributing significantly to reduced fuel or other energy consumption as the land vehicle20is being driven along a surface during normal use. Referring next toFIGS.19and20, yet another alternative embodiment of the drag reduction device100is provided. InFIGS.19and20, a pair of drag reduction devices100a,100bare included internally within a portion of the trailer unit24itself, with internal cavities500a,500bcreated within the trailer unit24near the intersection of the top wall52and rear wall56to house each respective one of the fan assemblies110a,110b. In particular, the top wall52and rear wall56each have a pair of cut out portions411a,411band415a,415bthat define a pair of cavities500a,500btherebetween that each house a respective one of the fan assemblies110a,110b. The top wall52may further be defined as including a left, central and right side lateral extension52A,52B and52C, a border extension52D that defines the edge portion of the top wall52and the rear wall56, a cab extending portion52E, and a lower stepped portion52F. Similarly, the rear wall56may further be defined as including a left, central and right side lateral extension56A,56B and56C, a border extension56D that defines the edge portion to the border extension52D of the top wall52, a downward extending portion56E, and an inward stepped portion56F. The cutout portion411ais defined as the opening between the cab extending portion52E, the left side lateral extension52A, the border extension52D, and the center lateral extension52C. Similarly, the cutout portion411bis defined as the opening between the cab extending portion52E, the right side lateral extension52C, the border extension52D, and the center lateral extension52C. The cutout portion415a, which is open to the cutout portion411a, is defined as the opening between the downward extending portion56E, the left side lateral extension56A, the border extension56D, and the center lateral extension56C. Similarly, the cutout portion415b, which is open to the cutout portion411b, is defined as the opening between the downward extending portion56E, the right side lateral extension56C, the border extension56D, and the center lateral extension56C. The cavity portion500ais further defined as the area between the cutout portion411a, the lower stepped portion52F, the inward stepped portion56F, the border extension52D, the border extension56D, and the cutout portion415a.
Similarly, the cavity portion500bis further defined as the area between the cutout portion411b, the lower stepped portion52F, the inward stepped portion56F, the border extension52D, the border extension56D and the cutout portion415b. The fan assemblies100a,100b, as noted above, are each positioned within the respective cavities500a,500b. In particular, the fan assembly100ais positioned within the first cavity500asuch that the inner housing portion103is adjacent to and supported by the inward stepped portion56F, with the lower housing portion126positioned adjacent to the lower stepped portion52F, and with the outer housing portion105positioned inwardly from the border extensions52D,56D. The first opening111is aligned with the cutout portion411a, and the second opening115is aligned with the cutout portion415a. The motor housing114is coupled within the cavity created between the left side lateral extension52A, the left side lateral extension56A, and the left side wall50. Of course, in alternative embodiments, the motor housing114and motor125of one or both of the respective fan assemblies100aor100bmay be placed between the respective fan assemblies100a,100b. Similarly, the fan assembly100bis positioned within the second cavity500bsuch that the inner housing portion103is adjacent to and supported by the inward stepped portion56F, with the lower housing portion126positioned adjacent to the lower stepped portion52F, and with the outer housing portion105positioned inwardly from the border extensions52D,56D. The first opening111is aligned with the cutout portion411bof the top wall52(and hence the top wall52partially defines the first opening111), and the second opening115is aligned with the cutout portion415bof the rear wall56(and hence the rear wall56partially defines the second opening115). The motor housing114is coupled within the cavity created between the right side lateral extension52B, the right side lateral extension56B, and the right side wall50. Of course, in alternative embodiments, the motor housing114and motor125of one or both of the respective fan assemblies100aor100bmay be placed outwardly of the respective fan assemblies100a,100b. Similar to the embodiments wherein the fan assembly is coupled externally, air is drawn into the fan assembly100a,100bas the land vehicle20is traveling along a surface. The air flow A1enters through the air inlet opening111contained within the cutout portion411a,411b; the air then flows around and in the fan blades122(see air flow A2) and then is propelled between the air foils112(shown as five air foils112inFIG.20) and out the air outlet opening115(see air flow A3) contained within the cutout portion415a,415b.
In the embodiment illustrated, the rotatable shaft120, fan blades122, and end portions116a,116brotate in a counterclockwise direction R1′ about the axis of rotation AR in response to the air flows A1, A2, and A3flowing through the fan assemblies100a,100b(of course, in alternative embodiments the rotation may be in a clockwise direction). While the exemplary embodiment inFIGS.19and20shows one particular configuration of fan blades122and air foils112, the subject disclosure contemplates any of the fan blade122and air foil112configurations as provided in the drag reduction devices100a,100bcoupled to an exterior of the trailer unit24as described above with respect toFIGS.1-15. In addition, whileFIGS.19and20show a configuration in which the fan assemblies100a,100beach have separate rotatable shafts120, it is also contemplated that a single rotatable shaft120can interconnect two sets of fan blades122such as inFIG.16above. Still further, the air foils112and/or the fan blades can also be plasma actuated in a manner similar to the embodiments described inFIGS.14and15above. While the present disclosure and drawings are described in the context of semi- or tractor-trailer-type trucks, it should be appreciated that the presently disclosed devices and systems may be applicable to any moving vehicle, ranging from passenger cars, including SUVs and sedans and buses, to freight trains or locomotives. Moreover, the presently disclosed devices and systems may be applicable to any type of cargo trucks, including RV's, box-type trucks, delivery vans, or the like. Accordingly, the term “land vehicle” as provided herein is specifically intended to encompass moving vehicles and cargo trucks. Further, in embodiments such as passenger cars or SUV's that do not specifically include a distinct cab unit and a trailer unit as described above, the rearward portion of such passenger cars or SUV's can be further defined as the “trailer unit” for the purposes of the present invention. While the invention has been described with reference to the examples above, it will be understood by those skilled in the art that various changes may be made, and equivalents may be substituted for elements thereof, without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiments disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all examples falling within the scope of the appended claims. Any reference to claim elements in the singular, for example, using the articles “a,” “an,” “the” or “said,” is not to be construed as limiting the element to the singular. It will be further appreciated that the terms “include,” “includes,” and “including” have the same meaning as the terms “comprise,” “comprises,” and “comprising.”
11858563 | DETAILED DESCRIPTION The rear fence panels according to the example embodiments described herein can be configured as static or active structures of a vehicle and offer reduced aerodynamic drag and improved aerodynamic performance. As will be discussed in detail below, the placement of the rear fences, each disposed on a rear panel portion (i.e., a rearward-facing exterior surface) between the rearmost pillar (e.g., the D pillar) and the outboard along the rear windshield and being spaced apart from typical styling surfaces, allow for greater freedom in the selection of an aerodynamic shape and size of the spoiler, including tapered portions. In some embodiments, a rear fence can extend in a substantially downward direction from a top portion near the conventional spoiler to a bottom portion near the lowermost edge of the rear windshield. Furthermore, in a non-limiting example, the rear fences can be configured as active systems that may be deployed while the vehicle is moving at a predetermined speed to improve aerodynamic performance and may be retracted or stowed when the vehicle is parked or operating at low speeds to improve styling appearance. Such a system is particularly useful in cases where a cleaner aesthetic appearance for the vehicle is desired when the vehicle is parked or being driven at low speeds. Referring now toFIG.1, a vehicle100on which example embodiments of rear fence panels (referred to herein as “rear fences” or “rear fence blades”) may be installed is shown. In an example embodiment, vehicle100is a sport utility vehicle (SUV), however, it should be understood that the example embodiments may be used with any type of vehicle having a rearward-facing surface, herein referred to interchangeably as a rear panel. In general, the rear panel can comprise an area extending between the rearmost set of vehicle pillars (e.g., the C pillars in standard sedans and hatchbacks, and the D pillars in station wagons, SUVs, mini-vans, and other multi-purpose vehicles). The rear panel can also include the rear window or rear windshield of the vehicle. Although the term “rear window” or “rear windshield” may be used herein for purposes of convenience when describing the rear panel, it may be understood that the proposed embodiments may also be implemented on vehicles that do not include windows along their rearward-facing surface. As a general matter, each pillar is a vertical or near vertical support structure located at the rearmost portion of the vehicle body behind the rear doors of the vehicle. In contrast, the vehicle's A pillar is located on either side of the vehicle's front windshield, the B pillar is located between the front doors and rear doors, and the C pillar is located directly behind the rear doors. The D pillar, in vehicles including the D pillar, is located further towards the rear of the vehicle than the C pillar. For ease of reference throughout this disclosure, C pillars (in vehicles where there are only A, B, and C pillars), D pillars (in vehicles with A, B, C, and D pillars), and E pillars (in vehicles with A, B, C, D, and E pillars) will be identified more simply as the rearmost (“rear”) pillars of the vehicle. In embodiment ofFIG.1, vehicle100includes a rear pillar102located at the rear of vehicle100behind a quarter glass window104on one side of vehicle100. While not shown in this embodiment, vehicle100also includes a corresponding rear pillar located on the opposite side of vehicle100. 
In an example embodiment, vehicle100also includes a rear upper spoiler106located above an optional rear window108of vehicle100(represented here in dotted lines). In embodiments in which the proposed devices are implemented in a vehicle without a rear upper spoiler, the reference to an upper spoiler that occur below may be understood to refer to the rearmost edge of the roof of the vehicle (e.g., in the case of an SUV) and/or the uppermost edge of the trunk of the vehicle (e.g., in the case of a sedan). As shown inFIG.1, a pair of rear fences (or simply “fences”)190of the present embodiments are in a stowed or retracted position located directly under the rear upper spoiler106. In an example embodiment, the rear fences190are disposed on either side of a substantially vertical central axis186on a rear panel192of vehicle100, and would also be disposed on either side of a central longitudinal axis running the length of the vehicle through its middle. InFIG.1, rear panel192includes the rear window108disposed or extending between a first side portion172and a second side portion174. In some embodiments, the rear panel192is adjacent to or part of the tailgate portion of the vehicle. Thus, one rear fence is disposed on the first side portion172, and the other rear fence is disposed on the second side portion174of the rear panel192. In one example, the rear fences are positioned between the rear pillar and rear window. In some embodiments, the fences are disposed near or adjacent to a first trailing edge112aand a second trailing edge112b(collectively referred to as trailing edges112) of the rearmost pillars, running along the perimeter of rear window108, and directly adjacent to the rearmost pillar. In other embodiments, they may be spaced further apart from the trailing edge112(i.e., closer to rear window108) of the nearest rear pillar, and disposed further inward or closer toward a central axis, as shown in the figures. In one embodiment, the rear fence is spaced apart at least 20 mm from the neighboring rear pillar. In the stowed configuration, a body (blade) portion, which may also be referred to as a panel portion, of the rear fence is substantially aligned with the plane in which an upper panel edge196of the rear panel192extends, or in this case with a lateral axis. In contrast, in the deployed configuration, the body of the rear fence lies in a plane substantially aligned with the sides of the vehicle, or in this case longitudinal axis290(seeFIG.2), to serve as a lengthening element of the car's length from front to back. It should be understood that while the illustrated embodiments depict the fences being disposed adjacent to the rear pillars, in other embodiments where the rearmost pillars are not directly adjacent to the rear facing surface of the vehicle (e.g., where a rear window ‘wraps’ around one or both of the rear ends of the vehicle) the rear fences can extend directly from the surface of the rear window, or on a structural portion integrated into the window region for supporting the rear fence. In other words, the rearmost pillar is identified primarily for purposes of convenience in describing the approximate location of the proposed fence structure. Similarly, while rear spoilers are predominantly positioned at the top edge of a vehicle roof, in cases where the rear spoiler is lower (e.g., a sedan where the spoiler extends from the lowermost region of the tailgate), the fences can be disposed higher than the rear spoiler. 
In an example embodiment, air flow through spaces or peripheral regions110formed on either side of the rear of vehicle100between a lower edge of rear upper spoiler106and trailing edges112may cause higher aerodynamic drag for vehicle100. For example, reverse air flow along rear window108while vehicle100is moving may interact with the corner portions of peripheral regions110(i.e., the region where rear upper spoiler106and rear pillar102meet) to create end vortices116of air that reduce the overall effectiveness and/or aerodynamic performance of rear upper spoiler106. In some cases, a vortex may form at each of the two corner ends of the rear upper spoiler, leading to a local drag penalty. In addition, it may be appreciated that the flow of air along this arrangement has a strong lateral-direction component and “pushes out” the body side wake. Referring now toFIG.2, the rear fences of the present embodiments are shown in a deployed position. In this embodiment, rear fences190are shown disposed within peripheral regions110on either side of the rear of vehicle100. More specifically, a first rear fence (“first fence”)202installed or mounted on first side portion172extends downward from an underside of rear upper spoiler106toward a lower panel edge250, and a second rear fence (“second fence”)204installed or mounted on second side portion174extends downward from the underside of the rear upper spoiler106toward lower panel edge250. As a general matter, the rear upper spoiler106extends from a rearmost edge of the roof of the vehicle to a rearmost edge240. Lower panel edge250in this case refers to the edge running in a substantially lateral direction along the lower perimeter of rear window108. Thus, as shown inFIG.2, vehicle100includes two rear fences190, one on each side of vehicle100. Additional details regarding the arrangement of the components of the rear fence assembly will be presented with reference toFIGS.11A and11Bbelow. For clarity, the description makes reference to a set of axes. As a general matter, the term “longitudinal axis” as used throughout this detailed description and in the claims refers to an axis that extends in a longitudinal direction, which is a direction extending the length of a vehicle (i.e., from the front of the vehicle to its rear, as shown with a longitudinal axis290). Similarly, the term “lateral axis” as used throughout this detailed description and in the claims refers to an axis that extends in a lateral direction, which is a direction running a width of the vehicle. In the present case, the direction between the first side182and the second side184is aligned with a lateral axis280. In addition, the term “vertical axis” as used throughout this detailed description and in the claims refers to an axis that extends in a vertical direction, which is a direction running from the floor to the roof structure of a vehicle. In this case, the vertical central axis186is aligned with a vertical axis270. Each axis of the three axes may be understood to be orthogonal relative to the other two axes. Furthermore, the description makes reference to distal and proximal directions (or portions). As used herein, the distal direction is a direction outward or oriented away from a reference component or further from the reference component. Also, the proximal direction is a direction oriented toward a reference component or nearer to the reference component. 
Thus, a distal side or region refers to a portion of a component that is disposed further from a reference component and a proximal side or region refers to a portion of a component that is disposed nearer to a reference component. In addition, a medial direction or portion refers to a portion of a component that is closer to a middle of the vehicle. In an example embodiment, rear fences190are deployed from the stowed or retracted position substantially flush against the surface of the rear panel192(seeFIG.1) and aligned with lateral axis280to the deployed position shown inFIG.2, where the fences are substantially aligned with longitudinal axis290. For example, in some embodiments, rear fences190are deployed using a deployment mechanism (described below) that is configured to rotate or pivot the rear fences190from the stowed or retracted position against the rear panel192to the deployed position in response to vehicle100reaching a predetermined speed, a specific windspeed, a change in temperature above (or below) a particular threshold value, and/or a manual trigger. As noted above, in some other embodiments, the fences190may alternatively be configured as static structures that are configured to remain in the deployed position. In some embodiments, the height of a rear fence (extending away from the rear panel when deployed) can vary between approximately 50 mm-1500 mm. The height can be adjusted based on the specific vehicle's style, appearance, and observed airflow. In an example embodiment, each rear fence190is substantially continuous or uninterrupted with the underside of rear upper spoiler106on each side when rear fences190are in the deployed position, forming a substantially contained U-shaped area. Inner surfaces of rear fences190(disposed on the sides closer to the rear window108) face inwards towards each other when deployed. Rear fences190can thus serve as an extension of the rear upper spoiler106and each side portion in order to assist with attenuating the airstream vortices (e.g., end vortices116, shown inFIG.1) caused by airflows on rear panel192. For example, as shown inFIG.2, rear airflow210travels outward toward rear upper spoiler106and rear fences190and is directed back downwards in a smooth manner, interacting with body-side air flow. Thus, the streamwise vortex (seeFIG.1) is minimized and rear panel air flow is redirected in the longitudinal direction. In this manner, the static pressure increases on rear panel192(e.g., along rear window108) and act to improve aerodynamic performance overall by reducing aerodynamic drag on vehicle100as it is moving for example at or above a predetermined speed at which the rear fences190are deployed. With this arrangement, rear fences190provide aesthetically pleasing styling under parked and low speed conditions, while also providing improved aerodynamic performance at high speeds (e.g., at or above the predetermined speed, as will be described below). In other words, in some examples, the rear fences can be deployed to modify the airflow from the rear window and redirect it in the rearward direction for improved aerodynamics and lower drag. In some embodiments, the performance metrics from this particular positioning of the rear fence may offer greater aerodynamic benefits than traditional methods under similar constraints. For example, with reference to bothFIGS.1and2, airflow around the rear panel can be redirected through implementation of the proposed embodiments, leading to reduced aerodynamic drag. 
In one embodiment, the rear fences can contain or compartmentalize high pressure exerted on the rear panel (including the rear window glass) by creating a cavity or pocket encompassed by the rear fence, the rear window, and the upper spoiler. More specifically, airflow on the rear panel can then be redirected from a primarily lateral direction (seeFIG.1) to a rearward longitudinal direction (seeFIG.2) to improve pressure on the rear window and/or other rear panel components. In addition, the proposed assembly is configured to minimize the lateral mixing between the high-speed longitudinal body side flow and the laterally moving rear window flow, which in turn reduces mixing losses and end-region vortex strength (seeFIG.1). Furthermore, an additional benefit is the flexibility provided by such an assembly. For example, the shape or texture of the rear fence's outer surface need not be aerodynamically smooth (unlike a traditional D pillar spoiler, which is limited by feasibility and/or styling constraints). In one example, the rear fence has an outer surface that includes ridges or other aerodynamic texturing. Thus, these rear fence devices can be positioned further inboard relative to traditional D pillar spoilers. In addition, in some embodiments, the proposed embodiments can create or otherwise form a seal with the rear panel. The effects of the rear fence can be changed by modifying variables such as its tolerance to the rear window, the vertical height of the rear fence, the proportion of the rear window along which the rear fence extends, and the angle of the inboard edge of the fence surface. In different embodiments, rear fences190may be in the form of a pane, flap, panel, or rigid sheet piece having a shape and dimensions configured to extend the length of the side portion from the upper panel edge196to the lower panel edge250. In some embodiments, a spoiler may have a substantially triangular, rectangular, trapezoidal, rhomboid, or other quadrilateral shape, as well as other regular or irregular shapes. In different embodiments, the dimensions of the panel may vary, depending on the shape and/or configuration of the rear pillars and rear upper spoiler on the vehicle. In different embodiments, the panels forming rear fences190may be made from a variety of materials, including, but not limited to: solid materials, such as metal, carbon fiber, fiberglass, or rigid plastic, flexible materials, such as fabrics, rubber, or bendable plastics, and/or combinations thereof. In one embodiment, the fences or portions thereof comprise an injection molded plastic. Referring now toFIG.3, a side view of vehicle100with rear fences190in the stowed or retracted position is shown. As shown in this embodiment, the roof180of vehicle100can have an upper surface302that is continuous with an upper surface304of rear upper spoiler106so as to form an uninterrupted uniform surface on the top of vehicle100. When rear fences190are in the stowed or retracted position on the underside of rear upper spoiler106, peripheral regions110where rear upper spoiler106intersects or meets with the rear panel192and trailing edge112of rear pillar102running along the perimeter of rear window108create end vortices116of air that cause higher aerodynamic drag for vehicle100and reduce the overall effectiveness and/or aerodynamic performance of rear upper spoiler106. Referring now toFIG.4, a side view of vehicle100with rear fences190in a deployed position is shown.
In example embodiments, rear fences190are located within the peripheral regions110on either side of the rear of vehicle100between the rear upper spoiler106and lower panel edge250of rear panel192, where lower panel edge250in this case runs along the lower perimeter of rear window108. In this embodiment, a substantially planar or flat body of the rear fence190includes a top edge400that is configured to contact or be disposed adjacent to the underside of rear upper spoiler106. Rear fence190also includes a bottom edge402that is configured to contact or be disposed adjacent to the side portion of the rear panel192. In one embodiment, the bottom edge402is joined to a hinge mechanism (seeFIGS.7and8) disposed directly behind the rear panel192or directly atop the rear panel192. In another embodiment, the bottom edge402is fixedly attached or adhered to the surface of the rear panel192. Rear fence190also includes an outer edge404that extends from the rearmost edge240of rear upper spoiler106towards lower panel edge250. The outer edge404is the edge of the rear fence that is not adjacent to or in contact with another component of the vehicle when the spoiler is deployed. In some embodiments, outer edge404thereby can comprise multiple edges, depending on the shape of the rear fence. With this configuration, top edge400, bottom edge402, and outer edge404of the body of rear fence190form a substantially triangular shape. However, it should be understood that the outer edge404need not be linear, and can include curvature and/or multiple sides, as shown inFIGS.4,9A-9C, and10A-10G. In addition, the proposed embodiments can be implemented even on vehicle surface with substantial curvature along the rear panel. For example, vehicles that include a substantially curved rear windshield can readily include such rear fences. In some cases, the rear fences themselves may be curved to accommodate various desired airflow patterns. In some embodiments, edges of rear fence190may be arranged so as to be flush with the other vehicle body components, including a tip of top edge400being nearly flush with a portion of the rearmost edge240of the underside of rear upper spoiler106and bottom edge402being substantially flush along its respective side portion of rear panel192. In other embodiments, small gaps or spaces may be provided between the edges of rear fence190and the vehicle body components, for example, on the order of several millimeters (e.g., 2-5 mm) to allow for manufacturing tolerances and other margins. As shown inFIG.4, outer edge404of rear fence190is approximately aligned with the rearmost edge240of the curved end portion of rear upper spoiler106. That is, the dimensions of rear fence190are configured so as to extend from the surface of the rear panel192along at least a portion of the underside of rear upper spoiler106. In some embodiments, the rear fence190is substantially parallel to trailing edge112of rear pillar102. In one embodiment, top edge400may have a length at least half as long as the length of the portion of rear upper spoiler106that extends over rear window108(e.g., approximately 300 mm). However, in other embodiments, the dimensions of rear fence190may vary. For example, in some cases, top edge400of rear fence190may extend up to or past a lower end440of the rear upper spoiler106so that top edge400of rear fence190protrudes outward and has a length that is greater than the length shown inFIG.4. 
It should be understood that the dimensions of rear fence190may scale with the size and dimensions of rear upper spoiler106. In some embodiments, the rear fences according to the example embodiments described herein are deployed while the vehicle is, for example, moving at a predetermined speed to improve aerodynamic performance. Referring now toFIGS.5and6, two rear views of vehicle100including an embodiment of an active spoiler system are shown.FIG.5illustrates rear fences190in a retracted or stowed position beneath or under the rear upper spoiler106. For example, rear fences190may be in the retracted or stowed positions when vehicle100is parked or when moving at speeds less than the predetermined speed at which rear fences are to be deployed. In this embodiment, each individual rear fence, including first fence202on first side182of vehicle100and second fence204on the opposite second side184of vehicle100, is folded approximately flat against the first side portion172and second side portion174respectively in their retracted or stowed positions. That is, in the retracted or stowed position, first rear fence202and second rear fence204are arranged underneath rear upper spoiler106such that the inner surfaces (e.g., inner surfaces610and620, shown inFIG.6) of each individual rear fence face the surface of the rear window108, while the outer surfaces510and520face rearward (i.e., in a direction toward the viewer inFIG.5). In other words, the planar body of each rear fence is oriented in a lateral direction when retracted. In some embodiments, rear panel192may include corresponding recesses on either side that are configured to receive each rear fence190in the retracted or stowed position. In an example embodiment, each recess has a shape and size that corresponds and/or conforms to the shape and size of the respective rear fence. With this arrangement, rear fences190, including first rear fence202and second rear fence204, may be hidden or minimally visible when in the retracted or stowed position so as to provide aesthetically pleasing styling under parked and low speed conditions. For example, the recess may have a depth that is substantially similar to a thickness of the rear fence so that the rear fence may fit snugly within the recess and provide a substantially smooth outer surface to the rear panel. However, in other embodiments, the rear panel192may not include recesses. For example, as shown in the figures, the rear panel can be substantially smooth and continuous, where each rear fence is disposed against and above the external surface (i.e., providing a layer that rests above the rear panel surface). Next,FIG.6illustrates rear fences190, including first rear fence202and second rear fence204, in their deployed positions on either side of vehicle100. In this embodiment, each of first rear fence202and second rear fence204has been rotated or pivoted outward (away from the rear panel192) by a deployment mechanism (described below) that transitions each rear fence from the stowed position to an upright position so that outer edge404of each rear fence is substantially continuous or uninterrupted with an adjacent rearmost edge240of rear upper spoiler106on each side. In other words, the planar body of the rear fence is now oriented in a longitudinal direction. With this arrangement, rear fences190, including first rear fence202and second rear fence204, provide improved aerodynamic performance to vehicle100in their deployed positions. 
Referring now toFIGS.7and8, one example of a deployment mechanism700configured to move or transition rear fences190between the retracted or stowed position and the deployed position is shown. In an example embodiment, each individual spoiler of rear fences190may be associated with a separate deployment mechanism700that is configured to rotate or pivot the spoiler between the retracted or stowed position and the deployed position. In other embodiments, both rear fences190on each side of vehicle100may be deployed and/or retracted using a single deployment mechanism. For example, a single deployment mechanism may be connected to both rear fences using linkages and other mechanisms to deploy and/or retract both rear fences in unison. In different embodiments, deployment mechanism700is located behind or beneath a side portion of the rear panel and/or the rear pillar and arranged with a pivot or rotation axis702that is approximately aligned along the longitudinal direction of vehicle100(e.g., from the front end to the rear end of vehicle100). In some embodiments, pivot or rotation axis702may also be angled in lateral direction, or in vertical direction, or be oriented diagonally relative to the three axes. In some embodiments, the deployment mechanism700is disposed within a compartment or other space formed in the interior of rear panel. In other embodiments, the deployment mechanism700may protrude externally outward from the rear panel. In an example embodiment, deployment mechanism700includes a motor704configured to rotate or turn a linkage706that is connected or attached to rear fences190by one or more support members710. In different embodiments, the system can also include actuation components, such as but not limited to electromagnetic and/or pneumatic actuators. By action of motor704rotating or turning linkage706, rear fences190may be rotated or pivoted between the retracted or stowed position and the deployed position. In this embodiment, support members708include a plurality of members connected or attached to the inner surface610of rear fences190(i.e., on the back side of rear fences190opposite outer surface510, so that support members are not visible when the fences are stowed). Support members710, in this case including three members, are approximately perpendicular to linkage706so as to translate the rotational movement of linkage706from motor704to the pivoting or rotating motion that transitions rear fences190between the retracted or stowed position and the deployed position. Additionally, in some embodiments, support member710can be connected to linkage706at one end so that they can rotate or turn along with linkage706when driven by motor704. In some embodiments, the apparatus described herein may include provisions for remaining in the retracted position until deployment is triggered. For example, inFIG.7, the rear fence includes a magnetic component750embedded or attached to the surface of the rear fence that is configured to help secure the inner surface of the rear fence against the rear panel. The attractive force is strong enough to hold the rear fence against the rear panel during normal operation, and weak enough to freely permit the transition of the rear fence from the retracted position to the deployed position. In other embodiments, support members710may include a larger or smaller number of support members. For example, in some cases, more support members may be used based on the type of material used to form the panel of rear fences190. 
In addition, in cases where the material used to form the panel of rear fences190is a flexible material (including, for example, fabric), support members710may include a frame or other structure that defines a perimeter of the rear fence190to provide its triangular shape. In another embodiment, there may be no support members, or they may vary in size and placement and orientation along the rear fence surface. InFIG.7, the deployment mechanism700for rotating or pivoting rear fences190is shown with a representative rear fence190in a retracted or stowed position. In this embodiment, rear fence190is shown in the retracted or stowed position such that inner surface610is facing downwards (e.g., towards rear panel, as shown in previous figures). In this embodiment, motor704of deployment mechanism700rotates or turns linkage706in a clockwise direction to cause the rear fence to pivot or rotate from the retracted or stowed position to the deployed position. Similarly, reverse motion by motor704drives linkage706in a counter-clockwise direction to cause the rear fence to pivot or rotate back from the deployed position to the retracted or stowed position. Referring now toFIG.8, the deployment mechanism700for rotating or pivoting rear fences190is shown with a representative rear fence190in a deployed position. In this embodiment, motor704of deployment mechanism700has rotated or turned linkage706in a clockwise direction to cause the rear fence to pivot or rotate from the retracted or stowed position to the deployed position shown inFIG.8. In this embodiment, outer surface510of rear fence190is facing away from a central axis of the vehicle, as shown in the previous figures. In one embodiment, motor704rotates or turns linkage706to pivot or rotate rear fence190approximately 90 degrees from the retracted or stowed position to the deployed position. In some cases, rear fence190may be rotated or pivoted more or less than 90 degrees (e.g., in a range between 80-110 degrees) in order to reach and fill peripheral regions between rear upper spoiler106and rear panel. For example, the amount of rotation may depend on the shape and slope of the vehicle body components, including but not limited to the rear panel (including the rear window), rear pillars, and/or rear upper spoiler configurations on any given vehicle. Although an active deployment mechanism is described above, in different embodiments, the system can alternatively employ a semi-passive mechanism in which airspeed can cause the rear fences to “flip” open and transition from the retracted position to the deployed position. In other words, the shape and orientation of the rear fence can be configured to push the rear fence up when windspeed exceeds a particular threshold value. Furthermore, in some embodiments, a spring-retracted system may be used to move the fence between a deployed and stowed configuration. Similarly, in some embodiments, one or more electromagnets may be used to change the orientation of a fence. For purposes of clarity,FIGS.9A-10Gprovide some non-limiting examples of variations of rear fences that may be implemented in the above-described system. InFIG.9A, vehicle100includes a first fence type910comprising a substantially trapezoidal shape, extending across most of the length of the rear panel192from top to bottom. Thus, a first end930is disposed directly adjacent or just touching the upper end of the rear panel, while a second end932is disposed directly adjacent or just touching the lower end of the rear panel. 
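The passage above describes the motor driving the linkage through roughly 90 degrees (with a typical range of about 80-110 degrees) between the stowed and deployed positions. The sketch below is a minimal, hypothetical illustration of how a controller might command such a mechanism; the motor interface, class names, and `rotate_to` call are assumptions for illustration only and are not part of the patent's disclosure.

```python
# Minimal sketch of commanding one fence deployment motor/linkage pair.
# Only the 80-110 degree range (nominally 90 degrees) comes from the description;
# everything else here (names, motor API) is a hypothetical assumption.

DEPLOY_ANGLE_NOMINAL_DEG = 90.0
DEPLOY_ANGLE_MIN_DEG = 80.0
DEPLOY_ANGLE_MAX_DEG = 110.0


def clamp(value, lo, hi):
    """Keep a commanded angle inside the mechanically allowed range."""
    return max(lo, min(hi, value))


class FenceDeploymentMechanism:
    """Wraps a single motor/linkage pair that pivots one rear fence."""

    def __init__(self, motor):
        self.motor = motor          # hypothetical motor driver object
        self.current_angle = 0.0    # 0 degrees = stowed, flat against the rear panel

    def deploy(self, target_angle_deg=DEPLOY_ANGLE_NOMINAL_DEG):
        """Rotate the linkage clockwise until the fence reaches the deployed angle."""
        target = clamp(target_angle_deg, DEPLOY_ANGLE_MIN_DEG, DEPLOY_ANGLE_MAX_DEG)
        self.motor.rotate_to(target)        # assumed motor API
        self.current_angle = target

    def stow(self):
        """Rotate the linkage counter-clockwise back to the stowed position."""
        self.motor.rotate_to(0.0)           # assumed motor API
        self.current_angle = 0.0
```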
InFIG.9B, a second fence type920also extends across most of the length of the rear panel192from top to bottom, but in this case has a substantially rectangular shape. In addition, it can be understood that any of the rear fences may have different lengths, as shown inFIG.9C, where a third fence type930extends only partway (in this case, approximately halfway) down the rear panel192. Thus, in contrast toFIG.9A, a first end940is disposed directly adjacent or just touching the upper end of the rear panel, while a second end942is disposed in an interior portion of the rear panel, spaced apart from the lower end of the rear panel. In order to provide the desired aerodynamic benefits described here, it may be appreciated that the length of the fence should extend at least a third of the maximum distance (i.e., height) from the upper end of the rear panel to the lower end of the rear panel. Several more non-limiting examples of varying fence shapes that may be implemented are shown inFIGS.10A-10G, including regular and irregular shapes. As a general matter the width of the fence decreases as it approaches the lowermost terminus (as shown in the examples ofFIGS.10A-10G), forming a narrowed or tapered end portion, though in other cases the width can remain substantially constant, depending on the aerodynamic flow desired. Referring now toFIGS.11A and11B, additional details regarding embodiments of static configuration1110and active configuration1120are presented by reference to top-down cutaway views of a side portion of the rear panel for each embodiment. As shown in bothFIGS.11A and11B, an end portion of the rear upper spoiler1140is disposed further outboard (distal) relative to a first side portion1190inFIG.11Aand a second side portion1192inFIG.11B. In this case, the side portions include the fence and adjoining rear panel surface such as a panel1160and/or a glass portion1150. In addition, as noted above, the fences are positioned along the side portions (e.g., first side portion1190) of the rear panel of the vehicle. Thus, inFIG.11A, a static rear fence1112is disposed inboard (closer toward a midline of the rear panel) of the end portion of the rear upper spoiler1140. Similarly, inFIG.11B, an active rear fence1170is disposed inboard (closer toward a midline of the rear panel) of the end portion of the rear upper spoiler1140. More specifically, in the static configuration illustrated inFIG.11A, a first anchor portion1114of the static rear fence1112is disposed directly inboard of the end portion of rear upper spoiler1140, and outboard of panel1160and glass1150. Similarly, in the active configuration ofFIG.11B, a second anchor portion1122is disposed directly inboard of the end portion of rear upper spoiler1140, and outboard of the panel1160and glass1150. In different embodiments, the first anchor portion1114and the second anchor portion1122are embedded or integrally formed within their respective side portions. Thus, inFIGS.11A and11B, first anchor portion1114is integrally formed in first side portion1190and second anchor portion1122is integrally formed in second side portion1192. In other embodiments, an anchor portion may be disposed atop or against of a surface of the side portion. In addition, it may be observed the first side portion1190inFIG.11Aand the second side portion1192inFIG.11Bare each disposed rearward of a rearmost pillar1130of the vehicle. 
In some embodiments, the side portion can correspond to a rearmost surface of the rearmost pillar1130or is disposed directly adjacent to a rearmost surface of the rearmost pillar1130. Furthermore, as shown inFIG.11A, the first anchor portion1114and a protruding first blade portion1116are integrally formed as one piece in the static configuration1110. An angle A1can vary in different embodiments, but in this case may be understood to be approximately 90 degrees. In contrast, in the active configuration1120ofFIG.11B, the active rear fence1170includes a hinge portion1124that connects second anchor portion1122to a second blade portion1126. The hinge portion1124, when the fence is activated, permits rotation of the second blade portion1126from a first position1180(shown in dotted line) to a second position1182, in this case corresponding to a rotation around an angle A2. In different embodiments, the maximum value of angle A2can vary, though in this case it is shown as being around 90 degrees. In addition, in some embodiments, the second blade portion1126may be configured to rotate and maintain a position anywhere between first position1180and second position1182. Additional views illustrating some of the proposed systems are provided with reference toFIGS.12A-16. It should be understood that one or more features discussed with reference toFIGS.1-11may be implemented by the devices depicted inFIGS.12A-16; similarly, one or more features discussed with reference toFIGS.12A-16may be implemented by the devices depicted inFIGS.1-11. InFIG.12A, a rear view of vehicle100is shown in which several referential lines generally demarcating regions have been added for purposes of clarity to the reader. The rear fences have been removed fromFIGS.12A and12Bto allow the reader to more clearly distinguish each region. As noted earlier, the rear panel192extends between first trailing edge112aand second trailing edge112bin a direction generally aligned with the width of the vehicle100. InFIG.12Athis distance is shown as a distance D1. The rear panel192further extends from an upper panel edge196to lower panel edge250in a direction generally aligned with the vertical height of the vehicle100. InFIG.12Athis distance is shown as a distance D2. In addition, in this example, the vehicle includes a lower tailgate portion1210that is directly below and adjacent to the rear panel192. In other words, the rear panel refers to a rear-facing surface of the vehicle that extends between the first rearmost pillar and the second rearmost pillar of the vehicle (i.e., laterally), regardless of the make or model. That is, the rear panel192extends from a region just inboard or proximal (i.e., toward the central axis186) of a first rearmost pillar1202and a region just inboard or proximal (i.e., toward the central axis186) of a second rearmost pillar1204, while the height of the rear panel192can vary based on the make or model of the vehicle (e.g., whether the vehicle is an SUV, van, station wagon, sedan, etc.). 
Thus, the rear panel192may present as shown here, may include or not include a rear glass, may comprise two pieces when formed in a set of rear double doors, may be disposed above or below the tailgate or be mounted within the tailgate, may comprise the region extending laterally that includes both the right and left taillights, may be disposed directly above a pop-up or down trunk hatch or boot, may be part of a door that opens left or right to expose the rear interior of the vehicle, may comprise a substantially smooth or continuous piece (e.g., with no rear glass), and/or or may include different sections, such as but not limited to a rear glass disposed within a larger frame. In different embodiments, vehicles implementing the proposed devices include upper spoiler106. In embodiments in which the proposed devices are implemented without a rear upper spoiler, the reference to an upper spoiler that occur below may be understood to refer to the rearmost edge of the roof of the vehicle. In cases in which the rear of a vehicle is not symmetrical (e.g., Nissan Cube®) and/or includes only one curved end portion for the upper spoiler or roof edge, the positioning of the second fence may be understood to be selected to ensure both fences are equidistant from the central axis186. As shown inFIG.12A, for purposes of clarity to the reader, the rear panel192can be understood to include a first boundary line1212and a second boundary line1214. The location of each boundary line can be understood to be linked to the overall shape and curvature of upper spoiler106or rearmost roof edge. More specifically, as shown in magnified view inFIG.12Bof a corner region1250of the vehicle, the upper spoiler106(or rearmost roof edge) includes an elongated body portion1262extending between a first curved end portion or junction and a second curved end portion or junction. For example, a curved end portion1260of the rear upper spoiler is directly adjacent to the first pillar1202, while another curved end portion is directly adjacent to the second pillar1204. In some embodiments, the curved end portion of the upper spoiler or rearmost roof edge can extend directly from the rearmost pillar. As shown in isolated view of corner region1250, an upper corner portion1270of the rear panel192is directly inboard of the first corner region1250. As the curved end portion1260extends from a first end1272to the elongated body portion1262, its curvature changes. For purposes of reference, a first tangential line1252, a second tangential line1254, a third tangential line1256, and a fourth tangential line1258have been included to better reflect the change in curvature. The angle of each tangential line can be viewed relative a horizontal line1286extending along the lateral width of the vehicle. The first tangential line1252touches the curved end portion1260at a first point, the second tangential line1254touches the curved end portion1260at a second point, the third tangential line1256touches the curved end portion1260at a third point, and the fourth tangential line1258touches the curved end portion1260at a fourth point where the first point is disposed most outboard, the fourth point is disposed most inboard, the second point is disposed between the first point and the third point, and the third point is disposed between the second point and the fourth point. As each point moves further inboard, the orientation of the corresponding tangential line becomes increasingly flat. 
In other words, the tangential lines show a transition from an orientation that is generally vertical or downward to an orientation that is generally horizontal, shown here as the second point (also referred to as a transition point) along the curved end portion1260. For purposes of this application, a boundary line (e.g., first boundary line1212) corresponds to a substantially vertical boundary line that passes through the second point, which corresponds to the point at which a tangential line for the curved end portion becomes more horizontal than vertical (i.e., approximately 45 degrees or less relative to the horizontal line1286). The boundary line can be slightly angled in cases where the trailing edge is also non-vertical to extend in an approximately parallel direction relative to the trailing edge (such as the example ofFIG.12A). In other embodiments, the boundary line is true-vertical, again depending on the orientation of the adjacent trailing edge for the rearmost pillar. Thus, the two boundary lines can be understood to extend in a primarily downward vertical orientation along the point at which exterior-facing upper surface304of the upper spoiler or rearmost roof edge is aligned with a more horizontal orientation than a vertical orientation. In the drawings, the boundary line is tilted to align with the outer slope of the rear panel and is therefore not true-vertical unless the rear panel is also vertically disposed. In this case, the first tangential line1252identifies a point that is outboard of the boundary line, while the third tangential line1256and fourth tangential line1258identify points inboard of the boundary line. In all cases, the boundary line demarcates inboard regions (1232,1242) from outboard regions (1230,1240). In this example, the first boundary line1212is spaced apart from the first trailing edge112a, demarcating a first outboard region1230from a first inboard region1232. More specifically, the first outboard region1230extends in an outboard direction from the first boundary line1212to the first trailing edge112a, and the first inboard region1232extends in an inboard direction from the first boundary line1212to the central axis186. In a similar fashion, a second outboard region1240extends in an outboard direction from the second boundary line1214to the second trailing edge112b, and a second inboard region1242extends in an inboard direction from the second boundary line1214to the central axis186. Each outboard region is adjacent to a rear pillar. Together, the first inboard region1232and the second inboard region1243comprise a substantially continuous inboard section of rear panel192, sandwiched or otherwise extending between the two outboard regions. While not all vehicles will include a smoothly continuous curved end portion along the two upper corner regions of the rear-facing surface, nor will all vehicles include an upper spoiler, the vehicle will include two intersections along its top rear edge at which the two edges (e.g., the roof edge/elongated portion and the trailing edge) come together. The point at which the corner portion has a more horizontally aligned edge may be understood to serve as the point through which the boundary line can extend vertically downward. In all cases, the proposed rear fences will be spaced apart from the trailing edges of the rear pillars in order to ensure stylistic and structural freedom, as discussed below. 
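The boundary line defined above passes through the point at which the tangent to the curved end portion drops to approximately 45 degrees or less relative to horizontal. Purely as an illustration of that geometric criterion, the following sketch locates such a transition point along a sampled roof-edge curve. The sampling of the curve into (lateral, vertical) points and all names are assumptions for the example, not anything stated in the patent.

```python
import math


def find_transition_point(curve_points):
    """
    Given points along the curved end portion, ordered from the outboard end
    (near the pillar) toward the inboard elongated body, return the first point
    at which the local tangent is 45 degrees or less relative to horizontal.
    Each point is (lateral_mm, vertical_mm); the boundary line can then be taken
    to extend substantially vertically downward through the returned point.
    """
    for (y0, z0), (y1, z1) in zip(curve_points, curve_points[1:]):
        dy = y1 - y0            # lateral change, toward the vehicle centerline
        dz = z1 - z0            # vertical change
        if dy == 0:
            continue            # locally vertical tangent; keep searching inboard
        tangent_angle_deg = math.degrees(math.atan2(abs(dz), abs(dy)))
        if tangent_angle_deg <= 45.0:
            return (y1, z1)     # transition point: tangent is now more horizontal than vertical
    return None                 # no transition found in the sampled span
```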
In embodiments for vehicles in which this intersection may be unclear or ambiguous, the boundary line can be understood to be spaced apart from the trailing edge of the nearest rearmost pillar by at least a third of an inch. In some embodiments, the vehicle100includes rear window108that is disposed in a central region of the rear panel192. For purposes of this application, the blade portions of each rear fence of the proposed embodiments will be positioned in or on a first side portion1282of the rear panel192that extends between the first boundary line1212and a first periphery1292of the rear window108, and in or on a second side portion1284of the rear panel192that extends between the second boundary line1214and a second periphery1294of the rear window108. In other words, the blade of each fence will protrude outward from a surface of the vehicle associated with either the first portion1282or the second portion1284. Such an arrangement, in which each blade is spaced apart from the rearmost pillars and disposed inboard of the outer curved end portion of the upper spoiler or outboard region, allows for a significantly wider range of stylistic and aerodynamic designs for the shape and size of each blade. As one non-limiting example, the fence—being disposed inboard of and spaced apart from the trailing edge—need not be dependent on the appearance of the outboard body design of the vehicle, and can be designed independently, without detracting from the aesthetic of the design of the trailing edges and pillars. By positioning of each fence further inboard, the proposed embodiments offer significantly greater flexibility in styling, while also maintaining the aerodynamic improvements described earlier. For example, by providing a U-shaped compartment or cavity, bounded by the elongated body portion of the upper spoiler and the two blades, air flow is more effectively directed (seeFIG.2). Some examples of this arrangement are illustrated inFIGS.13A-16below. InFIGS.13A and13B, an embodiment in which a set of dynamic fences1300are installed is depicted. InFIG.13A, a first dynamic fence1302and a second dynamic fence1304are each in the stowed configuration. Each dynamic fence includes an optional garnish portion1310and a blade portion1312, the two pieces being joined along a hinge portion associated with bottom edge402. In different embodiments, the optional garnish portion1310can be implemented in any of the embodiments disclosed herein to offer additional design and stylistic flexibility and/or an alternative aesthetic, whereby the garnish corresponds to a material or panel that extends from the bottom edge of the blade in an outboard direction toward and/or up to the trailing edge. Thus, in one embodiment, a garnish can cover a portion of the rear panel that is associated with the outboard region. InFIG.13B, the first dynamic fence1302and the second dynamic fence are each in the deployed configuration. It can be seen that some or all of hinge portion and bottom edge where the blade is in contact with or mounted on the rear panel106is located within the inboard region of the rear panel192(i.e., inboard relative to first boundary line1212). In addition, in some embodiments, the top edge400of each rear fence190when deployed extends distally outward from the surface of the rear panel192to the lower rearmost edge240of upper spoiler106. FIGS.14-16depict additional examples in which the rear fences are installed as static devices in the inboard regions of the rear panel. 
InFIG.14, a first pair1410of rear fences190are shown. In this example, each rear fence is spaced apart from trailing edges112of the rear panel192by at least the distance of one of the two outboard regions1230and1240. In other words, a bottom edge of a rear fence will be spaced apart in the inboard direction from the trailing edge by at least a distance D3, and in this case is even further spaced apart by a larger distance D4. As noted earlier, due to the flexibility in the position of the fences, their design can be modified without the need to accommodate the structural design of the rear pillars. In some embodiments, this can allow for the customization and/or personalization of the blades for different customer groups or types, including various outer edge shapes or curves. InFIG.15, the size of each fence in a second set1510of rear fences190has been enlarged, such that the top edge400now extends further distally outward. This modification has occurred without changing the design of the peripheral portions of the vehicle associated with the rear pillars. The position of the fences, as noted earlier, may also be moved further inboard as desired. InFIG.16, a third set1610of rear fences190is shown in which the fences are spaced further inboard toward the center by a distance D5that is larger than distance D4ofFIG.14. In other words, each fence in this embodiment is now nearer to the central axis186relative to the fences shown inFIG.14. The arrangement of the fences inFIG.16represents the approximate maximum distance from the trailing edges (i.e., just on the periphery of the rear window) that would continue to provide aerodynamic benefits as described herein. In some embodiments, the rear fences of the present embodiments may be controlled between the retracted or stowed position and the deployed position using a deployment control system. For example, in different embodiments, the proposed systems and methods can use sensed information from vehicle sensors to detect the requisite increase in speed and/or merging onto a highway environment—also referred to herein as a triggering event—indicating the fences should be deployed. By automatically deploying the aerodynamic structures in response to a particular velocity, the system and method can help reduce the degree of air drag on the vehicle. Furthermore, it may be appreciated that in some embodiments, each rear fence can be configured such that it may deployed or otherwise controlled independently of the other rear fence. In other words, one rear fence may be deployed while the other remains retracted, or one rear fence may be only partly deployed while the other is fully deployed, etc. This type of control can be effective in vehicle conditions such as high side-winds, yaw air flow, and steering at high speeds, etc. As one example, in some embodiments, the vehicle may include a speed monitoring and spoiler deployment system. The system may include multiple automotive components that may communicate via electronic control units. The components may include individual apparatuses, systems, subsystems, mechanisms and the like that may be included in the vehicle. In different embodiments, the vehicle may include sensors that may detect changes in the environment or detect events to determine whether the vehicle has exceeded a speed threshold for at least a first duration, and/or whether the vehicle has fallen below a speed threshold for at least a second duration. 
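The description above notes that each rear fence may be deployed or controlled independently of the other, which can be useful under high side-winds, yaw air flow, or high-speed steering. The short sketch below illustrates one way such per-fence control could be expressed; the yaw-flow signal, the 0-to-1 deployment "fraction" interface, and every threshold value are hypothetical assumptions chosen only to make the idea concrete.

```python
def per_fence_targets(vehicle_speed_kph, yaw_flow_angle_deg,
                      deploy_speed_kph=72.0, yaw_threshold_deg=5.0):
    """
    Return (left_fraction, right_fraction) deployment targets in [0.0, 1.0].
    Positive yaw_flow_angle_deg is taken here to mean airflow approaching from
    the right side of the vehicle. All thresholds are illustrative placeholders.
    """
    if vehicle_speed_kph < deploy_speed_kph:
        return (0.0, 0.0)            # below the deployment speed: keep both fences stowed

    if abs(yaw_flow_angle_deg) <= yaw_threshold_deg:
        return (1.0, 1.0)            # near-symmetric flow: deploy both fences fully

    if yaw_flow_angle_deg > 0:
        return (1.0, 0.5)            # crosswind from the right: partially deploy the right fence
    return (0.5, 1.0)                # crosswind from the left: partially deploy the left fence
```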
In another example, the vehicle can include sensors that detect when the vehicle is on a designated highway or other high-speed roadway, or an absence or presence of obstacles such as speed bumps. A number of different sensors may be used that include a wide variety of technologies, including but not limited to infrared sensors, ultrasonic sensors, microwave sensors, audio sensors, proximity sensors, accelerometers, odometer data, pressure sensors, light sensors, magnetometers, gyroscopes, passive acoustic sensors, laser detectors, GPS navigation sensors, or the like that may be used to detect the speed and/or environmental context of the vehicle. As noted earlier, deployment and/or retraction can be initiated manually and/or automatically. In the case of a manual trigger, a user may select an option for a manual trigger via an interface provided via a user device connected to the vehicle or through a vehicle user interface. Thus, communications may optionally be established between a vehicle computing system and a user device. In the case of an automated initiation, the triggering event will correspond to sensor data received via vehicle sensors indicating a condition matching a parameter for the deployment or retraction of one or both rear fences. In one embodiment the vehicle has an onboard diagnostic (OBD) system included in or connected to the vehicle computing system that is configured to continuously monitor various aspects of a vehicle such as the powertrain, emissions, chassis, and body of the vehicle, as well as other vehicle aspects. The OBD system can monitor various automotive sensors built into the vehicle. In the automotive industry there is an industry-wide standard for OBD computers, and for what the OBD system monitors, known as OBD-II. These standard sensors provide data relating to various vehicle systems including the engine, transmission, chassis, and other vehicle systems. In one embodiment the activation sensor(s) are sensors already incorporated in the OBD. In another embodiment one or more of the sensors are separate from the OBD. Those skilled in the art will appreciate that other triggers and sensors may be used in the system. Such sensor devices may be used to determine the vehicle's attitude, position, heading, velocity, location, acceleration, operation history, and the like. Sensor systems may also be used to sense objects around the vehicle, such as other vehicles, pedestrians, bicyclists, buildings, traffic signs, traffic lights, intersections, bridges, and the like. The system may be triggered by one of the vehicle's safety systems being deployed, such as the auto door lock being engaged or disengaged, or the parking of the vehicle. Those skilled in the art will appreciate that a multitude of other sensors and triggers could be used and the embodiments are not limited to the listed sensors. Referring now toFIG.17, a block diagram of an example embodiment of a deployment control system1700is shown. In some embodiments, deployment control system1700may be installed or implemented in a vehicle (e.g., vehicle100, described above) to control actuation of the rear fences (e.g., rear fences190, described above) between the retracted or stowed position and the deployed position. For example, in an example embodiment, deployment control system1700may be part of, or in communication with, other systems in the vehicle, such as an engine control unit (ECU) or other control systems for the vehicle. 
In one embodiment, deployment control system1700includes at least a controller logic1702comprising at least one processor1704and a memory1706for storing instructions for implementing deployment and/or retraction of the rear fences. In some embodiments, controller logic1702may receive one or more inputs from various sources within the vehicle (e.g., vehicle100) that may be used to detect a deployment condition for sending an instruction to deploy the rear fences (e.g., rear fences190), as well as detecting a retraction condition for sending an instruction to retract the rear fences. In an example embodiment, the inputs to controller logic1702may include, but are not limited to: one or more speed sensors1708configured to detect and/or determine a speed of the vehicle (e.g., wheel speed sensors, global positioning system (GPS) sensors, or other sensors typically included on a vehicle that detect or determine a travel speed of the vehicle), one or more temperature sensors1710configured to detect or measure an ambient temperature outside of the vehicle, a user override input1712configured to allow a user to manually control deployment and/or retraction of the rear fences, wind sensors1716configured to detect and/or determine a wind speed, and/or inputs from performance settings1714associated with the vehicle. For example, performance settings1714may include options for a sport or performance mode that prioritizes vehicle performance (such as speed or acceleration) or an economy mode that prioritizes fuel efficiency or energy/battery consumption. Controller logic1702may also receive inputs from other vehicle sensors, such as rain or precipitation sensor. In an example embodiment, controller logic1702receives inputs from one or more of speed sensors1708, temperature sensors1710, user override1712, wind sensors1716, and/or performance settings1714and, based on the inputs, determines whether to send an instruction to one or more motors1716of a deployment mechanism (e.g., motor704of deployment mechanism700, described above) to deploy or retract the rear fences. For example, controller logic1702may use the received inputs to determine whether a deployment condition or a retraction condition has been met based on predetermined criteria stored in memory1706. In one embodiment, the deployment condition may be a predetermined speed of the vehicle. In another embodiment, the deployment condition may be a predetermined wind speed. In other embodiments, the deployment condition may be a combination of a predetermined speed and other inputs, such as temperature (from temperature sensor1710) and/or performance mode (from performance settings1714) and/or wind speed (from wind sensors1716). In one embodiment, the retraction condition may be a predetermined speed of the vehicle, for example, the same predetermined speed as the deployment condition or a different predetermined speed that is lower than the predetermined speed used for the deployment condition. In other embodiments, the retraction condition may be a combination of the predetermined speed and other inputs such as temperature (from temperature sensor1710) and/or performance mode (from performance settings1714) and/or wind speed (from wind sensors1716). In some embodiments, a user (e.g., the driver of vehicle100) may manually instruct controller logic1702to send an instruction to motor1716to deploy or retract the rear fences via user override1712. 
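To make the logic described above more concrete, the following is a minimal sketch of how a controller might combine the listed inputs (vehicle speed, ambient temperature, wind speed, performance mode, and a user override) into a single deploy/retract decision. The data structure, threshold values, and mode names are assumptions chosen for illustration; the 45 mph figure is the example deployment speed given in this description, and the minimum-temperature check reflects the freezing-conditions guard discussed below.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FenceInputs:
    speed_mph: float             # from speed sensors (e.g., wheel speed or GPS)
    ambient_temp_c: float        # from the ambient temperature sensor
    wind_speed_mph: float        # from the wind sensor
    performance_mode: str        # "sport", "normal", or "economy" (illustrative)
    user_override: Optional[str] # "deploy", "retract", or None


# Placeholder thresholds; only 45 mph is named in the description as an example.
DEPLOY_SPEED_BY_MODE = {"sport": 35.0, "normal": 45.0, "economy": 50.0}
MIN_AMBIENT_TEMP_C = 0.0         # assumed freezing-protection threshold


def deployment_requested(inputs: FenceInputs) -> bool:
    """Return True if the controller should instruct the motor(s) to deploy."""
    if inputs.user_override == "deploy":
        return True                              # manual trigger wins
    if inputs.user_override == "retract":
        return False
    if inputs.ambient_temp_c < MIN_AMBIENT_TEMP_C:
        return False                             # avoid deploying when ice could damage the mechanism
    threshold = DEPLOY_SPEED_BY_MODE.get(inputs.performance_mode, 45.0)
    # Deploy when either vehicle speed or measured wind speed reaches the threshold.
    return inputs.speed_mph >= threshold or inputs.wind_speed_mph >= threshold
```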
That is, an input received from user override1712may be configured to satisfy a deployment condition or a retraction condition that causes controller logic1702to send the corresponding instruction to motor1716to deploy or retract the rear fences. With this arrangement, a user may have manual control over whether the rear fences are in the retracted or stowed position or the deployed position. Referring now toFIG.18, a method1800of re-directing or permitting air flow along a rearward-facing surface (rear panel) of a vehicle is presented. The method1800includes a first step1810of deploying a first rear fence that extends from a first side portion of the rearward-facing surface such that the first rear fence rotates from a first orientation to a second orientation. A second step1820involves establishing a first aerodynamic zone along the first side portion between the first rear fence and an underside of a rear spoiler. A third step1830includes causing airflow to shift from a generally lateral direction to a substantially rearward and/or longitudinal direction as it moves through the first aerodynamic zone. This method thereby permits a detached flow of air outboard of the rear fence, allowing for a system and structure not constrained to aerodynamic continuity with the side panel beyond the rearmost pillar. In other embodiments, the method may include additional steps or aspects. In some embodiments, the first aerodynamic zone extends to a central axis and merges with a second aerodynamic zone formed by the underside of the rear spoiler and a second rear fence disposed on a second side portion. In one example, the first rear fence is disposed below the underside of the rear spoiler. In another example, deployment occurs in response to a change in speed of the vehicle. In some embodiments, the method can also include a step of retracting the first rear fence, thereby causing airflow to shift back from the rearward longitudinal direction to the lateral direction. In some embodiments, the first rear fence is inboard of a rearmost pillar of the vehicle. In one embodiment, air pressure is greater inboard of the first rear fence than outboard of the first rear fence when the first rear fence is deployed. As discussed above, deployment of rear fences in accordance with aspects of the present disclosure, for example per the method1800ofFIG.18, may be implemented by at least one processor in a vehicle, such as a processor1704of controller logic1702, described above. In an example embodiment, the method1800may begin at an input stage. At the input stage, one or more inputs from vehicle sensors are received at the processor. For example, in one embodiment one or more inputs from speed sensors1708, temperature sensor1710, user override1712, wind sensors1716, and/or performance settings1714may be received at processor1704of controller logic1702. Following the input stage, the method1800can proceed to a detection stage. At the detection stage, a deployment condition is detected. As described above, in an example embodiment, the deployment condition may be detected based on a predetermined speed of the vehicle. For example, when the vehicle speed (e.g., received from speed sensors1708) is equal to or greater than the predetermined speed, then the deployment condition may be detected during the detection stage. In one embodiment, the predetermined speed for the deployment condition may be 45 miles per hour. 
In different embodiments, the predetermined speed for the deployment condition may be set at a higher or lower speed. In other embodiments, the deployment condition detected during detection stage may include other inputs in combination with the predetermined speed. In one embodiment, an ambient temperature received from temperature sensor1710and/or a presence of rain or precipitation from a rain or precipitation sensors may be used in combination with the predetermined speed to determine the deployment condition. For example, the deployment condition may include a minimum ambient temperature in addition to the predetermined speed so that the rear fences are not deployed in conditions where ice or freezing rain may cause damage to the rear fences or the deployment mechanism. That is, deployment of the rear fences (i.e., via instruction sent to the motor) is prohibited when the ambient temperature is below the minimum ambient temperature. In other embodiments, the deployment condition may be based on other inputs. For example, an input from user override1712to manually deploy the rear fences may be the deployment condition detected. In another embodiment, an input from performance settings1714may be used to adjust the predetermined speed at which the rear fences are deployed. For example, in a performance mode, the predetermined speed for deploying the rear fences may be lower than in other modes so that the best aerodynamic performance is achieved. Similarly, in an economy mode, the predetermined speed for deploying the rear fences may be chosen to provide better fuel economy than in other modes. Other factors for detecting a deployment condition may also be provided during the detection stage. Next, once the deployment condition has been detected, the method can proceed to a deployment stage. During deployment stage the motor or motors are instructed to deploy the rear fences. For example, processor1704of controller logic1702may send an instruction to motor1716of the deployment mechanism (e.g., motor704of deployment mechanism700) to pivot or rotate rear fences190from the retracted or stowed position to the deployed position. In some embodiments, after deployment of the rear fences, the method may (optionally) further include additional operations configured to determine when to retract the rear fences. For example, in some embodiments, the method includes an operation where one or more vehicle sensors are monitored by the processor. In one embodiment, the monitored sensors may include any of the vehicle sensors previously described, including, but not limited to speed sensors1708, temperature sensor1710, user override1712, and/or performance settings1714. If a retraction condition is detected (e.g., based on a predetermined speed of the vehicle) the system can trigger a retraction action. For example, when the vehicle speed (e.g., received from speed sensors1708) is less than a predetermined speed, then the retraction condition may be detected. In some cases, the predetermined speed for the retraction condition may be the same as the predetermined speed for the deployment condition. In other embodiments, the predetermined speed for the retraction condition may be different than the predetermined speed for the deployment condition. For example, in one embodiment, the predetermined speed for the retraction condition may be lower than the predetermined speed for the deployment condition. 
In one embodiment, for example, the predetermined speed for the deployment condition may be 45 miles per hour and the predetermined speed for the retraction condition may be 30 miles per hour. With this arrangement, by setting the predetermined speed for the retraction condition to be lower than the predetermined speed for the deployment condition, a situation where the rear fences are repeatedly deployed and retracted as the vehicle speed fluctuates may be avoided. In some embodiments, the retraction condition must be detected for at least a prespecified period of time (e.g., 30 seconds, one minute, several minutes, etc.) before retraction will occur. In other embodiments, the detected retraction condition may include other inputs in combination with the predetermined speed. Additionally, as with the deployment condition, an input received from user override1712may manually trigger the retraction condition so that the user can control whether or not the rear fences are retracted or deployed. In response to detection of the retraction condition, the motor or motors are instructed to retract or stow the rear fences. For example, processor1704of controller logic1702may send an instruction to motor1716of the deployment mechanism (e.g., motor704of deployment mechanism700) to pivot or rotate rear fences190from the deployed position to the retracted or stowed position. That is, each rear fence190is pivoted or rotated from the deployed position (where the planar body is aligned with the longitudinal axis) back to the retracted or stowed position (where the planar body is aligned with the lateral axis) of vehicle100. Referring now toFIG.19, an additional feature provided by the proposed embodiments is shown in cross-sectional view1900. InFIG.19, the first blade portion1116of static rear fence1112extends from first anchor portion1114, in a manner similar to that described with reference toFIG.11A. It can be observed that a substantially continuous exterior surface, extending from an outboard end1940of the static rear fence1112to an inboard end1950of the static rear fence1112, forms a semi-enclosed or compartmented area. The inboard end1950refers to the rearmost portion of the blade portion, where the inboard side and outboard side of the blade portion meet. This area will be referred to herein as a detachment zone1920, or interchangeably, as a damming zone, and should be understood to be present on both sides of the vehicle, adjacent to the outboard surfaces of each rear fence. Although the static rear fence1112is shown inFIG.19, it should be understood that embodiments of the active rear fences (e.g., active rear fence1170ofFIG.11B) can also provide a detachment zone when in the deployed configuration. As shown in the drawings, an inboard edge of the blade portion of the rear fences of the proposed embodiments is positioned directly adjacent the rear window and/or panel1160. It should be understood that although an inboard surface side1930of the blade portion1116is shown as being substantially straight or orthogonal relative to the rear panel1160, in other embodiments, the inboard surface side1930can also include a curvature, such as a concave curved surface. In different embodiments, the blade portion serves a barrier that blocks air from moving further inboard as it passes into the detachment zone1920. 
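The deploy/retract behavior described above amounts to a small state machine with hysteresis: a higher deployment speed (for example 45 mph), a lower retraction speed (for example 30 mph), and optionally a dwell period during which the retraction condition must persist before the fences are stowed. The sketch below illustrates that logic; the class, the timing interface, and the 30-second dwell value used here are assumptions for the example (the description mentions 30 seconds only as one of several possible dwell periods).

```python
import time

DEPLOY_SPEED_MPH = 45.0      # example deployment speed from the description
RETRACT_SPEED_MPH = 30.0     # example lower retraction speed from the description
RETRACT_DWELL_S = 30.0       # example dwell period before retracting (assumed)


class FenceStateMachine:
    """Deploy/retract with hysteresis so the fences do not cycle as speed fluctuates."""

    def __init__(self):
        self.deployed = False
        self._below_since = None     # time at which speed first dropped below RETRACT_SPEED_MPH

    def update(self, speed_mph, now=None):
        """Feed the latest vehicle speed; returns True while the fences should be deployed."""
        now = time.monotonic() if now is None else now
        if not self.deployed:
            if speed_mph >= DEPLOY_SPEED_MPH:
                self.deployed = True            # here the controller would command the motor(s) to deploy
                self._below_since = None
        else:
            if speed_mph < RETRACT_SPEED_MPH:
                if self._below_since is None:
                    self._below_since = now     # start the dwell timer
                elif now - self._below_since >= RETRACT_DWELL_S:
                    self.deployed = False       # here the controller would command the motor(s) to retract
                    self._below_since = None
            else:
                self._below_since = None        # speed recovered; reset the dwell timer
        return self.deployed
```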
In this example, for purposes of reference, the detachment zone1920is demarcated on one side by a dotted line1960that extends from the outboard end1940to the inboard end1950with a rounded or substantially convex curvature, and on the other side by the curved surface of the rear fence, referred to herein as a detachment surface1902. In different embodiments, the detachment surface1902comprises the exterior surface of the first anchor portion1114, referred to as a first detachment region1904, and the outboard-facing surface of the first blade portion1116, referred to as a second detachment region1906. The two regions are identified as two separate segments for purposes of reference only. In other words, in different embodiments, it can be appreciated that the detachment surface1902comprises a substantially continuous and generally smooth exterior surface. In one example, detachment surface1902has a substantially concave shape. In some embodiments, the detachment zone1920has a generally bulged or mound-shaped perimeter. While the anchor portion of the rear fence is integrally joined with the blade portion inFIG.19, thereby serving as a segment that bounds the detachment zone1920inFIG.19, it should be understood that in other embodiments, the anchor portion may not be present, while the detachment zone1920remains. For example, in embodiments in which the anchor portion is abbreviated and/or removed, such that the blade portion of the rear fence extends distally outward as a separate component relative to the back panel of the vehicle, the detachment zone1920can be alternatively formed by the blade portion and the outboard portion of the vehicle directly adjacent to and outboard of the blade portion. In other words, in embodiments in which the back of the blade portion is fixedly attached (either as a dynamic component or static component) to the rear panel of the vehicle without any further structure, the second detachment region1906remains as shown inFIG.19. In addition, the first detachment region1904can instead refer to a curved external surface of a different component that takes the place of the anchor portion. Furthermore, the term “substantially continuous” should be understood to describe a surface that may have seams or small gaps between components, depending on an airflow pattern around the vehicle, but otherwise includes a continuous L-shaped surface. As represented schematically by an arrow, as airflow moves into the detachment zone1920, the concave curved surface causes the air to become substantially ‘dammed’. Some of the air can be redirected outward, away from the blade portion and vehicle, again reducing the impact of airflow. With this arrangement, the rear fences according to the example embodiments described herein are deployed while the vehicle is moving at a predetermined speed to improve aerodynamic performance and are retracted or stowed when the vehicle is parked or operating at low speeds to improve styling appearance. The following includes definitions of selected terms employed herein. The definitions include various examples and/or forms of components that fall within the scope of a term and that can be used for implementation. The examples are not intended to be limiting. Aspects of the present disclosure can be implemented using hardware, software, or a combination thereof and can be implemented in one or more computer systems or other processing systems. 
In one example variation, aspects described herein can be directed toward one or more computer systems capable of carrying out the functionality described herein. An example of such a computer system includes one or more processors. A “processor”, as used herein, generally processes signals and performs general computing and arithmetic functions. Signals processed by the processor may include digital signals, data signals, computer instructions, processor instructions, messages, a bit, a bit stream, or other means that may be received, transmitted and/or detected. Generally, the processor may be a variety of various processors including multiple single and multicore processors and co-processors and other multiple single and multicore processor and co-processor architectures. The processor may include various modules to execute various functions. The apparatus and methods described herein and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”) can be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. By way of example, an element, or any portion of an element, or any combination of elements can be implemented with a “processing system” that includes one or more processors. One or more processors in the processing system can execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Accordingly, in one or more aspects, the functions described can be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions can be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media can be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The processor can be connected to a communication infrastructure (e.g., a communications bus, cross-over bar, or network). Various software aspects are described in terms of this example computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement aspects described herein using other computer systems and/or architectures. Computer system can include a display interface that forwards graphics, text, and other data from the communication infrastructure (or from a frame buffer) for display on a display unit. Display unit can include display, in one example. Computer system also includes a main memory, e.g., random access memory (RAM), and can also include a secondary memory. 
The secondary memory can include, e.g., a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc. The removable storage drive reads from and/or writes to a removable storage unit in a well-known manner. Removable storage unit, represents a floppy disk, magnetic tape, optical disk, etc., which is read by and written to removable storage drive. As will be appreciated, the removable storage unit includes a computer usable storage medium having stored therein computer software and/or data. Computer system can also include a communications interface. Communications interface allows software and data to be transferred between computer system and external devices. Examples of communications interface can include a modem, a network interface (such as an Ethernet card), a communications port, a Personal Computer Memory Card International Association (PCMCIA) slot and card, etc. Software and data transferred via communications interface are in the form of signals, which can be electronic, electromagnetic, optical or other signals capable of being received by communications interface. These signals are provided to communications interface via a communications path (e.g., channel). This path carries signals and can be implemented using wire or cable, fiber optics, a telephone line, a cellular link, a radio frequency (RF) link and/or other communications channels. The terms “computer program medium” and “computer usable medium” are used to refer generally to media such as a removable storage drive, a hard disk installed in a hard disk drive, and/or signals. These computer program products provide software to the computer system. Aspects described herein can be directed to such computer program products. Communications device can include communications interface. Computer programs (also referred to as computer control logic) are stored in main memory and/or secondary memory. Computer programs can also be received via communications interface. Such computer programs, when executed, enable the computer system to perform various features in accordance with aspects described herein. In particular, the computer programs, when executed, enable the processor to perform such features. Accordingly, such computer programs represent controllers of the computer system. In variations where aspects described herein are implemented using software, the software can be stored in a computer program product and loaded into computer system using removable storage drive, hard disk drive, or communications interface. The control logic (software), when executed by the processor, causes the processor to perform the functions in accordance with aspects described herein. In another variation, aspects are implemented primarily in hardware using, e.g., hardware components, such as application specific integrated circuits (ASICs). Implementation of the hardware state machine so as to perform the functions described herein will be apparent to persons skilled in the relevant art(s). In yet another example variation, aspects described herein are implemented using a combination of both hardware and software. The foregoing disclosure of the preferred embodiments has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. 
Many variations and modifications of the embodiments described herein will be apparent to one of ordinary skill in the art in light of the above disclosure. While various embodiments of the disclosure have been described, the description is intended to be exemplary, rather than limiting and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible that are within the scope of the disclosure. Accordingly, the disclosure is not to be restricted except in light of the attached claims and their equivalents. Also, various modifications and changes may be made within the scope of the attached claims. | 80,077 |
11858564 | DETAILED DESCRIPTION Before turning to the figures, which illustrate certain exemplary embodiments in detail, it should be understood that the present disclosure is not limited to the details or methodology set forth in the description or illustrated in the figures. It should also be understood that the terminology used herein is for the purpose of description only and should not be regarded as limiting. According to an exemplary embodiment, a vehicle of the present disclosure includes a repositionable ballast assembly. During operation, the center of gravity of the vehicle may shift back and forth. By way of example, implements may be added to or removed from the vehicle, or an implement may experience different loadings (e.g., due to variation in the amount of material carried by the implement). As the center of gravity shifts, the amount of downward force on a front axle and a rear axle of the vehicle varies. By way of example, if the center of gravity of the vehicle is directly between the front axle and the rear axle, the front axle and the rear axle may each support approximately 50% of the weight of the vehicle. If the center of gravity moves closer to one axle, the weight supported by that axle increases, and the weight supported by the other axle decreases. If both the front axle and the rear axle of the vehicle are driven (e.g., the vehicle has a 4 wheel drive or all wheel drive configuration), the total output power of a prime mover of the vehicle (e.g., an engine) is divided between each axle. The grip or traction of the wheels of each axle is related to the amount of downward force on that axle. Accordingly, as the center of gravity of the vehicle shifts forward or rearward, the portion of the output power of the prime mover that is directed to each axle changes. If the center of gravity is outside of a desired range of positions, the stresses on one of the axles may increase, causing damage and/or premature wear. The ballast assembly includes a ballast (e.g., a series of steel plates) and a ballast actuator that is configured to move the ballast relative to a frame of the vehicle. The ballast assembly is configured to counteract the effect of variations in vehicle loads, maintaining the center of gravity of the vehicle within a desired range of positions. For example, if an implement is coupled to the rear of the vehicle, this may shift the center of gravity of the vehicle rearward, increasing the loading on the rear axle. To counteract this shift, the ballast actuator may extend the ballast forward relative to the frame, shifting the center of gravity back toward the front of the vehicle and evening the load between the front and rear axles. Overall Vehicle According to the exemplary embodiment shown inFIGS.1-3, a machine or vehicle, shown as vehicle10, includes a chassis, shown as frame12; a body assembly, shown as body20, coupled to the frame12and having an occupant portion or section, shown as cab30; operator input and output devices, shown as operator interface40, that are disposed within the cab30; a drivetrain, shown as driveline50, coupled to the frame12and at least partially disposed under the body20; a vehicle braking system, shown as braking system160, coupled to one or more components of the driveline50to facilitate selectively braking the one or more components of the driveline50; and a vehicle control system, shown as control system200, coupled to the operator interface40, the driveline50, and the braking system160. 
In other embodiments, the vehicle10includes more or fewer components. According to an exemplary embodiment, the vehicle10is an off-road machine or vehicle. In some embodiments, the off-road machine or vehicle is an agricultural machine or vehicle such as a tractor, a telehandler, a front loader, a combine harvester, a grape harvester, a forage harvester, a sprayer vehicle, a speedrower, and/or another type of agricultural machine or vehicle. In some embodiments, the off-road machine or vehicle is a construction machine or vehicle such as a skid steer loader, an excavator, a backhoe loader, a wheel loader, a bulldozer, a telehandler, a motor grader, and/or another type of construction machine or vehicle. In some embodiments, the vehicle10includes one or more attached implements and/or trailed implements such as a front mounted mower, a rear mounted mower, a trailed mower, a tedder, a rake, a baler, a plough, a cultivator, a rotavator, a tiller, a harvester, and/or another type of attached implement or trailed implement. According to an exemplary embodiment, the cab30is configured to provide seating for an operator (e.g., a driver, etc.) of the vehicle10. In some embodiments, the cab30is configured to provide seating for one or more passengers of the vehicle10. According to an exemplary embodiment, the operator interface40is configured to provide an operator with the ability to control one or more functions of and/or provide commands to the vehicle10and the components thereof (e.g., turn on, turn off, drive, turn, brake, engage various operating modes, raise/lower an implement, etc.). The operator interface40may include one or more displays and one or more input devices. The one or more displays may be or include a touchscreen, a LCD display, a LED display, a speedometer, gauges, warning lights, etc. The one or more input device may be or include a steering wheel, a joystick, buttons, switches, knobs, levers, an accelerator pedal, a brake pedal, etc. According to an exemplary embodiment, the driveline50is configured to propel the vehicle10. As shown inFIG.3, the driveline50includes a primary driver, shown as prime mover52, and an energy storage device, shown as energy storage54. In some embodiments, the driveline50is a conventional driveline whereby the prime mover52is an internal combustion engine and the energy storage54is a fuel tank. The internal combustion engine may be a spark-ignition internal combustion engine or a compression-ignition internal combustion engine that may use any suitable fuel type (e.g., diesel, ethanol, gasoline, natural gas, propane, etc.). In some embodiments, the driveline50is an electric driveline whereby the prime mover52is an electric motor and the energy storage54is a battery system. In some embodiments, the driveline50is a fuel cell electric driveline whereby the prime mover52is an electric motor and the energy storage54is a fuel cell (e.g., that stores hydrogen, that produces electricity from the hydrogen, etc.). In some embodiments, the driveline50is a hybrid driveline whereby (i) the prime mover52includes an internal combustion engine and an electric motor/generator and (ii) the energy storage54includes a fuel tank and/or a battery system. 
As shown inFIG.3, the driveline50includes a transmission device (e.g., a gearbox, a continuous variable transmission (“CVT”), etc.), shown as transmission56, coupled to the prime mover52; a power divider, shown as transfer case58, coupled to the transmission56; a first tractive assembly, shown as front tractive assembly70, coupled to a first output of the transfer case58, shown as front output60; and a second tractive assembly, shown as rear tractive assembly80, coupled to a second output of the transfer case58, shown as rear output62. According to an exemplary embodiment, the transmission56has a variety of configurations (e.g., gear ratios, etc.) and provides different output speeds relative to a mechanical input received thereby from the prime mover52. In some embodiments (e.g., in electric driveline configurations, in hybrid driveline configurations, etc.), the driveline50does not include the transmission56. In such embodiments, the prime mover52may be directly coupled to the transfer case58. According to an exemplary embodiment, the transfer case58is configured to facilitate driving both the front tractive assembly70and the rear tractive assembly80with the prime mover52to facilitate front and rear drive (e.g., an all-wheel-drive vehicle, a four-wheel-drive vehicle, etc.). In some embodiments, the transfer case58facilitates selectively engaging rear drive only, front drive only, and both front and rear drive simultaneously. In some embodiments, the transmission56and/or the transfer case58facilitate selectively disengaging the front tractive assembly70and the rear tractive assembly80from the prime mover52(e.g., to permit free movement of the front tractive assembly70and the rear tractive assembly80in a neutral mode of operation). In some embodiments, the driveline50does not include the transfer case58. In such embodiments, the prime mover52or the transmission56may directly drive the front tractive assembly70(i.e., a front-wheel-drive vehicle) or the rear tractive assembly80(i.e., a rear-wheel-drive vehicle). As shown inFIGS.1and3, the front tractive assembly70includes a first drive shaft, shown as front drive shaft72, coupled to the front output60of the transfer case58; a first differential, shown as front differential74, coupled to the front drive shaft72; a first axle, shown front axle76, coupled to the front differential74; and a first pair of tractive elements, shown as front tractive elements78, coupled to the front axle76. In some embodiments, the front tractive assembly70includes a plurality of front axles76. In some embodiments, the front tractive assembly70does not include the front drive shaft72or the front differential74(e.g., a rear-wheel-drive vehicle). In some embodiments, the front drive shaft72is directly coupled to the transmission56(e.g., in a front-wheel-drive vehicle, in embodiments where the driveline50does not include the transfer case58, etc.) or the prime mover52(e.g., in a front-wheel-drive vehicle, in embodiments where the driveline50does not include the transfer case58or the transmission56, etc.). The front axle76may include one or more components. 
As shown inFIGS.1and3, the rear tractive assembly80includes a second drive shaft, shown as rear drive shaft82, coupled to the rear output62of the transfer case58; a second differential, shown as rear differential84, coupled to the rear drive shaft82; a second axle, shown rear axle86, coupled to the rear differential84; and a second pair of tractive elements, shown as rear tractive elements88, coupled to the rear axle86. In some embodiments, the rear tractive assembly80includes a plurality of rear axles86. In some embodiments, the rear tractive assembly80does not include the rear drive shaft82or the rear differential84(e.g., a front-wheel-drive vehicle). In some embodiments, the rear drive shaft82is directly coupled to the transmission56(e.g., in a rear-wheel-drive vehicle, in embodiments where the driveline50does not include the transfer case58, etc.) or the prime mover52(e.g., in a rear-wheel-drive vehicle, in embodiments where the driveline50does not include the transfer case58or the transmission56, etc.). The rear axle86may include one or more components. According to the exemplary embodiment shown inFIG.1, the front tractive elements78and the rear tractive elements88are structured as wheels. In other embodiments, the front tractive elements78and the rear tractive elements88are otherwise structured (e.g., tracks, etc.). In some embodiments, the front tractive elements78and the rear tractive elements88are both steerable. In other embodiments, only one of the front tractive elements78or the rear tractive elements88is steerable. In still other embodiments, both the front tractive elements78and the rear tractive elements88are fixed and not steerable. Referring toFIGS.4-6, the front tractive assembly70includes a housing or outer structural member, shown as housing90. The housing90at least partially contains the front differential74and the front axle76. A pair of wheel hubs or wheel adapters, shown as wheel end assemblies92, are rotatably coupled to each end of the housing90. Each wheel end assembly92is coupled to the front axle76such that the front axle76drives the wheel end assemblies92. The front tractive elements78are each coupled to one of the wheel end assemblies92such that the wheel end assemblies92drive the front tractive elements78to propel the vehicle10. The front tractive elements78may be selectively coupled to the wheel end assemblies92(e.g., by a series of fasteners) to facilitate replacement of the front tractive elements78. In some embodiments, each wheel end assembly92is pivotally coupled to the housing90such that each wheel end assembly92is rotatable about a substantially vertical axis to facilitate steering the vehicle10. Referring toFIGS.4and5, the rear tractive assembly80includes a housing or outer structural member, shown as housing94. The housing94at least partially contains the rear differential84and the rear axle86. A pair of wheel hubs or wheel adapters, shown as wheel end assemblies96, are rotatably coupled to each end of the housing94. Each wheel end assembly96is coupled to the rear axle86such that the rear axle86drives the wheel end assemblies96. The rear tractive elements88are each coupled to one of the wheel end assemblies96such that the wheel end assemblies96drive the rear tractive elements88to propel the vehicle10. The rear tractive elements88may be selectively coupled to the wheel end assemblies96(e.g., by a series of fasteners) to facilitate replacement of the rear tractive elements88. 
In some embodiments, each wheel end assembly96is pivotally coupled to the housing94such that each wheel end assembly96is rotatable about a substantially vertical axis to facilitate steering the vehicle10. The vehicle10further includes a suspension system, suspension assembly, or support assembly, shown as suspension assembly100. The suspension assembly100is configured to control movement (e.g., vertical movement) of the front tractive assembly70and the rear tractive assembly80relative to the frame12. The suspension assembly100may provide one or more upward, substantially vertical forces that counteract the effect of gravity on the vehicle10. The suspension assembly100may provide a spring force (e.g., a force that varies based on the relative position of a tractive assembly with respect to the frame12) and/or a dampening force (e.g., a force that varies based on the relative velocity of a tractive assembly with respect to the frame12). The suspension assembly100may control the ride height of the vehicle10(e.g., the distance between the frame12and the ground) and/or the ride dynamics of the vehicle10(e.g., how the vehicle10reacts to a change in height of the ground, such as a bump or pothole). Referring toFIGS.4and6, the suspension assembly100includes a pair of actuators, cylinders, springs, dampers, or combination spring/dampers, shown as cylinders102, that couple the frame12to the housing90of the front tractive assembly70. The cylinders102each include a piston104that is exposed to a chamber or volume, shown as chamber106. The chamber106is filled with a pressurized hydraulic fluid, such as hydraulic oil, that imparts a force on the piston104. This forces the piston104outward, expanding the cylinder102and forcing the frame12upward, away from the front tractive assembly70. The chambers106are fluidly coupled to a gas charged accumulator, shown as accumulator110. The accumulator110contains a volume of pressurized gas (e.g., air, nitrogen, etc.) that presses against the pressurized hydraulic fluid. The force of the gas is transferred to the pistons104through the hydraulic fluid. The gas within the accumulator110is compressible such that the cylinders102act as springs. In some embodiments, the suspension assembly100includes a compressor112that adds or removes pressurized gas from the accumulator110to adjust the ride height of the vehicle10. Adjusting the amount of gas within the accumulator110varies the pressure of the gas for a given volume of hydraulic fluid within the accumulator110. Accordingly, adjusting the amount of gas within the accumulator110adjusts the effective spring rate of the cylinders102, which causes a vehicle10of a given weight to ride higher or lower. In some embodiments, the suspension assembly100includes a valve assembly, shown as valves114, that fluidly couple the cylinders102to the accumulator110. In some embodiments, the valves114include one or more flow control valves (e.g., orifices) that resist the flow of fluid between the cylinders102and the accumulator110. Accordingly, the valves114may cause the cylinders102to act as dampers. In some embodiments, the suspension assembly100includes similar arrangements for the front tractive assembly70and the rear tractive assembly80. Referring toFIG.5, the suspension assembly100includes a pair of actuators, cylinders, springs, dampers, or combination spring/dampers, shown as cylinders122, that couple the frame12to the housing90of the rear tractive assembly80. 
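The spring behavior provided by the gas-charged accumulators (the accumulator110acting on the front cylinders102, and the accumulator130acting on the rear cylinders122described here and below) can be approximated with a simple polytropic gas model: compressing a cylinder displaces hydraulic fluid into the accumulator, which compresses the gas and raises the pressure acting back on the piston. The following is a minimal illustrative sketch only; the piston area, precharge pressure, gas volume, and polytropic exponent are assumed values, not figures from this disclosure.

# Minimal hydropneumatic spring sketch. Compressing the cylinder by x pushes
# a fluid volume A*x into the accumulator, compressing the gas from V0 to
# V0 - A*x; the resulting gas pressure acts back on the piston area A.
# All numeric values below are illustrative assumptions.

def cylinder_force(x_in, piston_area_in2=3.0, precharge_psi=900.0,
                   gas_volume_in3=60.0, polytropic_n=1.3):
    """Approximate upward force (lbf) of one cylinder at compression x_in inches."""
    displaced_in3 = piston_area_in2 * x_in
    if displaced_in3 >= gas_volume_in3:
        raise ValueError("compression exceeds the accumulator gas volume")
    gas_pressure_psi = precharge_psi * (
        gas_volume_in3 / (gas_volume_in3 - displaced_in3)) ** polytropic_n
    return gas_pressure_psi * piston_area_in2

if __name__ == "__main__":
    # Raising the precharge (i.e., adding gas with the compressor) stiffens
    # the spring: the same compression produces a larger force.
    for precharge in (800.0, 900.0, 1000.0):
        print(precharge, round(cylinder_force(2.0, precharge_psi=precharge)))

This mirrors the point made above: adding or removing gas changes the pressure for a given fluid volume and therefore the effective spring rate, which is how the ride height adjustment described here works.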
The cylinders122each include a piston124that is exposed to a chamber or volume, shown as chamber126. The chamber126is filled with a pressurized hydraulic fluid, such as hydraulic oil, that imparts a force on the piston124. This forces the piston124outward, expanding the cylinder122and forcing the frame12upward, away from the rear tractive assembly80. The chambers126are fluidly coupled to a gas charged accumulator, shown as accumulator130. The accumulator130contains a volume of pressurized gas (e.g., air, nitrogen, etc.) that presses against the pressurized hydraulic fluid. The force of the gas is transferred to the pistons124through the hydraulic fluid. The gas within the accumulator130is compressible such that the cylinders122act as springs. In some embodiments, the suspension assembly100includes a compressor132that adds or removes pressurized gas from the accumulator120to adjust the ride height of the vehicle10. In some embodiments, the compressor112and the compressor132are combined as a single component. Adjusting the amount of gas within the accumulator130varies the pressure of the gas for a given volume of hydraulic fluid within the accumulator130. Accordingly, adjusting the amount of gas within the accumulator130adjusts the effective spring rate of the cylinders122, which causes a vehicle10of a given weight to ride higher or lower. In some embodiments, the suspension assembly100includes a valve assembly, shown as valves134, that fluidly couple the cylinders122to the accumulator130. In some embodiments, the valves134include one or more flow control valves (e.g., orifices) that resist the flow of fluid between the cylinders122and the accumulator130. Accordingly, the valves134may cause the cylinders122to act as dampers. In some embodiments, the driveline50includes a plurality of prime movers52. By way of example, the driveline50may include a first prime mover52that drives the front tractive assembly70and a second prime mover52that drives the rear tractive assembly80. By way of another example, the driveline50may include a first prime mover52that drives a first one of the front tractive elements78, a second prime mover52that drives a second one of the front tractive elements78, a third prime mover52that drives a first one of the rear tractive elements88, and/or a fourth prime mover52that drives a second one of the rear tractive elements88. By way of still another example, the driveline50may include a first prime mover that drives the front tractive assembly70, a second prime mover52that drives a first one of the rear tractive elements88, and a third prime mover52that drives a second one of the rear tractive elements88. By way of yet another example, the driveline50may include a first prime mover that drives the rear tractive assembly80, a second prime mover52that drives a first one of the front tractive elements78, and a third prime mover52that drives a second one of the front tractive elements78. In such embodiments, the driveline50may not include the transmission56or the transfer case58. As shown inFIG.3, the driveline50includes a power-take-off (“PTO”), shown as PTO150. While the PTO150is shown as being an output of the transmission56, in other embodiments the PTO150may be an output of the prime mover52, the transmission56, and/or the transfer case58. According to an exemplary embodiment, the PTO150is configured to facilitate driving an attached implement and/or a trailed implement of the vehicle10. 
In some embodiments, the driveline50includes a PTO clutch positioned to selectively decouple the driveline50from the attached implement and/or the trailed implement of the vehicle10(e.g., so that the attached implement and/or the trailed implement is only operated when desired, etc.). According to an exemplary embodiment, the braking system160includes one or more brakes (e.g., disc brakes, drum brakes, in-board brakes, axle brakes, etc.) positioned to facilitate selectively braking (i) one or more components of the driveline50and/or (ii) one or more components of a trailed implement. In some embodiments, the one or more brakes include (i) one or more front brakes positioned to facilitate braking one or more components of the front tractive assembly70and (ii) one or more rear brakes positioned to facilitate braking one or more components of the rear tractive assembly80. In some embodiments, the one or more brakes include only the one or more front brakes. In some embodiments, the one or more brakes include only the one or more rear brakes. In some embodiments, the one or more front brakes include two front brakes, one positioned to facilitate braking each of the front tractive elements78. In some embodiments, the one or more front brakes include at least one front brake positioned to facilitate braking the front axle76. In some embodiments, the one or more rear brakes include two rear brakes, one positioned to facilitate braking each of the rear tractive elements88. In some embodiments, the one or more rear brakes include at least one rear brake positioned to facilitate braking the rear axle86. Accordingly, the braking system160may include one or more brakes to facilitate braking the front axle76, the front tractive elements78, the rear axle86, and/or the rear tractive elements88. In some embodiments, the one or more brakes additionally include one or more trailer brakes of a trailed implement attached to the vehicle10. The trailer brakes are positioned to facilitate selectively braking one or more axles and/or one more tractive elements (e.g., wheels, etc.) of the trailed implement. Referring toFIG.7, in some embodiments, the vehicle10includes a tool or implement, shown as implement190, that is configured to facilitate the vehicle10performing one or more tasks or operations (e.g., planting, harvesting, moving material, etc.). The implement190may be partially supported by the frame12(e.g., as a trailer) or completely supported by the frame12. The implement190may be removably coupled to the frame12. In some embodiments, the implement190can be removed and replaced with a different implement190(e.g., to reconfigure the vehicle10for a different task or operation). As shown, the implement190is positioned rearward of the frame12. In other configurations, the implement190is forward of the frame12, above the frame12, below the frame12, or otherwise positioned relative to the frame12. The implements may be powered (e.g., through the PTO150) or unpowered. The implements190may include front end loaders, backhoes, graders, snow plows, buckets, grapples, field plows, trailers, mowers, rakes, lifting forks, cranes, cultivators, rotary tillers, tillage discs, harvesters (e.g., for corn, wheat, soy beans, cotton, carrots, etc.), planters, sprayers, fertilizer applicators, or other types of tools. 
Repositionable Ballast Assembly Referring toFIG.7, the vehicle10includes a movable weight assembly, a repositionable ballast assembly, a center of gravity adjustment assembly, or a repositionable ballast assembly, shown as ballast assembly300. The ballast assembly300is configured to move a large weight relative to the frame12of the vehicle10, varying a location of a center of gravity C of the vehicle10. The ballast assembly300is coupled to the frame12. As shown, the ballast assembly300extends forward from the frame12. In other embodiments, the ballast assembly300is otherwise positioned (e.g., the ballast assembly300extends rearward from the frame12, the ballast assembly300is at the same longitudinal position as the frame12, etc.). The ballast assembly300includes a weight assembly, shown as ballast302. The ballast302is configured to be a large portion of the overall weight of the vehicle10. In some embodiments, the ballast302makes up approximately 5% of the overall weight of the vehicle10. In some embodiments, the ballast302is approximately 2000 lbs. In one embodiment, the ballast302is 2160 lbs, and the overall weight of the vehicle10is 41,175 lbs. To facilitate packaging the large weight within the vehicle10, the ballast302may be made from a relatively dense material. In some embodiments, the ballast302is made from steel or cast iron. In some embodiments, the ballast302is a volume of liquid, such as water. In some embodiments, the ballast302is a volume of flowable solid material, such as sand. The ballast302may be reconfigurable between different weights. By way of example, material may be added or removed from the ballast302to vary the weight of the ballast302. The ballast302is coupled to the frame12by one or more support members or support assemblies, shown as ballast supports304. Specifically, the ballast supports304movably couple the ballast302to the frame12such that the ballast302is movable relative to the frame12. The ballast supports304may facilitate selective repositioning of the ballast302longitudinally relative to the frame12(e.g., forward and/or rearward relative to the frame12). The ballast assembly300further includes one or more actuators, shown as ballast actuators306. The ballast actuators306are coupled to the frame12and the ballast302. In some embodiments, the ballast actuators306include electric motors, hydraulic cylinders, and/or pneumatic cylinders. The ballast actuators306are configured to move the ballast302relative to the frame12. Accordingly, the ballast actuators306are configured to move the center of gravity C of the vehicle10. In some embodiments, the ballast actuators306are configured to move the ballast302longitudinally relative to the frame12. Accordingly, the ballast actuators306may be configured to move the center of gravity C of the vehicle10longitudinally. In some embodiments, the ballast assembly300includes one or more sensors, shown as ballast position sensors308. The ballast position sensors308may be coupled to the frame12, the ballast302, the ballast supports304, and/or the ballast actuators306. The ballast position sensors308are configured to provide position data indicating a position of the ballast302(e.g., relative to the frame12). The ballast position sensors308may indicate a relative position of the ballast302. By way of example, the ballast302may have a “home” or “zero” position, and the ballast position sensors308may measure the displacement of the ballast302from the zero position (e.g., 2 inches forward, 10 inches rearward, etc.). 
In some embodiments, the ballast position sensors308are configured to indicate a longitudinal position of the ballast302. Control System Referring toFIG.8, the control system200is shown according to an exemplary embodiment. The control system200may facilitate operation of the ballast assembly300. The control system200includes processing circuitry, shown as controller210. The controller210includes a processor212and a memory device, shown as memory214. The processor212may be configured to execute one or more instructions stored on the memory214to perform one or more of the processes described herein. The controller210may be configured to receive information from one or more devices (e.g., sensors, user interfaces, etc.) and/or to provide information (e.g., notifications, commands, etc.) to one or more devices (e.g., actuators, user interfaces, etc.). The controller210is operably coupled to the other devices of the control system200. By way of example, the controller210may include a communication interface to facilitate communication with the other devices. In some embodiments, the devices of the control system200utilize wired communication (e.g., Ethernet, USB, serial, etc.). In some embodiments, the devices of the control system200utilize wireless communication (e.g., Bluetooth, Wi-Fi, Zigbee, cellular communication, satellite communication, etc.). The devices of the control system200may communicate over a network (e.g., a local area network, a wide area network, the Internet, a CAN bus, etc.). As shown inFIG.8, the controller210is operatively coupled to the prime mover52. The controller210may provide commands to the prime mover52. By way of example, the controller210may control the rotational speed of the prime mover52. In one such example, the prime mover52is an engine, and the controller210provides commands that limit a rotational speed of the engine to a maximum speed. In some embodiments, the control system200further includes a sensor, shown as speed sensor53, that is operatively coupled to the controller210. The speed sensor53may provide speed data indicating a rotational speed of the prime mover52. The controller210may utilize the speed data in a feedback loop to control the rotational speed of the prime mover52. As shown inFIG.8, the controller210is operatively coupled to the ballast actuators306and the ballast position sensors308. The controller210may provide commands to the ballast actuators306. By way of example, the controller210may control the ballast actuators306to move the ballast302relative to the frame12. The controller210may receive information from the ballast position sensors308. By way of example, the controller210may receive position data from the ballast position sensor308indicating the position of the ballast302relative to the frame12. The controller210may utilize the position data in a feedback loop to control the position of the ballast302. As shown inFIG.8, the control system200further includes two or more load sensors (e.g., torque transducers), shown as front axle torque sensors220and rear axle torque sensors222. The front axle torque sensors220may provide load data (e.g., torque data) indicating a torque on the front tractive assembly70. By way of example, the front axle torque sensors220may be coupled to one or more of the front output60, the front drive shaft72, the front differential74, the front axle76, or the front tractive elements78. 
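The control relationships described above for the controller210(limiting the rotational speed of the prime mover52using feedback from the speed sensor53, and driving the ballast302toward a commanded position using feedback from the ballast position sensors308) can be illustrated with a very simple proportional-control sketch. The class name, gains, and rate limits below are assumptions made only for illustration; the disclosure does not specify a particular control law.

# Illustrative sketch of two feedback loops: clamping the engine speed
# command against a ceiling, and steering the ballast toward a target
# position. Gains and limits are assumed example values.

class ControllerSketch:
    def __init__(self, max_engine_rpm=3000.0, ballast_gain_per_s=0.5,
                 max_ballast_rate_in_per_s=1.0):
        self.max_engine_rpm = max_engine_rpm
        self.ballast_gain_per_s = ballast_gain_per_s
        self.max_ballast_rate_in_per_s = max_ballast_rate_in_per_s

    def engine_speed_command(self, requested_rpm, measured_rpm):
        """Never command more than the ceiling; if the measured speed is
        already above it, pull the command down by the overshoot."""
        command = min(requested_rpm, self.max_engine_rpm)
        overshoot = measured_rpm - self.max_engine_rpm
        if overshoot > 0.0:
            command = max(0.0, command - overshoot)
        return command

    def ballast_rate_command(self, target_position_in, measured_position_in):
        """Proportional move rate (in/s) toward the target, rate limited."""
        rate = self.ballast_gain_per_s * (target_position_in - measured_position_in)
        return max(-self.max_ballast_rate_in_per_s,
                   min(self.max_ballast_rate_in_per_s, rate))

if __name__ == "__main__":
    c = ControllerSketch()
    print(c.engine_speed_command(requested_rpm=3500.0, measured_rpm=2900.0))        # 3000.0
    print(c.ballast_rate_command(target_position_in=10.0, measured_position_in=2.0))  # 1.0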
In one such example, a front axle torque sensor220is positioned along the front drive shaft72and provides torque data indicating a torque on the front drive shaft72. The rear axle torque sensors222may provide load data (e.g., torque data) indicating a torque on the rear axle86. By way of example, the rear axle torque sensors222may be coupled to one or more of the rear output62, the rear drive shaft82, the rear differential84, the rear axle86, or the rear tractive elements88. In one such example, a rear axle torque sensor222is positioned along the rear drive shaft82and provides torque data indicating a torque on the rear drive shaft82. Referring toFIGS.4,5, and8, the control system200includes two or more load sensors (e.g., strain gauges, pressure sensors, transducers, etc.), shown as front axle force sensors230and rear axle force sensors232. The front axle force sensors230may provide load data (e.g., force data) indicating a force on the front tractive assembly70(e.g., a force between the front tractive assembly70and the frame12, a force imparted by the front tractive assembly70on the ground, etc.). The rear axle force sensors232may provide load data (e.g., force data) indicating a force on the rear axle86(e.g., a force between the rear tractive assembly80and the frame12, a force imparted by the rear tractive assembly80on the ground, etc.). In some embodiments, a relationship between (a) the output of the front axle force sensors230and/or the rear axle force sensors232and (b) the force on the corresponding axle assembly may be predetermined and stored in the memory214. Alternatively, the controller210may directly compare the output of the front axle force sensors230and the rear axle force sensors232. In some embodiments, the front axle force sensors230and/or the rear axle force sensors232are pressure sensors configured to measure a pressure within the suspension assembly100. As shown inFIG.4, a front axle force sensor230is a pressure sensor configured to measure a pressure of the gas within the accumulator110. In other embodiments, the front axle force sensor230is configured to measure a different pressure within the suspension assembly100. By way of example, the front axle force sensor230may measure a pressure of the hydraulic fluid within the accumulator110, a pressure of the hydraulic fluid between the accumulator110and one of the cylinders102, and/or a pressure within a chamber106of one of the cylinders102. The measured pressure may provide an indication of the pressure within the chamber106, which controls the output force of the corresponding cylinder102. The relationship between the measured pressure and the force on the front tractive assembly70may be predetermined and stored in the memory214. As shown inFIG.5, a rear axle force sensor232is a pressure sensor configured to measure a pressure of the gas within the accumulator130. In other embodiments, the rear axle force sensor232is configured to measure a different pressure within the suspension assembly100. By way of example, the rear axle force sensor232may measure a pressure of the hydraulic fluid within the accumulator130, a pressure of the hydraulic fluid between the accumulator130and one of the cylinders122, and/or a pressure within a chamber126of one of the cylinders122. The measured pressure may provide an indication of the pressure within the chamber126, which controls the output force of the corresponding cylinder122. 
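As noted above, the relationship between a measured suspension pressure and the downward force on the corresponding tractive assembly may be predetermined and stored in the memory214. One common way to hold such a relationship is as a small calibration table that is linearly interpolated at run time. The sketch below is only an illustration of that idea; the calibration pairs are invented values, not data from this disclosure.

# Sketch of a predetermined pressure-to-force map held in memory and applied
# to a live accumulator pressure reading by linear interpolation.
# The calibration pairs are assumed, illustrative values.

from bisect import bisect_left

def force_from_pressure(calibration, pressure_psi):
    """calibration: list of (pressure_psi, axle_force_lb) pairs, sorted by pressure."""
    pressures = [p for p, _ in calibration]
    forces = [f for _, f in calibration]
    if pressure_psi <= pressures[0]:
        return forces[0]
    if pressure_psi >= pressures[-1]:
        return forces[-1]
    i = bisect_left(pressures, pressure_psi)
    p0, p1 = pressures[i - 1], pressures[i]
    f0, f1 = forces[i - 1], forces[i]
    return f0 + (f1 - f0) * (pressure_psi - p0) / (p1 - p0)

if __name__ == "__main__":
    front_axle_map = [(600.0, 12000.0), (900.0, 18000.0),
                      (1200.0, 24000.0), (1500.0, 30000.0)]
    print(force_from_pressure(front_axle_map, 1050.0))  # 21000.0 lb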
The relationship between the measured pressure and the force on the rear tractive assembly80may be predetermined and stored in the memory214. In some embodiments, the front axle force sensors230and/or the rear axle force sensors232are otherwise configured to provide a measurement indicative of the force on the front tractive assembly70and/or the rear tractive assembly80. By way of example, the front axle force sensors230and/or the rear axle force sensors232may include strain gauges positioned on one or more components of the front tractive assembly70, the rear tractive assembly80, the suspension assembly100, the frame12, and/or other components that experience forces from the front tractive assembly70and/or the rear tractive assembly80. These forces impart strain a strain on the component that can be measured by a strain gauge. The relationship between the measured strain and the force on the front tractive assembly70and/or the rear tractive assembly80may be predetermined and stored in the memory214. In some embodiments, the control system200includes one or more input devices, output devices, user interfaces, or operator interfaces, shown as operator interfaces240. The operator interfaces240may be built into the vehicle10(e.g., positioned within the cab30, positioned along the exterior of the vehicle10, etc.). Alternatively, the operator interfaces240may be portable and/or separable from the vehicle10. For example, the operator interfaces240may include one or more user devices, such as smartphones, tables, laptops, desktops, pagers, or other user devices. The operator interfaces240may include one or more input devices configured to receive inputs (e.g., commands) from an operator to facilitate operator control over the vehicle10. By way of example, the operator interfaces240may include touch screens, buttons, steering wheels, pedals, levers, switches, knobs, keyboards, mice, microphones, and/or other input devices. The operator interfaces240may include one or more output devices configured to provide information to an operator (e.g., notifications, operating conditions, etc.). By way of example, the operator interfaces240may include screens, lights, speakers, haptic feedback devices, and/or other output devices. In some embodiments, the control system200includes one or more sensors, shown as implement sensors250, that are operatively coupled to the controller210. The implement sensors250may be configured to provide implement data indicating what type of implement190is coupled to the frame12. By way of example, the implement sensors250may provide a serial number or identification number that identifies the implement190. A list correlating the identification number to various aspects of the implement190(e.g., compatibility with the vehicle10, size, weight, attachment location on the frame12, etc.) may be predetermined and stored in the memory214. In some embodiments, the implement sensors250are configured to recognize, read, or otherwise interact with an identifier on the implement190. By way of example, the implement190may include a QR code, a bar code, an RFID tag, or an NFC tag positioned to be read by a corresponding scanner of the implement sensor250. The implement sensor250may be positioned to interact with the identifier when the implement190is coupled to the frame12. In some embodiments, the control system200includes one or more sensors, shown as ballast weight sensors260, that are operatively coupled to the controller210. 
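The ballast weight sensors260introduced here can report the weight of the ballast302in simple ways, described in more detail in the following paragraph: for example, by counting activated limit switches (one per weight of known mass) or by converting a measured fluid level into a volume and weight. The sketch below illustrates both calculations; the plate weight echoes the approximately 120 pound plates described later in this disclosure, while the tank dimensions and fluid density are assumed values.

# Two illustrative ways of deriving the ballast weight from sensor data.
# The per-plate weight (about 120 lb) follows the example plates described
# later in this disclosure; tank geometry and fluid density are assumptions.

def weight_from_limit_switches(switch_states, plate_weight_lb=120.0):
    """Each activated switch indicates that one plate is present on the ballast."""
    return plate_weight_lb * sum(1 for state in switch_states if state)

def weight_from_fluid_level(level_in, tank_length_in, tank_width_in,
                            density_lb_per_in3=0.0361):  # roughly water
    """Rectangular container: weight = density * length * width * fluid level."""
    return density_lb_per_in3 * tank_length_in * tank_width_in * level_in

if __name__ == "__main__":
    print(weight_from_limit_switches([True] * 18))           # 2160.0 lb (18 plates)
    print(round(weight_from_fluid_level(20.0, 48.0, 30.0)))  # ~1040 lb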
The ballast weight sensors260are configured to provide weight data indicating a weight or mass of the ballast302. By way of example, the ballast weight sensors260may include one or more load cells that measure the weight of the ballast302directly. By way of another example, the ballast weight sensors260may include one or more strain gauges that measure the strain of a component that supports the ballast302. The relationship between the measured strain and the weight of the ballast302may be predetermined and stored in the memory214. By way of another example, the ballast weight sensor260may include limit switches, break beam sensors, or floats that determine whether or not the ballast302is present at a predetermined location. By way of example, the ballast302may include a series of weights, each weight having a predetermined mass. The ballast weight sensors260may include a series of limit switches that are each activated when a weight is added to the ballast302. Accordingly, the weight of the ballast302may be calculated by multiplying the weight of each weight by the number of switches that have been activated. By way of another example, the ballast302may include a volume of fluid within a container, and the ballast weight sensors260may be configured to determine the height of the fluid within the container. The geometry of the container and height of the fluid may be used to determine the volume of the fluid, and the density of the fluid may be used to determine the weight of the ballast302. System Operation During operation, the vehicle10may experience various loadings that vary the location of the center of gravity C of the vehicle10. By way of example, an implement190may be attached to the front or rear of the frame12, moving the center of gravity C toward the implement190. By way of another example, the implement190may be removed and exchanged with an implement of a different weight, shape, or size, shifting the center of gravity C. By way of another example, material may be added to or removed from the vehicle10, shifting the center of gravity C. As the center of gravity C (e.g., the center of gravity of the sprung mass of the vehicle10that is supported by the suspension assemblies100) shifts longitudinally relative to the front tractive assembly70and the rear tractive assembly80, the amount of downward force on the front tractive assembly70(e.g., 20,000 lbs) and the amount of downward force on the rear tractive assembly80varies, and the ratio (e.g., 50:50, 20,000:20,000, 2:3, etc.) between the downward force on the front tractive assembly70and the downward force on the rear tractive assembly80(i.e., the downward force ratio) varies. Accordingly, the downward force ratio is based on the longitudinal position of the center of gravity C. The downward forces on the front tractive assembly70and the rear tractive assembly80are counteracted by the normal force of the ground acting on the front tractive elements78and the rear tractive elements88. Accordingly, the traction or grip of the front tractive elements78and the rear tractive elements88(e.g., the amount of torque that the front tractive elements78and the rear tractive elements88can impart without slipping) are related to (e.g., a function of, proportional to) the downward force on the front tractive assembly70and the rear tractive assembly80, respectively. 
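The dependence of the downward force ratio on the longitudinal position of the center of gravity C, and the resulting division of tractive effort between the front and rear assemblies, follows from a simple moment balance. The sketch below is only an illustration of that reasoning; the wheelbase, torque figures, and axle rating are assumed example numbers, and sharing the drive torque in proportion to the downward force on each axle is a simplifying assumption rather than the disclosure's stated model.

# Illustrative moment balance: axle loads from the longitudinal position of
# the center of gravity, and an assumed traction-proportional split of the
# prime mover torque between the axles. All numbers are example values.

def axle_loads(total_weight_lb, wheelbase_in, cg_from_rear_axle_in):
    """Static front/rear axle loads for a rigid two-axle vehicle."""
    front = total_weight_lb * cg_from_rear_axle_in / wheelbase_in
    return front, total_weight_lb - front

def torque_split(total_torque_lbft, front_load_lb, rear_load_lb):
    """Assumed: each axle transmits torque in proportion to its downward force."""
    front_share = front_load_lb / (front_load_lb + rear_load_lb)
    return total_torque_lbft * front_share, total_torque_lbft * (1.0 - front_share)

if __name__ == "__main__":
    total_lb, wheelbase_in = 41175.0, 124.0   # wheelbase is an assumed value
    # Center of gravity midway between the axles: 50/50 load and torque split.
    front_lb, rear_lb = axle_loads(total_lb, wheelbase_in, wheelbase_in / 2.0)
    print(round(front_lb), round(rear_lb), torque_split(3000.0, front_lb, rear_lb))
    # Center of gravity 10 in rearward of center: the rear axle carries more
    # load and, under the assumption above, transmits more of the drive torque.
    front_lb, rear_lb = axle_loads(total_lb, wheelbase_in, wheelbase_in / 2.0 - 10.0)
    front_t, rear_t = torque_split(3000.0, front_lb, rear_lb)
    print(round(front_lb), round(rear_lb), round(front_t), round(rear_t))
    print("rear axle over assumed 1700 lb-ft rating:", rear_t > 1700.0)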
In some embodiments, the relative speeds of the front drive shaft72and the rear drive shaft82are fixed (e.g., the driveline50does not include a differential that permits a change in speed of the front drive shaft72relative to the rear drive shaft82). Accordingly, the power delivered by the prime mover52is divided between the front tractive assembly70and the rear tractive assembly80based on the position of the center of gravity C. As the power directed through one portion of the driveline50increases, the stresses experienced by that portion of the driveline50also increase. Accordingly, the center of gravity C shifts further from the center of the vehicle10, the maximum stresses experienced by the driveline50increase. Accordingly, if the center of gravity C moves beyond a preferred or threshold range of longitudinal positions, the stresses experienced by part of the driveline50(e.g., the front tractive assembly70and/or the rear tractive assembly80) may exceed a rated stress, causing damage or premature wear to one or more components. Referring toFIG.9, a method400of operating the vehicle10is shown according to an exemplary embodiment. The method400utilizes the ballast assembly300to manipulate the location of the center of gravity C, thereby limiting the stresses throughout the driveline50. For example, as the center of gravity C shifts in a first direction, the method400may move the ballast302in an opposing direction to counteract the shift of the center of gravity C and minimize the stresses throughout the driveline50. In step402of the method400, an operating mode of the vehicle10is selected. The operating mode may indicate a type of operation that is being performed by the vehicle10. Additionally or alternatively, the operating mode may indicate the type of implement that is coupled to the frame12. By way of example, in a harvesting mode of operation, the implement190may be a harvester, and the vehicle10may be used to harvest crops. By way of another example, in a towing mode of operation, the implement190may be a trailer, and the vehicle10may be used to tow the trailer and a load supported by the trailer. By way of another example, in a no-implement mode of operation, the vehicle10may not include an implement190. In some embodiments, the operating mode is selected by an operator. By way of example, the operator may select the operating mode from a list of operating modes provided by an operator interface240. The list of operating modes may be predetermined (e.g., by the controller210) and stored in the memory214. The list of operating modes may be determined based on the capabilities of the vehicle10and/or a list of implements190available to the operator. By way of example, an operator or manufacturer may input (e.g., using an operator interface240) a model number of the vehicle10and/or one or more characteristics of the vehicle10(e.g., size of the prime mover52, types of hitches available on the vehicle10, etc.). By way of another example, an operator may input (e.g., using an operator interface240) a list of implements190owned by the operator. In some embodiments, the controller210selects the operating mode based on the type of implements190currently coupled to the frame12of the vehicle10. In some embodiments, the operator inputs (e.g., using the operator interface240) a list of the implements190that are currently coupled to the frame12. In some embodiments, the implement sensors250detect which implements190are coupled to the frame12. 
A list correlating each implement190with a corresponding operating mode may be predetermined and stored in the memory214. In step404of the method400, one or more sensors provide sensor data related to the loading of the vehicle10. Specifically, the front axle torque sensors220, the rear axle torque sensors222, the front axle force sensors230, and/or the rear axle force sensors232may provide load data indicating the load on (e.g., force on, torque on, power output through) the front tractive assembly70and/or the rear tractive assembly80. The controller210may utilize the load data to determine a position of the center of gravity C and/or to detect a shift in the position of the center of gravity C. In some embodiments, the controller210may compare (a) the load data from the front axle torque sensors220and/or the front axle force sensors230with (b) the load data from the rear axle torque sensors222and/or the rear axle force sensors232to determine the position of the center of gravity C and/or to determine a shift in the position of the center of gravity C. In some embodiments, the controller210compares the load data (e.g., measured torques) from the front axle torque sensors220with the load data from the rear axle torque sensors222. A relationship between (a) a torque on the front tractive assembly70, (b) a torque on the rear tractive assembly80, and (c) a longitudinal position of the center of gravity C may be predetermined and stored in the memory214. As the torque on the front tractive assembly70changes relative to the torque on the rear tractive assembly80, the controller210may determine that the center of gravity C has moved longitudinally. By way of example, coupling an implement190(e.g., a trailer) to the rear end of the frame12may increase the torque on the rear tractive assembly80relative to the torque on the front tractive assembly70and shift the center of gravity C rearward. The controller210may utilize the load data to determine (a) that the center of gravity C has shifted rearward and/or (b) the distance that the center of gravity C has shifted. In some embodiments, the controller210compares the load data (e.g., measured forces) from the front axle force sensors230with the load data from the rear axle force sensors232. A relationship between (a) a force on the front tractive assembly70, (b) a force on the rear tractive assembly80, and (c) a longitudinal position of the center of gravity C may be predetermined and stored in the memory214. As the force on the front tractive assembly70changes relative to the force on the rear tractive assembly80, the controller210may determine that the center of gravity C has moved longitudinally. By way of example, removing material (e.g., water, soil, etc.) from an implement190(e.g., a sprayer) that is coupled to the rear end of the frame12may decrease the force on the rear tractive assembly80relative to the force on the front tractive assembly70and shift the center of gravity C forward. The controller210may utilize the load data to determine (a) that the center of gravity C has shifted forward and/or (b) the distance that the center of gravity C has shifted. In step406of the method400, a desired position (e.g., a target position) and a desired weight (e.g., a target weight) of the ballast302are determined. In some embodiments, the controller210determines the desired position of the ballast302based on the operating mode of the vehicle10.
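The comparison of front and rear load data in step404can be illustrated with the inverse of the moment balance sketched earlier: the measured axle forces locate the center of gravity C along the wheelbase, and a change in that location between samples is the detected shift. The wheelbase and force readings below are assumed example values.

# Sketch of step 404: locate the center of gravity from measured axle forces
# and report how far it has moved. Wheelbase and readings are assumed values.

def cg_from_rear_axle(front_force_lb, rear_force_lb, wheelbase_in):
    """Moment balance: the CG lies closer to the axle carrying the larger force."""
    return wheelbase_in * front_force_lb / (front_force_lb + rear_force_lb)

if __name__ == "__main__":
    wheelbase_in = 124.0
    baseline_in = cg_from_rear_axle(20587.5, 20587.5, wheelbase_in)   # 62.0 in, midway
    # A trailed implement loads the rear axle; the CG moves toward the rear.
    current_in = cg_from_rear_axle(19500.0, 24500.0, wheelbase_in)
    shift_in = current_in - baseline_in   # negative value: shifted rearward
    print(round(baseline_in, 1), round(current_in, 1), round(shift_in, 1))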
For a given operating mode, (a) a loading of the vehicle10, (b) the corresponding shift in the position of the center of gravity C, and/or (c) a responsive movement of the ballast302that counteracts the shift of the center of gravity C may be predetermined and stored in the memory214. By way of example, in a harvesting mode, a harvester may be coupled to the front end of the frame12, shifting the center of gravity C forward. To counteract this shift, the ballast302may be moved to a desired position that is offset a distance rearward from a previous position of the ballast302. The desired position of the ballast302may be predetermined (e.g., experimentally, mathematically, etc.) and stored in the memory214. In some embodiments, the controller210determines the desired positon of the ballast302based on the load data received in step404. By way of example, a desired range of positions may be defined for the center of gravity C. The desired range of positions may correspond a minimized stress on the driveline50. The controller210may utilize the load data to determine when the center of gravity C has left the desired range of positions. In response to such a determination, the controller210may determine that the ballast302should be moved to return the center of gravity C to the desired range of positions. The controller210may determine a desired position of the ballast302, or the controller210may determine a direction that the ballast302should be moved to return the center of gravity C to the desired range of positions. By way of example, if the load data indicates that the center of gravity C has moved forward of the desired range of positions, the controller210may determine that the ballast302should move rearward to return the center of gravity C to the desired range of positions. In some embodiments, the controller210determines a target weight of the ballast302. The ballast302may have a range of motion, within which the ballast302is permitted to move. Movement of the ballast302outside of the range of motion may be prevented. By way of example, the ballast302may reach a hard stop that prevents movement of the ballast302beyond the range of motion. By way of another example, the ballast actuator306may have a limited range of motion that defines the range of motion of the ballast302. In some embodiments, the range of motion of the ballast302may be predetermined and stored in the memory214. In certain situations, the ballast302may be unable to fully counteract a shift of the center of gravity C within the range of motion. By way of example, if an implement190having a large mass is coupled to the rear end of the frame12, the ballast302may reach the forward end of the range of motion before the center of gravity C returns to the desired range of positions. In such an example, it may be desirable to add mass or weight to the ballast302. This added weight may increase the effect of the ballast302, shifting the center of gravity C forward into the desired range of positions. By way of another example, if an implement190having a large mass is removed from the rear end of the frame12, the ballast302may reach the rear end of the range of motion before the center of gravity C returns to the desired range of positions. In such an example, it may be desirable to remove mass or weight from the ballast302. This reduction in weight may reduce the effect of the ballast302, shifting the center of gravity C rearward into the desired range of positions. 
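The determination of a desired ballast position in step406can be illustrated with the same moment reasoning: moving the ballast302of weight m by a distance d moves the overall center of gravity C by approximately d times m divided by the total vehicle weight M, so the travel needed to correct a given center-of-gravity error can be computed and compared with the range of motion. This is only an illustrative approximation with an assumed travel limit; the disclosure describes storing predetermined positions and relationships rather than a particular formula.

# Sketch: ballast travel required to move the center of gravity by a desired
# amount, limited to an assumed range of motion. The relation
# d_ballast = d_cg * (total weight / ballast weight) is an approximation.

def required_ballast_travel_in(cg_correction_in, total_weight_lb, ballast_weight_lb):
    return cg_correction_in * total_weight_lb / ballast_weight_lb

def plan_travel(cg_correction_in, total_weight_lb=41175.0, ballast_weight_lb=2160.0,
                travel_limit_in=24.0):
    """Return (commanded travel, achievable CG correction)."""
    travel = required_ballast_travel_in(cg_correction_in, total_weight_lb, ballast_weight_lb)
    clamped = max(-travel_limit_in, min(travel_limit_in, travel))
    achieved = clamped * ballast_weight_lb / total_weight_lb
    return clamped, achieved

if __name__ == "__main__":
    # A small CG error can be corrected within the assumed 24 in of travel.
    print(plan_travel(cg_correction_in=1.0))
    # A large error saturates the travel; the remainder must be handled by a
    # weight change (see the weight-change sketch further below).
    print(plan_travel(cg_correction_in=2.0))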
In some embodiments, the controller210is configured to determine that weight should be added to or removed from the ballast302. In some such embodiments, the controller210is configured to determine that weight should be added to or removed from the ballast302in response to the ballast302reaching the end of the range of motion without the center of gravity C reaching the desired range of positions. The controller210may utilize feedback from a ballast position sensor308to determine when the ballast302has reached the end of the range of motion. If the ballast302reaches the end of the range of motion that is farthest from the center of gravity C, the controller210may determine that additional weight should be added to the ballast302. If the ballast302reaches the end of the range of motion that is closest to the center of gravity C, the controller210may determine that weight should be removed from the ballast302. In some embodiments, the controller210may determine that weight should be added to or removed from the ballast302prior to repositioning the ballast302. The controller210may determine the current position of the ballast302within the range of motion using a ballast position sensor308. The controller210may determine the current weight of the ballast302using the ballast weight sensor260. Using the current position of the ballast302, the current weight of the ballast302, load data, and the geometry of the vehicle10, the controller210may determine if the ballast302is capable of returning the center of gravity C to the desired range of positions without exceeding the range of motion of the ballast302. If the controller210determines that the ballast302is capable of returning the center of gravity C to the desired range of positions without exceeding the range of motion of the ballast302, the controller210may determine the desired position of the ballast302for the current weight of the ballast302. If the controller210determines that the ballast302is not capable of returning the center of gravity C to the desired range of positions without exceeding the range of motion of the ballast302, the controller210may determine that additional weight should be added to the ballast302or that weight should be removed from the ballast302. In step408of the method400, a notification is provided to the operator. The controller210may provide the notification through the operator interface240(e.g., as a message on a screen, as a sound, etc.). In configurations where the controller210determines that weight should be added to or removed from the ballast302(e.g., in step406), the controller210may provide a notification instructing the operator to add or remove weight to the ballast302. The notification may also tell the operator how much weight should be added or removed. By way of example, the operator interface240may provide a text notification stating “please add 300 pounds to the front ballast” or “please remove 500 pounds from the rear ballast.” The operator interface240may provide a confirmation notification indicating that no further weight should be added to or removed from the ballast302in response to an indication (e.g., from a ballast weight sensor260) that the ballast302has reached the desired weight. In some embodiments, the notification instructs an operator to move the ballast302. By way of example, the ballast actuator306may be manually controlled (e.g., through the operator interface240, through a crank on the ballast actuator306, etc.). 
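The logic described above for deciding whether weight must be added to or removed from the ballast302, together with the operator notifications of step408, can be sketched as follows. The lever arm used to convert a residual center-of-gravity error into a weight change, like the other numeric values, is an assumed illustration; the controller210as disclosed may rely on predetermined relationships instead of this approximation.

# Sketch of the decision in steps 406-408: if the ballast cannot correct the
# CG error within its range of motion, estimate how much weight to add (or
# remove) and build the operator notification. Values are assumed examples.

def weight_change_needed(residual_cg_error_in, total_weight_lb, ballast_arm_in):
    """Approximate extra ballast weight (lb), placed at the end of travel a
    lever arm ballast_arm_in from the CG, needed to move the CG the rest of
    the way. Ignores the small change in total vehicle weight."""
    return residual_cg_error_in * total_weight_lb / ballast_arm_in

def ballast_notification(cg_error_in, achieved_correction_in,
                         total_weight_lb=41175.0, ballast_arm_in=150.0):
    residual_in = cg_error_in - achieved_correction_in
    if abs(residual_in) < 0.05:   # assumed tolerance
        return None
    delta_lb = weight_change_needed(abs(residual_in), total_weight_lb, ballast_arm_in)
    rounded_lb = int(round(delta_lb / 100.0) * 100)   # round to a convenient increment
    if residual_in > 0.0:
        return f"please add {rounded_lb} pounds to the front ballast"
    return f"please remove {rounded_lb} pounds from the front ballast"

if __name__ == "__main__":
    # Travel saturated at a CG correction of about 1.26 in against a 2.0 in error.
    print(ballast_notification(cg_error_in=2.0, achieved_correction_in=1.26))
    # Error fully corrected by moving the ballast alone: no weight change needed.
    print(ballast_notification(cg_error_in=1.0, achieved_correction_in=1.0))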
In such an embodiment, the notification may provide an operator with the direction that the ballast302should move and/or the distance that the ballast302should move. By way of example, the operator interface240may provide a text notification stating “please move the front ballast forward” or “please move the front ballast rearward.” The operator interface240may provide a confirmation notification indicating that no further movement of the ballast302is required in response to an indication (e.g., from the front axle torque sensors220, the rear axle torque sensors222, the front axle force sensors230and/or the rear axle force sensors232) the center of gravity C has reached the desired range of positions and/or an indication (e.g., from the ballast position sensors308) that the ballast302has reached the desired position. In step410of the method400, the output of the prime mover52is varied. By way of example, the controller210may provide commands to the prime mover52that limit operation of the prime mover52(e.g., that limit a rotational speed of the prime mover52, that limit an output power of the prime mover52, etc.). The controller210may utilize feedback from the speed sensor53in such an operation. In some embodiments, the controller210is configured to limit the operation or the performance of the prime mover52in response to an indication that the center of gravity C is outside the desired range of positions. By way of example, the controller210may limit the rotational speed of the prime mover52to below a threshold speed (e.g., limit the rotational speed to below 3000 RPM when the normal operating speed of the prime mover52is 4000 RPM, etc.). By way of another example, the controller210may limit the output power of the prime mover52to below a threshold power (e.g., limit the output power of the prime mover52to below 80% of the maximum output power, etc.). By limiting the operation of the prime mover52when the center of gravity C is outside of the desired range of positions, the controller210may limit the stresses on the driveline50and reduce component wear. In some embodiments, the controller210is configured to provide a notification to the operator indicating that the operation of the prime mover52is limited due to an undesirable condition of the ballast302. In some such embodiments, the notification is provided whenever the center of gravity C is outside of the desired range of positions. By way of example, the operator interface240may provide a text notification stating that “vehicle CG outside of operating range—output power of the engine is limited to 75% capacity.” In other embodiments, the notification is provided when the controller210determines that the center of gravity C cannot be returned to the desired range until weight is added to or removed from the ballast302. By way of example, the operator interface240may provide a text notification stating that “vehicle CG outside of operating range—output power of the engine is limited to 50% capacity. Please add 1000 pounds to front ballast to return engine to normal operating conditions.” In step412of the method400, the ballast302is repositioned. Specifically, the controller210controls the ballast actuator306to reposition the ballast302. In some embodiments, the controller210repositions the ballast302based on the selected operating mode, the load data received in step404, the desired range of position of the center of gravity C, the determined position of the center of gravity C, operator inputs, or other information. 
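Before the examples that follow, a hedged sketch of the repositioning decision of steps 406 and 412 may be helpful. It assumes a simple one-dimensional rigid-body model in which shifting the ballast 302 by one inch moves the vehicle CG by the ratio of ballast weight to total weight; the function name, coordinate convention, and the first-order estimate of added or removed weight are illustrative assumptions rather than part of the disclosure.

```python
def ballast_plan(cg_in: float, cg_target_in: float, total_weight_lb: float,
                 ballast_weight_lb: float, ballast_pos_in: float,
                 travel_min_in: float, travel_max_in: float) -> dict:
    """Decide where to put the ballast 302, or how much weight to add or remove.

    All longitudinal positions are in inches forward of the rear axle.
    """
    cg_error = cg_target_in - cg_in                        # + means CG must move forward
    shift_per_inch = ballast_weight_lb / total_weight_lb   # CG shift per inch of ballast travel
    required_travel = cg_error / shift_per_inch
    new_pos = ballast_pos_in + required_travel
    if travel_min_in <= new_pos <= travel_max_in:
        return {"feasible": True, "ballast_position_in": new_pos}
    # Saturate at the end of the range of motion, then estimate (to first order,
    # for a front ballast located well forward of the CG) the weight change that
    # would produce the remaining correction at that end stop.
    limit = travel_max_in if required_travel > 0 else travel_min_in
    residual = cg_error - (limit - ballast_pos_in) * shift_per_inch
    weight_change_lb = residual * total_weight_lb / (limit - cg_in)
    return {"feasible": False, "ballast_position_in": limit,
            "weight_change_lb": weight_change_lb}


# With the weights assumed for Table 1 below (41,175 lb vehicle, 2160 lb ballast),
# a 1.26 inch CG correction calls for roughly 24 inches of ballast travel.
print(ballast_plan(cg_in=61.0, cg_target_in=62.26, total_weight_lb=41175.0,
                   ballast_weight_lb=2160.0, ballast_pos_in=190.0,
                   travel_min_in=186.0, travel_max_in=216.0))
```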
By way of example, the operating mode may have a predetermined position and/or weight of the ballast 302. By way of another example, the operating mode may have a predetermined relationship between the load data and the desired position of the ballast 302. By way of another example, the controller 210 may control the ballast 302 to shift the center of gravity C into the desired range of positions. Table 1 below illustrates the effect of shifting the ballast 302, according to an exemplary embodiment. In this embodiment, the total weight of the vehicle 10 is assumed to be 41,175 lbs, the ballast 302 is assumed to be 2160 lbs, and the weight of the vehicle 10 is assumed to be distributed evenly between the front tractive assembly 70 and the rear tractive assembly 80 when the ballast 302 is not extended. "Shift distance" indicates the distance that the ballast 302 has been shifted. "Front weight" and "rear weight" indicate the weight of the vehicle 10 that is supported by the front tractive assembly 70 and the rear tractive assembly 80, respectively. "Front percent" and "rear percent" indicate the portion of the total vehicle weight that is supported by the front tractive assembly 70 and the rear tractive assembly 80, respectively. "Effective ballast change" indicates the change in weight of the ballast 302 that would be necessary to achieve the same effect if the ballast 302 had not been shifted. As shown, the ballast assembly 300 is able to achieve significant changes in weight distribution without increasing the weight of the ballast 302.

TABLE 1
Shift Distance   Front Weight   Rear Weight   Front Percent   Rear Percent   Effective Ballast
(inches)         (pounds)       (pounds)      (%)             (%)            Change (%)
1                20605          20570         50.0            50.0           0.5
6                20692          20483         50.3            49.7           3.2
12               20796          20379         50.5            49.5           6.3
18               20901          20274         50.8            49.2           9.5
24               21005          20170         51.0            49.0           12.6

Solid Ballast Configuration

Referring to FIGS. 6 and 10-16, the ballast assembly 300 is shown according to a first exemplary embodiment. In this embodiment, the frame 12 of the vehicle 10 includes a first portion or stationary portion, shown as main frame 310, and a second portion or removable portion, shown as removable frame 312. The main frame 310 may be coupled to the body 20, the cab 30, the driveline 50, and the implements 190. The removable frame 312 may be removably coupled to the main frame 310 (e.g., to facilitate aftermarket implementation of the ballast assembly 300 with the vehicle 10, to facilitate maintenance of the ballast assembly 300, etc.). In other embodiments, the main frame 310 and the removable frame 312 are integrally formed such that the frame 12 includes one continuous piece. The ballast 302 includes a structural portion or frame, shown as ballast frame 320. The ballast frame 320 includes an interface portion, shown as plate 322, that is coupled to a pair of ballast supports 304 and a ballast actuator 306. A distal end portion of the ballast frame 320 includes an interface portion, shown as weight interface 324. The weight interface 324 defines a laterally-extending recess or groove. The ballast frame 320 supports a series of weights, masses, or ballast plates, shown as plates 330. The plates 330 are solid. In some embodiments, the plates 330 are made from cast iron or steel. In some embodiments, each of the plates 330 is substantially identical. In some embodiments, each plate 330 weighs approximately 120 pounds. The plates 330 are arranged laterally along the weight interface 324. Each plate 330 defines a series of protrusions, shown as frame interfaces 332, that engage the weight interface 324 to couple the plates 330 to the ballast frame 320.
By way of example, the frame interfaces 332 may engage the laterally-extending recess of the weight interface 324. In some embodiments, the plates 330 are removably coupled to the weight interface 324. By way of example, the plates 330 may slide laterally outward, out of the laterally-extending recess of the weight interface 324. In some embodiments, one or more fasteners (e.g., bolts) engage the plate 330 and/or the weight interface 324 to selectively prevent removal of the plate 330. Because the plates 330 are removably coupled to the weight interface 324, plates 330 may be added or removed to increase or decrease the weight of the ballast 302. By way of example, the ballast 302 may include six plates 330 in a first, relatively light configuration and twelve plates 330 in a second, relatively heavy configuration. In the embodiment shown in FIGS. 6 and 10-16, the ballast supports 304 are sliders or linear guides that each include an outer portion, housing, or bushing, shown as slider body 340, and an inner portion or rod, shown as slider rod 342. The sliders are each laterally offset from a longitudinal centerline of the vehicle 10. The slider bodies 340 are each coupled (e.g., fixedly coupled) to the removable frame 312. The slider rods 342 are each coupled (e.g., fixedly coupled) to the plate 322 of the ballast frame 320. The slider bodies 340 each extend longitudinally and each include a longitudinal passage that receives a slider rod 342. The slider rods 342 are configured to move longitudinally relative to the slider bodies 340 as the ballast 302 moves relative to the frame 12. In some embodiments, the slider body 340 includes a boot made from a compliant material that prevents dust or other debris from entering between the slider body 340 and the slider rod 342. In some embodiments, the ballast supports 304 constrain movement of the ballast 302 to purely longitudinal motion (e.g., such that lateral and vertical movement of the ballast 302 is limited). The slider bodies 340 and the slider rods 342 may be configured to have minimal friction relative to one another, facilitating movement of the slider rods 342, even when supporting the weight of the ballast 302. In other embodiments, the ballast supports 304 are otherwise configured. By way of example, the ballast supports 304 may include a tubular member that is coupled to the ballast frame 320 and a receiver that is coupled to the frame 12. The receiver may include a series of ball bearings that each engage an exterior surface of the tubular member, supporting the tubular member while permitting longitudinal movement of the tubular member with minimal friction. In the embodiment shown in FIGS. 6 and 10-16, the ballast actuator 306 is a linear actuator including a body or housing, shown as actuator body 350, and a shaft or rod, shown as actuator rod 352. In some embodiments, the linear actuator is a hydraulic or pneumatic cylinder. In other embodiments, the linear actuator is an electric motor or another type of actuator. The actuator body 350 is coupled to the removable frame 312, and the actuator rod 352 is coupled to the plate 322 of the ballast frame 320. As shown, the actuator body 350 is pivotally coupled to an interface, shown as clevis 354, that is formed by the removable frame 312. In operation, the ballast actuator 306 extends to move the ballast 302 longitudinally outward (e.g., farther from the center of gravity C) and retracts to move the ballast 302 longitudinally inward (e.g., closer to the center of gravity C).
The slider bodies 340 may support the weight of the ballast 302 such that the primary loading experienced by the ballast actuator 306 is along the length of the ballast actuator 306. As shown in FIG. 10, the ballast 302 is positioned forward of the body 20 when fully retracted and extends even further forward when fully extended. In other embodiments, the ballast 302 is otherwise positioned throughout the vehicle 10. By way of example, the ballast 302 may be positioned directly beneath the body 30. By way of another example, the ballast 302 may extend rearward of the body 20.

Fluid Ballast Configuration

FIG. 17 illustrates an alternative embodiment of the ballast assembly 300, shown as ballast assembly 500. The ballast assembly 500 may be substantially similar to the ballast assembly 300 except as otherwise specified herein. Instead of using discrete, solid weights such as the plates 330 to provide the mass of the ballast, the ballast assembly 500 utilizes a liquid, shown as fluid F, as the ballast 302. The ballast assembly 500 includes a first reservoir, tank, container, vessel, drum, receptacle, or holder, shown as tank 502, that defines an internal volume or space, shown as chamber 504. The chamber 504 may be filled (e.g., partially or completely) with the fluid F. The ballast assembly 500 further includes a second reservoir, tank, container, vessel, drum, receptacle, or holder, shown as tank 506, that defines an internal volume or space, shown as chamber 508. The chamber 508 may also be filled (e.g., partially or completely) with the fluid F. The chamber 504 and the chamber 508 may be selectively fluidly coupled to one another through one or more pumps or valves, shown as pump 510. The pump 510 is configured to move the fluid F between the chamber 504 and the chamber 508. The tank 502 and the tank 506 are both coupled to the frame 12. In some embodiments, the tank 502 and the tank 506 are longitudinally offset from one another. By way of example, the tank 502 may be positioned near the front end of the frame 12, and the tank 506 may be positioned near the rear end of the frame 12. As the pump moves the fluid F between the tank 502 and the tank 506, the center of gravity C of the vehicle 10 shifts accordingly. Accordingly, the distribution of the fluid F between the tank 502 and the tank 506 can be varied to control the position of the center of gravity C. In this way, the pump 510 acts as a ballast actuator 306, and the tank 502 and the tank 506 act as ballast supports 304. One or more floats may be placed within the tanks 502 and 506 to act as ballast position sensors 308. The range of motion of the fluid F may be considered the path between the tank 502 and the tank 506. One end of the range of motion may be a configuration in which all of the fluid F is contained within the tank 502, and another end of the range of motion may be a configuration in which all of the fluid F is contained within the tank 506. In some embodiments, the fluid F can be added to or removed from the ballast assembly 500 throughout operation. As shown in FIG. 17, the ballast assembly 500 includes a valve (e.g., a flow control valve, a shutoff valve, etc.), shown as valve 520, that fluidly couples a first conduit, shown as drain pipe 522, to a second conduit, shown as hose 524. The drain pipe 522 is coupled to the tank 502 and fluidly coupled to the chamber 504. In some embodiments, the drain pipe 522 extends near the bottom of the chamber 504 such that tank 502 can be completely drained.
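For the fluid variant just introduced, the same center-of-gravity bookkeeping reduces to choosing how the fluid F is split between tank 502 and tank 506. The snippet below is a rough sketch under an assumed one-dimensional model (the function name, coordinates, and example figures are not from the disclosure): it solves for the fraction of the fluid that should occupy the front tank 502 so the combined CG lands on a target, clamping to the ends of the fluid's range of motion (all fluid in one tank).

```python
def front_tank_fraction(dry_weight_lb: float, dry_cg_in: float,
                        fluid_weight_lb: float,
                        front_tank_pos_in: float, rear_tank_pos_in: float,
                        cg_target_in: float) -> float:
    """Fraction of the fluid F that should sit in tank 502; the rest goes to tank 506.

    Positions are in inches forward of the rear axle.
    """
    total = dry_weight_lb + fluid_weight_lb
    # Moment the fluid must contribute for the combined CG to land on the target.
    fluid_moment = cg_target_in * total - dry_cg_in * dry_weight_lb
    fluid_cg = fluid_moment / fluid_weight_lb
    fraction = (fluid_cg - rear_tank_pos_in) / (front_tank_pos_in - rear_tank_pos_in)
    return min(max(fraction, 0.0), 1.0)   # clamp to the ends of the range of motion


# Example: roughly half the fluid forward pulls the CG to the assumed target.
print(front_tank_fraction(dry_weight_lb=39000.0, dry_cg_in=60.0,
                          fluid_weight_lb=2175.0,
                          front_tank_pos_in=170.0, rear_tank_pos_in=20.0,
                          cg_target_in=62.0))
# -> approximately 0.52
```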
The ballast assembly 500 further includes a valve (e.g., a flow control valve, a shutoff valve, etc.), shown as valve 530, that fluidly couples a third conduit, shown as drain pipe 532, to a fourth conduit, shown as hose 534. The drain pipe 532 is coupled to the tank 506 and fluidly coupled to the chamber 508. In some embodiments, the drain pipe 532 extends near the bottom of the chamber 508 such that tank 506 can be completely drained. One or more valves or pumps, shown as pump 540, fluidly couple the hoses 524 and 534 with one or more sources or receivers of the fluid. Accordingly, the pump 540 may supply or remove the fluid F. The valves 520 and 530 can be operated to select which tank the fluid F is added to or removed from. The fluid F may include water, fuel, chemicals, fertilizer, or other fluids. In some embodiments, the fluid F is added or removed solely based on the desired weight of the ballast assembly 500. In other embodiments, the fluid F is utilized by another system of the vehicle 10 throughout operation. By way of example, the fluid F may be a fuel (e.g., diesel, gasoline, etc.) that is consumed by the prime mover 52 throughout operation. Accordingly, the pump 540 may supply the fuel to the prime mover 52 and may supply the fuel to the tanks 502 and 506 from a fill port on the exterior of the vehicle 10. As utilized herein with respect to numerical ranges, the terms "approximately," "about," "substantially," and similar terms generally mean +/−10% of the disclosed values, unless specified otherwise. As utilized herein with respect to structural features (e.g., to describe shape, size, orientation, direction, relative position, etc.), the terms "approximately," "about," "substantially," and similar terms are meant to cover minor variations in structure that may result from, for example, the manufacturing or assembly process and are intended to have a broad meaning in harmony with the common and accepted usage by those of ordinary skill in the art to which the subject matter of this disclosure pertains. Accordingly, these terms should be interpreted as indicating that insubstantial or inconsequential modifications or alterations of the subject matter described and claimed are considered to be within the scope of the disclosure as recited in the appended claims. It should be noted that the term "exemplary" and variations thereof, as used herein to describe various embodiments, are intended to indicate that such embodiments are possible examples, representations, or illustrations of possible embodiments (and such terms are not intended to connote that such embodiments are necessarily extraordinary or superlative examples). The term "coupled" and variations thereof, as used herein, means the joining of two members directly or indirectly to one another. Such joining may be stationary (e.g., permanent or fixed) or moveable (e.g., removable or releasable). Such joining may be achieved with the two members coupled directly to each other, with the two members coupled to each other using a separate intervening member and any additional intermediate members coupled with one another, or with the two members coupled to each other using an intervening member that is integrally formed as a single unitary body with one of the two members.
If “coupled” or variations thereof are modified by an additional term (e.g., directly coupled), the generic definition of “coupled” provided above is modified by the plain language meaning of the additional term (e.g., “directly coupled” means the joining of two members without any separate intervening member), resulting in a narrower definition than the generic definition of “coupled” provided above. Such coupling may be mechanical, electrical, or fluidic. References herein to the positions of elements (e.g., “top,” “bottom,” “above,” “below”) are merely used to describe the orientation of various elements in the figures. It should be noted that the orientation of various elements may differ according to other exemplary embodiments, and that such variations are intended to be encompassed by the present disclosure. The hardware and data processing components used to implement the various processes, operations, illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or, any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some embodiments, particular processes and methods may be performed by circuitry that is specific to a given function. The memory (e.g., memory, memory unit, storage device) may include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present disclosure. The memory may be or include volatile memory or non-volatile memory, and may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. According to an exemplary embodiment, the memory is communicably connected to the processor via a processing circuit and includes computer code for executing (e.g., by the processing circuit or the processor) the one or more processes described herein. The present disclosure contemplates methods, systems, and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. 
By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions. Although the figures and description may illustrate a specific order of method steps, the order of such steps may differ from what is depicted and described, unless specified differently above. Also, two or more steps may be performed concurrently or with partial concurrence, unless specified differently above. Such variation may depend, for example, on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations of the described methods could be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps, and decision steps. It is important to note that the construction and arrangement of the vehicle10and the systems and components thereof (e.g., the driveline50, the braking system160, the control system200, etc.) as shown in the various exemplary embodiments is illustrative only. Additionally, any element disclosed in one embodiment may be incorporated or utilized with any other embodiment disclosed herein. | 77,491 |
11858565 | DETAILED DESCRIPTION Referring toFIG.1, there is shown a machine10according to one embodiment, and including a frame12, an operator cab14and an implement system16. Machine10further includes a ground-engaging track system18for moving machine10about a work area. Machine10is shown in the context of an excavator, where cab14and implement system16can be rotated about ground-engaging track system18. Ground-engaging track system18includes a first track20and a second track22positioned at opposite sides of frame12. Description and discussion of features and functionality of track20and associated components can be understood to refer by way of analogy to track22, as the respective tracks will typically be substantially identical. Track system18also includes a track roller frame24structured to support a plurality of rotatable track engaging elements, including a drive sprocket26visible in the illustration ofFIG.1, an idler typically positioned at an opposite end of track roller frame24from sprocket26, track rollers structured to support much of the weight of machine10and distributed between the idler and drive sprocket26, and carrier rollers supporting track20above track roller frame24. Track20also includes a first track chain30and a second track chain32, and a plurality of track pins34coupling together first track chain30and second track chain32. InFIG.1, drive sprocket26is shown in contact with track pins34, and can be rotated to advance track20about the various rotatable track-engaging elements in an endless loop to move machine10about a work area. A plurality of track shoes36may be mounted to first track chain30and second track chain32, such as by bolting in a generally conventional manner. As will be further apparent from the following description, ground-engaging track system18may be uniquely configured for improved resistance to degradation or failure of components and extended service life compared to known systems. Referring also now toFIG.2, ground-engaging track system18can include a plurality of track joint assemblies38, one of which is shown in partially sectioned view. Track joint assembly38includes a first track chain30and a second track chain32each including track links40and42having, respectively, an outboard link strap44and46with an outboard pin bore48and50. Track links40and42also include, respectively, an inboard link strap52and54with an inboard pin bore56and58. In the illustrated embodiment, track links40and42are mirror images of one another, and have an offset configuration where outboard link straps44and46are laterally offset relative to inboard link straps52and54. Track joint assembly38further includes a plurality of track pins34each including a solid pin body35, and each defining a longitudinal axis60. The plurality of track pins34, hereinafter referred to in the singular, each include a first pin end62, a second pin end64, a center section66extending from first pin end62to second pin end64and having an outer wear surface68that contacts drive sprocket26within sprocket pockets alternating with sprocket teeth. First track chain30includes a first track rail55and second track chain32includes a second track rail57. An idler and track rollers (not shown) can ride on track rails55and57in a generally conventional manner. First pin end62includes a first terminal end surface88and second pin end64includes a second terminal end surface90. 
Track joint assembly 38 further includes a first track joint 70 that includes an outboard link strap 44 in a track link 40 in first track chain 30, first pin end 62, a first interference-fitted insert 72 within the respective outboard pin bore 48 supporting first pin end 62 for rotation, and a first bearing surface 74. First bearing surface 74 extends circumferentially around longitudinal axis 60, and is located radially between first pin end 62 and outboard link strap 44. First track joint 70 further includes an inboard link strap 52 in a track link 40 in first track chain 30, and a first portion 76 of center section 66 positioned in the respective inboard pin bore 56. Track joint assembly 38 also includes a second track joint 78 including an outboard link strap 46 in a track link 42 in second track chain 32, second pin end 64, and a second interference-fitted insert 80 within the respective outboard pin bore 50 and supporting second pin end 64 for rotation. Second track joint 78 also includes a second bearing surface 82 extending circumferentially around longitudinal axis 60, and positioned radially between second pin end 64 and outboard link strap 46. Second track joint 78 still further includes an inboard link strap 54 in a track link 42 in second track chain 32, and a second portion 84 of center section 66 positioned in the respective inboard pin bore 58. In the illustrated embodiment, first portion 76 and second portion 84 of center section 66 are interference-fitted in the respective inboard pin bores 56 and 58. Also in the illustrated embodiment, each of first interference-fitted insert 72 and second interference-fitted insert 80 extends axially through the respective outboard pin bore 48 and 50. Each of first track joint 70 and second track joint 78 may further include a rotatable bushing 94 and 95 having the respective bearing surfaces 74 and 82 formed thereon. As noted above, first pin end 62 and second pin end 64 are supported for rotation by way of first interference-fitted insert 72 and second interference-fitted insert 80. Each of first interference-fitted insert 72 and second interference-fitted insert 80 may include an inwardly extending flange portion 96 and 97, respectively, with rotatable bushings 94 and 95 being trapped axially between center section 66 of track pin 34 and the respective inwardly extending flange portion 96 and 97. Each of first track joint 70 and second track joint 78 may further include a pin retainer 98 and 99 positioned outboard of and adjacent to the respective inwardly extending flange portion 96 and 97. Track pin 34 may also have formed therein a first circumferential groove 41 on first pin end 62, and a second circumferential groove 43 on second pin end 64. Each of pin retainer 98 and pin retainer 99 may include a snap ring fitted into the corresponding groove 41 and 43 as shown, and contacted by inwardly extending flange portions 96 and 97, respectively, to maintain desired relative axial positioning of first pin end 62 and second pin end 64 in outboard link straps 44 and 46. In alternative embodiments, welded-on plates could be attached to first pin end 62 and second pin end 64 in lieu of snap rings within grooves, or some other pin retention strategy could be used. Also in the illustrated embodiment, seals, such as lip seals, may be positioned in first track joint 70 and second track joint 78, including a first seal 91 at an inboard position in track joint 70, and a second seal 89 at an outboard position. Another seal 93 may be positioned at an inboard position in track joint 78, and yet another seal 92 positioned at an outboard position in track joint 78.
Bushings94and95could be self-lubricating bushings or bearings, with no internal lubricant supplied. Each of track joint70and track joint78could also be grease lubricated. In an excavator implementation the relatively low proportion of tramming time, and other service conditions ordinarily expected, can be consistent with ground-engaging track system18being a dry track system, or lubricated by way of self-lubricating or greased bearings. Track system18differs from certain other track systems, notably track systems used in many excavators, in that outboard portions of track joints70and78are rotating pin joint connections, in contrast to other systems where the track pin is interference-fitted with and therefore does not rotate relative to outboard link straps. Also in contrast to certain known track systems and track joint assemblies, no bushing is positioned upon track pin34and, instead, contact with sprocket26is direct contact between outer wear surface68and sprocket26. It will be recalled that first portion76of center section66and second portion84of center section66may be interference-fitted within inboard link straps52and54, respectively. Accordingly, inboard link straps52and54do not rotate in such an implementation relative to pin34as track20is advanced about the various rotatable track-engaging elements. A track guiding space86extends between first track chain30and second track chain32. Center section66of track pin34has an enlarged diameter, relative to first pin end62and second pin end64, and outer wear surface68is exposed to track guiding space86. It can further be noted that track pin34has a stepped profile within each of first track joint70and second track joint78. In addition to omitting a center bushing, track joint assembly38, and other track joint assemblies contemplated herein, differs from certain known designs in that the relatively enlarged diameter of track pin34provides sacrificial wear material of track pin34itself that can be gradually worn away over the course of a service life of ground-engaging track system18. Referring now toFIG.3, there is shown a track joint assembly138according to another embodiment, and using certain reference numerals to identify features that may be the same or identical to features described in connection with the preceding embodiment. Track joint assembly138includes a first track joint170and a second track joint178including track chains130and132with track rails155and157, respectively. A track guiding space186extends between track chains130and132. A track shoe36is shown attached to track chains130and132. Track joint assembly138also includes a track pin34having an outer wear surface68, with track pin34potentially being substantially identical to the track pin discussed in connection with the preceding embodiment. Track joint assembly138differs from the preceding embodiment in that rather than a system of interference-fitted inserts and bushings to support opposite ends of track pin34, only a single inserted element is provided in each track joint to support track pin34for rotation. Track pin34can include an outer bearing surface134formed directly thereon, and rotatable within an interference-fitted bushing172. Another bearing surface182is shown at an opposite end of track pin34formed directly thereon, and rotatable within an interference-fitted bushing180. Track pin34may be interference-fitted with inboard track links in track chains130and132in a manner generally analogous to that of the preceding embodiment. 
Referring now toFIG.4, there is shown a track joint assembly238according to another embodiment and including a first track chain230and a second track chain232. A first track joint is shown at270and a second track joint is shown at278. Retention of track pin234in an outboard link strap244can be generally analogous to the embodiment ofFIG.2where a first pin end262of track pin234is supported for rotation. A second pin end264may be analogously configured and supported. Track pin234also includes a center section266having an enlarged diameter, relative to first pin end262and second pin end264. A track guiding space286extends between track chain230and track chain232. Reference numeral246identifies an inboard link strap in track chain230. Rather than being interference-fitted with inboard link straps, track pin234and center section266may be rotatably supported and track joint assembly238may thus include bearing surfaces, one of which is labeled at277, rotatably supporting center section266within each of the subject inboard link straps. Referring now toFIG.5in particular, track pin34may be dimensioned and proportioned in a manner that enables the desired configurations and functionality of first track joint70and second track joint78in at least certain embodiments. A first lead-in chamfer100and a second lead-in chamfer102are formed on center section66adjacent to first pin end62and second pin end64for interference-fitting, respectively, first portion76and second portion84of center section66with inboard link straps52and54in first track chain30and second track chain32. Track pin34, including solid pin body35, has a full axial length400extending between first terminal end88and second terminal end90. First pin end62and second pin end64may each have a pin end axial length430, with the pin end axial length430being equal on each of first pin end62and second pin end64. It will also be recalled center section66has an enlarged diameter shown via reference numeral410inFIG.5. Enlarged diameter410is greater than pin end axial length430. Center section66also has a center section axial length420. Center section axial length420is from 60% to 63% of full axial length400, and from 314% to 318% of pin end axial length430. In a refinement, enlarged diameter410is 50% to 51% of center section axial length420, and 61% of full axial length400. Also in the refinement center section axial length420is 318% of pin end axial length430. A pin end diameter is shown at440inFIG.5, and may be approximately equal to pin end axial length430. In a further refinement, full axial length400is about 200 millimeters, more particularly 207 millimeters. In the further refinement, pin end axial length430is about 40 millimeters. In the further refinement, enlarged diameter410is about 60 millimeters, and more particularly 63 millimeters. In the further refinement center section axial length420is about 130 millimeters, more particularly 127 millimeters. The term “about” is understood herein to mean generally or approximately, for example using conventional rounding such that “about 127 millimeters” means from 126.5 millimeters to 127.4 millimeters, within measurement error. In other instances, the term about could have a different or broader meaning to a person skilled in the art than conventional rounding practices, depending upon context. INDUSTRIAL APPLICABILITY As discussed above, ground-engaging track system18, and other track systems contemplated herein, departs from conventional designs in various ways. 
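The proportional relationships recited above for track pin 34 can also be expressed numerically. The helper below is only a sketch (the function name and return format are assumptions, not part of the disclosure); it applies the 60-63%, 314-318%, and 50-51% ranges stated earlier to a chosen full pin length, such as the roughly 207 millimeter refinement.

```python
def pin_dimension_ranges(full_length_mm: float) -> dict:
    """Allowable dimension windows for track pin 34 derived from the stated proportions."""
    # Center section axial length 420 is 60% to 63% of the full axial length 400.
    center_len = (0.60 * full_length_mm, 0.63 * full_length_mm)
    # Center section length is 314% to 318% of the pin end axial length 430.
    pin_end_len = (center_len[0] / 3.18, center_len[1] / 3.14)
    # Enlarged diameter 410 is 50% to 51% of the center section axial length 420.
    diameter = (0.50 * center_len[0], 0.51 * center_len[1])
    return {"center_section_length_mm": center_len,
            "pin_end_length_mm": pin_end_len,
            "enlarged_diameter_mm": diameter}


print(pin_dimension_ranges(207.0))
# Roughly 124-130 mm for the center section, 39-42 mm for the pin ends, and
# 62-67 mm for the enlarged diameter, bracketing the ~127 mm, ~40 mm, and
# ~63 mm refinement values given above.
```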
Track systems are often purpose-built for certain types of machines and/or certain types of working applications. For these and other reasons, track configurations that provide fixed interfaces between certain components, and rotating interfaces between other components, are often not readily adapted to other configurations without potentially affecting the manner and extent of wear or other relationships between or among components. In the present case, ground-engaging track system18, and track pin34in particular, is configured in a manner that can be expected to be installed and operated in a machine such as an excavator without significant modifications or alterations to the track system, or undesired changes in the expected wear patterns or service life. In other words, ground-engaging track system18can be installed to an existing excavator platform quite easily. This is due, at least in part, to the design of track pin34, including its dimensions and proportions, which do not alter factors such as pitch or track width as compared to earlier strategies, and does not require a bushing on the track pin, or further additional components. The present description is for illustrative purposes only, and should not be construed to narrow the breadth of the present disclosure in any way. Thus, those skilled in the art will appreciate that various modifications might be made to the presently disclosed embodiments without departing from the full and fair scope and spirit of the present disclosure. Other aspects, features and advantages will be apparent upon an examination of the attached drawings and appended claims. As used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. | 15,993 |
11858566 | DETAILED DESCRIPTION Both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the features, as claimed. As used herein, the terms “comprises,” “comprising,” “having,” including,” or other variations thereof, are intended to cover a non-exclusive inclusion such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Moreover, in this disclosure, relative terms, such as, for example, “about,” “substantially,” “generally,” and “approximately” are used to indicate a possible variation of ±10% in the stated value. FIG.1illustrates a track type machine10according to the present disclosure. Machine10may embody any machine that is driven, propelled, positioned, and/or maneuvered by operating “continuous” track type traction device. Such machines may include, for example, track type tractors, skid steers, dozers, excavators, backhoes, track loaders, front shovels, rope shovels, or any other type of track-maneuverable machine. Machine10may include a pair of track assemblies12(only one shown) on opposing sides of machine10and driven by a driving mechanism14, such as a machine engine or other power source (not shown) via at least one drive gear or sprocket16. Each track assembly12may form separate endless loops. A plurality of track shoes18may be coupled to an outer surface of track assembly12in order to aid in the engagement of the ground surface. Track assembly12may include a plurality of other components that form the continuous track, ground-engaging portion of the drive system of machine10. Track assembly12may be coupled to an undercarriage assembly20that includes, for example, sprocket16, at least one idler, a plurality of rollers, and any other component of an undercarriage assembly known in the art. Track assembly12may be a chain that includes multiple structurally similar link subassemblies, each of which may include a pair of track links. A pair of track links may include a track link22and a respectively paired track link (not shown inFIG.2, which is a side view) that is parallel and spaced opposite from track link22. In some embodiments, adjacent track links22may be coupled together via a plurality of pin assemblies24. Each track link22may be engaged by teeth of sprocket16to drive track assembly12around undercarriage assembly20. As further shown inFIG.1, machine10may include at least one sensing device32(illustrated by dashed lines inFIGS.1and3A) and at least one communication device34. Sensing device32may be an electronic device configured to detect a parameter of track assembly12and transmit a signal indicative of the parameter to communication device34and/or to a remote device. In the exemplary embodiment, sensing device32may be a wear sensor and may be configured to measure a parameter associated with an amount of wear experienced by a track link22. As used herein, a “wear parameter” is a measurement or other characteristic of a monitored component or sensing device32that may indicate an amount of wear experienced by the monitored component (e.g., when compared to a previous measurement or other previous characteristic). Sensing device32may be mounted in a track link22(as shown inFIG.3A) and configured to detect a wear parameter thereof. 
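One way to make the wear-parameter idea concrete is sketched below. It is only an illustrative assumption of how readings from a depth-type sensing device 32 might be turned into a wear estimate and a payload for communication device 34; the disclosure does not prescribe a particular algorithm, service limit, or message format.

```python
def estimate_wear_mm(baseline_distance_mm: float, measured_distance_mm: float) -> float:
    """Material lost from the wear surface, which recedes toward the sensor as it wears."""
    return max(baseline_distance_mm - measured_distance_mm, 0.0)


def wear_report(link_id: str, wear_mm: float, service_limit_mm: float = 8.0) -> dict:
    """Payload the sensing device 32 might transmit to the communication device 34."""
    return {"link": link_id,
            "wear_mm": round(wear_mm, 2),
            "percent_of_limit": round(100.0 * wear_mm / service_limit_mm, 1),
            "service_due": wear_mm >= service_limit_mm}


# Example: the as-new reading was 32.0 mm; the latest reading is 26.5 mm.
print(wear_report("link-22-A", estimate_wear_mm(32.0, 26.5)))
# {'link': 'link-22-A', 'wear_mm': 5.5, 'percent_of_limit': 68.8, 'service_due': False}
```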
Sensing device32may be configured to detect a wear parameter associated with wear of at least one surface of a body of track link22. For example, sensing device32may include a wear portion33positioned at a surface of track link22such that, as the surface wears away, the wear portion33of sensing device32also wears away. In some embodiments, sensing device32may use a depth sensor that uses ultrasonic waves, sound waves, lasers, etc. to determine a distance from sensing device32to a surface of track link22. One or more track links22may include sensing device32, as detailed further below. FIG.2is a perspective view of an exemplary track link22, according to aspects of the present disclosure. As shown inFIG.2, track link22may include a link body50having an outer surface52that defines a perimeter of the link body50, and thus defines a shape of track link22. Track link22may include a height between 200-280 mm, a width between 80-200 mm, and a thickness between 50-150 mm. However, it is understood that track link22may include any size and/or shape, as desired. Track link22may include an outward-facing surface64and an inward-facing surface (not shown). Outward-facing surface64may face away from machine10and the inward-facing surface may face toward machine10when track assembly12is installed on machine10. Each track link22may include one or more apertures54,56configured to receive at least a portion of track pin assemblies24in a manner known in the art. In the exemplary embodiment, track link22includes a first aperture54and a second aperture56at respective opposite ends and/or spaced apart along a longitudinal axis of each track link22. It is understood that track link22may include any number of apertures54,56for receiving respective track pin assemblies24. Each track link22may also include one or more shoe holes58(unseen inFIG.2due to angle of perspective view) in a mounting surface60of link body50. Shoe holes58may extend as through-holes into the link body50substantially along a vertical axis of track link22. Track shoes18may be attached to track link22on mounting surface60. For example, fasteners (e.g., threaded fasteners), such as bolts (not shown) or the like, may be disposed within shoe holes58to attach track shoes18to track link22, and corresponding threaded fasteners, such as nuts or the like, may be disposed on the ends of the bolts. Track link22also includes a cavity62formed in link body50(e.g., in a surface64of link body50) and may be configured to receive sensing device32, as detailed further below. Cavity62may include a size and shape to receive and accommodate at least a portion of sensing device32. For example, cavity62may include a generally rectangular or square shape and may include a height of about 31 mm, a width of about 36 mm, and a depth of about 32 mm. However, it is understood that cavity62may include any size and/or shape, as desired. Cavity62may receive a containment material to secure the sensing device32in cavity62, as detailed further below. In some embodiments, a passage66may be connected to cavity62and may be configured to receive a wear portion of sensing device32. The passage66may extend from a surface68to cavity62such that the wear portion of sensing device32may wear away with surface68. For example, surface68may be a wear surface and cavity62may be located adjacent surface68. A wear surface may be any surface of link body50in which material wears away during use of track assembly12. 
For example, material of surface68may be worn away through contact with components of undercarriage assembly20(e.g., the rollers) and/or other external materials (e.g., the ground). Thus, sensing device32may detect an amount of material that has been worn away from surface68. As shown inFIG.2, surface68may be generally flat to facilitate interaction with the components, such as the rollers, of undercarriage assembly20. However, it is understood that surface68may include an uneven, non-flat surface, such as one or more curved surfaces or the like. Track link22may include one or more surface features70on surface64of link body50. As shown inFIGS.2and3A-3B, the surface features may include one or more fastener protrusions72, one or more indentations74, and/or a cavity protrusion76. As used here, a “protrusion” is a portion of track link22that is raised or proud with respect to surrounding surfaces (e.g., surface64). The fastener protrusions72may extend from surface64and may be generally aligned with shoe holes58. The fastener protrusions may provide additional support or structural integrity of shoe holes58for the fasteners. The one or more indentations74may include at least one indentation74that is substantially parallel with the longitudinal axis of track link22. The indentation74may provide clearance of the track link22from components of undercarriage assembly20(e.g., clearance from flanges of the rollers when the track link22becomes worn). In some instances, the indentation74may extend substantially along the longitudinal axis of track link22and at least a portion of cavity62may be formed through a portion of indentation74. As detailed above, it may be difficult to seal the cavity while the containment material is setting, drying, or otherwise solidifying due to the uneven surfaces provided by indentation74. Thus, the track link22of the present disclosure may provide cavity protrusion76having a flat surface78to provide an improved sealing surface while the containment material sets. Cavity protrusion76may extend from surface64and may substantially surround cavity62. For example, cavity protrusion76may include additional material on track link22such that cavity62is formed through cavity protrusion76, as detailed further below. At least a portion of protrusion76may extend from at least a portion of indentation74such that protrusion76interrupts indentation74. At least a portion of protrusion76may also be located adjacent at least one fastener protrusion72. In some embodiments, protrusion76may abut at least a portion of at least one fastener protrusion72. As shown inFIGS.2and3A-3B, protrusion76may extend from surface64at a greater height than the one or more fasteners protrusions72. However, it is understood that protrusion76may extend from surface64at any height as desired, such as a height equal to, or less than, a height of the one or more fastener protrusions72. Protrusion76may include a continuous and uniform height around an entirety of cavity62such that protrusion76forms a uniform and continuous edge of cavity62. For example, protrusion76may include a thickness (e.g., a height from surface64) of about 10 mm, a width of about 40 mm, and a length of about 40 mm. The height, or thickness, of protrusion76from surface64may be formed and defined by flat surface78. Flat surface78may provide a planar surface that interacts with a sealing device80such that a seal is formed while the containment material sets, as detailed below with respect toFIGS.3A and3B. 
For example, flat surface 78 may form a plane that may be spaced from a plane formed by surface 64. Flat surface 78 may be substantially normal to the edge surfaces (e.g., surface 68). However, it is understood that flat surface 78 may be non-normal to the edge surfaces. Flat surface 78 may include a shape substantially similar to a shape of cavity 62. For example, the flat surface 78 may form a generally rectangular or square shape. However, it is understood that flat surface 78 of cavity protrusion 76 may include any size and/or shape, as desired. Cavity 62 may extend into link body 50 from flat surface 78 of protrusion 76. For example, cavity 62 may include a blind hole such that cavity 62 extends into only a portion of link body 50 at a depth less than an entirety of the depth of link body 50. It is understood that cavity 62 may extend into link body 50 at any depth as desired, including extending through an entirety of link body 50 (e.g., from flat surface 78 through another surface of link body 50 opposite flat surface 78) so as to form an aperture.

INDUSTRIAL APPLICABILITY

The disclosed aspects of track link 22 may be employed in any machine that includes a tracked undercarriage that includes links coupled together to form one or more tracks. Cavity protrusion 76 of track link 22 described herein may provide flat surface 78 for providing an improved sealing surface during containment of sensing device 32. Cavity protrusion 76 may also provide additional material such that cavity 62 may include an adequate depth for receiving sensing device 32 and receiving the containment material. FIGS. 3A and 3B illustrate a sealing method for sealing cavity 62 while the containment material sets, or otherwise solidifies. FIG. 3A illustrates placement of sensing device 32 in cavity 62. Sensing device 32 may be placed in cavity 62 and secured by the containment material. For example, sensing device 32 may rest against a rear surface (e.g., opposite an open portion) of cavity 62 and/or may rest against another surface of cavity 62. However, it is understood that sensing device 32 may be placed in cavity 62 so as to not contact any surface of cavity 62 and may be held in place by the containment material. The wear portion 33 of sensing device 32 may be inserted into passage 66 when sensing device 32 is placed in cavity 62. Thus, as detailed above, the wear portion 33 may be located at surface 68 such that the wear portion 33 wears when surface 68 wears. When the sensing device 32 has been placed in cavity 62 (e.g., and wear portion 33 is inserted into passage 66), containment material may be poured, injected, or otherwise placed in cavity 62 around sensing device 32 such that the containment material covers at least a portion of sensing device 32. The containment material may include, for example, a potting epoxy that may be poured and/or injected into cavity 62 with sensing device 32. After the containment material has been placed in cavity 62, sealing device 80 may be placed on flat surface 78 (as shown in FIG. 3B) while the containment material sets, dries, and/or otherwise solidifies. For example, the potting epoxy may cure to form a solid material, thereby holding sensing device 32 in place. It is understood that while sensing device 32 is not shown in FIG. 3B (e.g., due to sealing device 80 covering cavity 62), sensing device 32 is contained in cavity 62 in the embodiment of FIG. 3B. As shown in FIG. 3A, the sealing device 80 may include a plunger device having a handle 82 extending between a first end and a second end, and a seal 84 located at the second end of the handle 82.
Seal 84 may include an elastomer or other like material known in the art. The seal 84 may include a shape generally corresponding to a shape of cavity 62 (e.g., generally rectangular and/or square). A size of seal 84 may be generally larger than cavity 62. Thus, seal 84 may be placed on flat surface 78 such that seal 84 substantially covers cavity 62. In some embodiments, a portion of seal 84 may be inserted into a portion of cavity 62 when seal 84 is placed on flat surface 78. Seal 84 may also include an indentation 86 such that excess containment material may flow through indentation 86 when seal 84 is placed on flat surface 78. Accordingly, a portion of the containment material may flow onto flat surface 78 and as the containment material sets, the containment material may recede back towards cavity 62. Flat surface 78 may also provide ease of clean-up of the excess containment material that sets on the flat surface 78. It is understood that the sealing device 80 may include any sealing device known in the art that includes a seal 84 for being placed over cavity 62 to seal cavity 62 while the containment material sets. Further, it is understood that containment material may include any material having sufficient strength to hold sensing device 32 in place while also being capable of allowing signals to be transmitted therethrough (e.g., to allow sensing device 32 to communicate with communication device 34). FIG. 4 is a flowchart illustrating a method 400 of producing a track link 22 having a flat surface 78. A step 402 may include forming a general shape of link body 50 of the track link 22. Forming the general shape of link body 50 may include heating raw material (e.g., steel) and forming the general shape of the link body 50 having an approximate shape and size of the final link body 50. A step 404 may include forming a final shape of the link body 50. For example, the final shape of link body 50 may be formed by forging. The formed link body 50 includes a protrusion 76 extending from a surface 64 of the link body 50. The forging may include die forging that includes one or more dies and/or hammer-type machines. For example, the forging may include open-die forging in which one die is used to shape the link body 50, or may include closed-die forging in which two dies (e.g., a top die and a bottom die) are used to shape the link body 50. The shape of the forging may be incorporated into the dies as a negative image such that the impact of the dies on the heated raw material forms the raw material into the forged shape of the link body 50. In the exemplary embodiment, the dies may include a shape that includes the shape of the protrusion 76 such that the protrusion 76 is formed by the forging of the link body 50. In some embodiments, the sensing device 32 may be placed in a single track link 22 (e.g., and/or in less than an entirety of the track links 22) of the track assembly 12 and the dies for forging the track links 22 having a sensing device 32 may be different than the dies for forging the track links 22 that do not have a sensing device 32. For example, the dies for forging track links 22 having a sensing device 32 may include the shape of the protrusion 76, while the dies for forging track links 22 that do not have a sensing device 32 may not include the shape of the protrusion 76. Thus, less material may be used in forging track links 22 that do not have a sensing device 32. However, it is understood that an entirety of the track links 22 of track assembly 12 may include protrusion 76 regardless of whether a respective track link 22 includes sensing device 32.
A step406may include forming a flat surface78on protrusion76. Forming the flat surface78may include forging the flat surface78as the link body50is forged (e.g., the dies may include a shape of the protrusion76and flat surface78). Forming the flat surface78may also include machining the flat surface78on protrusion76after link body50has been forged. For example, material of protrusion76may be removed to form flat surface78. In some embodiments, flat surface78may be formed by a combination of forging and machining. A step408may include forming a cavity62into the flat surface78of protrusion76. For example, material of link body50at protrusion76may be removed to form cavity62by machining. Passage66may also be formed during the forming of cavity62. In some embodiments, flat surface78may be formed after cavity62has been formed. For example, after link body50has been forged, cavity62may be formed and then flat surface78may be formed around cavity62on protrusion76. It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed method and system without departing from the scope of the disclosure. Other embodiments of the method and system will be apparent to those skilled in the art from consideration of the specification and practice of the systems disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims and their equivalents. | 18,985 |
11858567 | DETAILED DESCRIPTION The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Although exemplary embodiment is described as using a plurality of units to perform the exemplary process, it is understood that the exemplary processes may also be performed by one or plurality of modules. Additionally, it is understood that the term controller/control unit refers to a hardware device that includes a memory and a processor and is specifically programmed to execute the processes described herein. The memory is configured to store the modules and the processor is specifically configured to execute said modules to perform one or more processes which are described further below. Unless specifically stated or obvious from context, as used herein, the term “about” is understood as within a range of normal tolerance in the art, for example within 2 standard deviations of the mean. “About” can be understood as within 10%, 9%, 8%, 7%, 6%, 5%, 4%, 3%, 2%, 1%, 0.5%, 0.1%, 0.05%, or 0.01% of the stated value. Unless otherwise clear from the context, all numerical values provided herein are modified by the term “about.” Hereinafter, a movable object according to exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Movable Object FIG.1illustrates a perspective view of a movable object according to an exemplary embodiment of the present disclosure, andFIG.2illustrates a first enlarged perspective view of one of drive parts in a movable object according to an exemplary embodiment of the present disclosure. In addition,FIG.3illustrates a second enlarged perspective view of one of drive parts in a movable object according to an exemplary embodiment of the present disclosure. As illustrated inFIG.1, a movable object10according to an exemplary embodiment of the present disclosure may include an upper frame100disposed on an upper portion of the movable object, and a drive part200disposed under the upper frame100and connected to the upper frame100. The upper frame100may be formed in various shapes. \For example, as illustrated inFIG.1, the upper frame100may have a plate shape. The movable object10according to exemplary embodiments of the present disclosure may be used for various purposes. For example, the movable object10may be used to transport goods. In particular, the upper frame100may receive and support the goods to be transported. The drive part200may be configured to allow the movable object10to move. In particular, the drive part200may have a plurality of drive parts. For example, four drive parts200may be provided on the movable object10. As illustrated inFIG.1, the drive part200may include first to fourth drive parts. 
Particularly, the first to fourth drive parts may be provided on peripheral areas of a bottom surface of the upper frame100. For example, as illustrated inFIG.1, when the bottom surface of the upper frame100has a rectangular shape, the first to fourth drive parts may be adjacent to corners of the bottom surface of the upper frame100, respectively. In addition, when the movable object10includes the plurality of drive parts200, the drive parts200may have substantially the same structure so that they may be compatible with each other. For example, as illustrated inFIG.1, when the four drive parts200are provided, the first to fourth drive parts may have substantially the same structure so that they may be compatible with each other. In particular, only one type of drive part may be required to manufacture the movable object10, which thus improves the manufacturability of the movable object10according to exemplary embodiments of the present disclosure. Those skilled in the art to which the present disclosure pertains may determine whether the drive parts are compatible with each other. Hereinafter, the drive part in the movable object according to exemplary embodiments of the present disclosure will be described in detail. As illustrated inFIGS.1to3, the drive part200may include a first actuator210connected to the upper frame100. For example, the first actuator210may be connected to the bottom surface of the upper frame100. In addition, the first actuator210may rotate on a vertical axis. The various actuator described herein may be operated by an overall controller. In addition, the drive part200may further include a first link215connected to the first actuator210. The first link215may be rotatable by a rotational force received from the first actuator210. More specifically, the first link215may be rotatable on the vertical axis by the first actuator210. Accordingly, for example, as illustrated inFIG.1, the first link215may be connected to a lower portion of the first actuator210. According to an exemplary embodiment of the present disclosure, since the first actuator210allows the first link215to rotate on the vertical axis, the drive part200may be rotatable in parallel to the ground, and thus the movable object10may be able to create various postures. Meanwhile, the drive part200of the movable object10according to an exemplary embodiment of the present disclosure may further include a second actuator220disposed on a first side of the first link215and rotating on a horizontal axis. For example, as illustrated inFIG.1, the second actuator220may be connected to a lower portion of the first link215. In addition, the drive part200may further include a second link225facing the first link215and being rotatable on a first end portion thereof (of two end portions) facing the first link215by the second actuator220. The rotation axis of the second link225driven by the second actuator220and the rotation axis of the first link215driven by the first actuator210may be perpendicular to each other. According to an exemplary embodiment of the present disclosure, the second link225may be rotatable by the second actuator220to move in a direction away from the upper frame100or in a direction toward the upper frame100, thereby allowing the movable object10to take various postures. InFIG.1, a first end portion of the second link225(of two end portions) in a longitudinal direction thereof may be connected to the second actuator220. 
For example, as illustrated inFIG.1, the second actuator220may be connected to a first side of the first link215, and the second link225may face the first link215in an area where the first link215and the second actuator220are connected. Meanwhile, the drive part200may further include a third actuator230disposed on a first side of the second link225and rotating on the horizontal axis. For example, as illustrated inFIG.1, the third actuator230may be connected to a second end portion of the second link225in the longitudinal direction. In addition, the drive part200may further include a third link235facing the second link225, and being rotatable on a first end portion thereof (of two end portions) facing the second link225by the third actuator230. The rotation axis of the third link235driven by the third actuator230and the rotation axis of the first link215driven by the first actuator210may be perpendicular to each other, and the rotation axis of the third link235driven by the third actuator230and the rotation axis of the second link225driven by the second actuator220may be parallel to each other. According to an exemplary embodiment of the present disclosure, the third link235may be rotatable by the third actuator230to move in the direction away from the upper frame100or in the direction toward the upper frame100, thereby allowing the movable object10to take various postures. In other words, according to exemplary embodiments of the present disclosure, by combining the rotational motions of the first link215, the second link225, and the third link235, various postures of the movable object10may be achieved. For example, as illustrated inFIG.1, the third actuator230may be provided on a first side (of two sides) of the second link225, and the third link235may face the second link225in an area where the second link225and the third actuator230are connected. FIG.4illustrates an enlarged side view of a connection structure of a third actuator and related components provided in a drive part of a movable object according to an exemplary embodiment of the present disclosure, andFIG.5illustrates an enlarged cross-sectional view of a connection structure of a third actuator and related components provided in a drive part of a movable object according to an exemplary embodiment of the present disclosure. As illustrated inFIGS.1to5, the drive part200in the movable object10according to an exemplary embodiment of the present disclosure may further include a fourth actuator240and a first wheel250rotatable by the fourth actuator240. In other words, the first wheel250may be configured to receive power from the fourth actuator240to perform rotational motion. According to an exemplary embodiment of the present disclosure, the first wheel250may be provided on a first end portion (of two end portions) of the third link235facing the second link225. Meanwhile, as illustrated inFIGS.4and5, a first side of the third actuator230may be connected to the second link225, and a second side of the third actuator230may be connected to the third link235. The first wheel250may extend through the second link225, the third actuator230, and the third link235. More specifically, the first wheel250may include a first rotating member252provided on the outside of the second link225, a second rotating member254provided on the outside of the third link235, and a rotating shaft256that connects the first rotating member252and the second rotating member254and extends through the third actuator230. 
As illustrated inFIGS.1to3, according to an exemplary embodiment of the present disclosure, the fourth actuator240may be fixed into the second link225. More specifically, the fourth actuator240may be spaced apart from the first wheel250in a direction toward the second actuator220. An empty space in which the fourth actuator240is mounted may be formed in the second link225. According to another exemplary embodiment of the present disclosure, the fourth actuator240may be fixed into the third link235. In particular, the fourth actuator240may be spaced apart from the first wheel250in a direction toward a second wheel260(seeFIG.1, etc.). Referring toFIGS.1to3, the drive part200may further include the second wheel260provided on the second end portion of the third link235. More specifically, a first end portion of the third link235facing the second link225may be provided with the first wheel250, and a second end portion of the third link235may be provided with the second wheel260. As described above, since the fourth actuator240and the first wheel250are spaced apart from each other, an additional power transmission device may be required to transmit a rotational force of the fourth actuator240to the first wheel250. Thus, as illustrated inFIGS.1to5, the drive part200may further include a first pulley270that surrounds a rotating shaft of the fourth actuator240and an outer circumference of the first wheel250, and may be configured to transmit the rotational force of the fourth actuator240to the first wheel250. More specifically, the first pulley270may surround an outer circumference of the first rotating member252of the first wheel250. InFIGS.4and5, the first pulley270may face an outer surface of the second link225. Meanwhile, the drive part200may further include a second pulley280that surrounds the outer circumference of the first wheel250and an outer circumference of the second wheel260, and may be configured to transmit a rotational force of the first wheel250to the second wheel260. More specifically, the second pulley280may surround an outer circumference of the second rotating member254of the first wheel250. InFIGS.4and5, the second pulley280may face an outer surface of the third link235, and be interposed between the third link235and the first wheel250. In other words, according to an exemplary embodiment of the present disclosure, when the fourth actuator240is driven, the first wheel250and the second wheel260may rotate together. The rotational force generated by the driving of the fourth actuator240may be transmitted in the order of the first pulley270, the first rotating member252of the first wheel, the rotating shaft256of the first wheel, the second rotating member254of the first wheel, the second pulley280, and the second wheel260. Meanwhile, to maximize the degree of freedom in the postures of the movable object10according to an exemplary embodiment of the present disclosure, the first link215may be rotatable at 360 degrees by the first actuator210, and the third link235may be rotatable on a first end portion thereof facing the second link225at 360 degrees. 
Meanwhile, according to an exemplary embodiment of the present disclosure, a radius of curvature of the first pulley270in an area where the first pulley270surrounds the rotating shaft of the fourth actuator240may be less than that of the first pulley270in an area where the first pulley270surrounds the outer circumference of the first wheel250, that is, the first pulley270surrounds the outer circumference of the first rotating member252. This reduces a rotational speed and increases a torque when the rotational force of the fourth actuator240is transmitted to the first wheel250. On the other hand, according to an exemplary embodiment of the present disclosure, a radius of curvature of the second pulley280in an area where the second pulley280surrounds the outer circumference of the first wheel250, that is, the second pulley280surrounds the outer circumference of the second rotating member254may correspond to that of the second pulley280in an area where the second pulley280surrounds the outer circumference of the second wheel260. In particular, that the two radii of curvature correspond to each other may be interpreted as follows: i) the two radii of curvature are the same; and ii) there is no significant difference between the two radii of curvature so that a rotational angular velocity of the first wheel250and a rotational angular velocity of the second wheel260may be substantially the same. Meanwhile, referring toFIG.1, the movable object10according to an exemplary embodiment of the present disclosure may further include an extension part300provided on a top surface of the upper frame100, and being rotatable in a state of being fixed to the upper frame100. The extension part300may include a first extension member310that extends in one direction, and a second extension member320that extends in parallel to the first extension member310. The extension part300may assist the movable object10in transporting goods. In other words, after the first extension member310and the second extension member320are inserted between the goods and the ground, the first extension member310and the second extension member320of the extension part300may be rotated in a direction indicated by a double-headed arrow inFIG.1in a state in which a first end portion of the extension part300is fixed to the upper frame100, thereby moving the goods on the ground to the top surface of the upper frame100. Thus, with the aid of the extension part300, the movable object10according to an exemplary embodiment of the present disclosure may carry out the transportation of goods. Meanwhile, the movable object10according to another exemplary embodiment of the present disclosure may not include the second wheel260. In particular, when the movable object10needs to travel by moving using the wheels, the movable object10may travel in a state in which the first wheels250come into contact with the ground, and when the movable object10moves on an uneven ground, the second end portion of the third link235at which the first wheel250is not provided may come into contact with the ground so that the movable object10may mimic a human or animal's gait. Meanwhile, to prevent the second link225from interfering with the other components of the movable object10, especially, the upper frame100of the movable object10, during the rotation of the second link225by the second actuator220, the second link225may be spaced apart from the upper frame100. 
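The radius-of-curvature relationships described above for the first pulley270and the second pulley280amount to an ordinary belt-drive ratio: the first stage reduces rotational speed and multiplies torque in proportion to its two wrap radii, while the second stage is roughly one-to-one so the first and second wheels rotate at substantially the same angular velocity. The following is only an illustrative sketch assuming an ideal, lossless belt drive; all radius, speed, and torque values are hypothetical and not taken from the disclosure:

```python
def belt_stage(input_speed: float, input_torque: float,
               drive_radius: float, driven_radius: float) -> tuple[float, float]:
    """Ideal belt stage: belt speed and transmitted power are equal on both sides."""
    ratio = driven_radius / drive_radius
    return input_speed / ratio, input_torque * ratio

# First pulley 270: small radius at the fourth actuator's shaft, larger radius at the
# first wheel 250, so speed drops and torque rises. Second pulley 280: matching radii,
# so the second wheel 260 turns at substantially the same angular velocity as the first wheel.
w1, t1 = belt_stage(input_speed=100.0, input_torque=2.0, drive_radius=0.02, driven_radius=0.08)
w2, t2 = belt_stage(input_speed=w1, input_torque=t1, drive_radius=0.08, driven_radius=0.08)
```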
In other words, the second link225may be spaced apart from the upper frame100in a vertical direction, and thus the second link225may rotate without interference with the upper frame100. FIG.6illustrates a perspective view of a first exemplary posture that may be taken by a movable object according to an exemplary embodiment of the present disclosure, andFIG.7illustrates a perspective view of a second exemplary posture that may be taken by a movable object according to an exemplary embodiment of the present disclosure.FIG.8illustrates a perspective view of a third exemplary posture that may be taken by a movable object according to an exemplary embodiment of the present disclosure, andFIG.9illustrates a perspective view of a fourth exemplary posture that may be taken by a movable object according to an exemplary embodiment of the present disclosure. As illustrated inFIGS.6to9, the movable object10according to an exemplary embodiment of the present disclosure may include the first to third actuators210,220, and230(seeFIG.1) causing the first to third links215,225, and235(seeFIG.1) to rotate independently in addition to the fourth actuator240(seeFIG.1) for driving the wheels250and260(seeFIG.1) and thus, the movable object10may move while maintaining various types of postures according to various situations. For example, when it is necessary to maintain the center of gravity of the movable body10low, the movable body10may take the posture illustrated inFIG.6or9by operating the first to third links215,225, and235using the first to third actuators210,220, and230. As another example, when it is necessary to maintain the center of gravity of the movable body10high, the movable body10may take the posture illustrated inFIG.8by operating the first to third links215,225, and235using the first to third actuators210,220, and230. As another example, the movable object10may also take a posture similar to a human's kneeling posture as illustrated inFIG.7. In addition, according to an exemplary embodiment of the present disclosure, the first to third links215,225, and235may be operated to move the movable object10in a state in which only the first wheels250come into contact with the ground. In some cases, the first to third links215,225, and235may be operated to move the movable object10in a state in which both the first wheels250and the second wheels260come into contact with the ground. In addition to the above-described features, the movable object10according to exemplary embodiments of the present disclosure may take many different types of postures. As set forth above, the movable object according to exemplary embodiments of the present disclosure may have a novel structure, thereby improving durability with respect to vibration and load and performing other tasks while moving using the wheels. Hereinabove, although the present disclosure has been described with reference to exemplary embodiments and the accompanying drawings, the present disclosure is not limited thereto, but may be variously modified and altered by those skilled in the art to which the present disclosure pertains without departing from the spirit and scope of the present disclosure claimed in the following claims. | 20,233 |
11858568 | DETAILED DESCRIPTION It is understood that the term “vehicle” or “vehicular” or other similar term as used herein is inclusive of motor vehicles in general such as passenger automobiles including sports utility vehicles (SUV), buses, trucks, various commercial vehicles, watercraft including a variety of boats and ships, aircraft, and the like, and includes hybrid vehicles, electric vehicles, plug-in hybrid electric vehicles, hydrogen-powered vehicles and other alternative fuel vehicles (e.g. fuels derived from resources other than petroleum). As referred to herein, a hybrid vehicle is a vehicle that has two or more sources of power, for example both gasoline-powered and electric-powered vehicles. Although exemplary embodiment is described as using a plurality of units to perform the exemplary process, it is understood that the exemplary processes may also be performed by one or plurality of modules. Additionally, it is understood that the term controller/control unit refers to a hardware device that includes a memory and a processor. The memory is configured to store the modules and the processor is specifically configured to execute said modules to perform one or more processes which are described further below. Furthermore, control logic of the present disclosure may be embodied as non-transitory computer readable media on a computer readable medium containing executable program instructions executed by a processor, controller/control unit or the like. Examples of the computer readable mediums include, but are not limited to, ROM, RAM, compact disc (CD)-ROMs, magnetic tapes, floppy disks, flash drives, smart cards and optical data storage devices. The computer readable recording medium can also be distributed in network coupled computer systems so that the computer readable media is stored and executed in a distributed fashion, e.g., by a telematics server or a Controller Area Network (CAN). The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Unless specifically stated or obvious from context, as used herein, the term “about” is understood as within a range of normal tolerance in the art, for example within 2 standard deviations of the mean. “About” can be understood as within 10%, 9%, 8%, 7%, 6%, 5%, 4%, 3%, 2%, 1%, 0.5%, 0.1%, 0.05%, or 0.01% of the stated value. Unless otherwise clear from the context, all numerical values provided herein are modified by the term “about.” The above-described objects, features, and advantages of the present disclosure will be described in detail with reference to the accompanying drawings, and accordingly, those of ordinary skill in the art to which the present disclosure pertains will be able to fully understand and easily embody the technical concept of the present disclosure. 
In describing the exemplary embodiments of the present disclosure, detailed description of well-known technologies related to the present disclosure will be reduced or omitted in the case where it is determined that it obscures the subject matter of the present disclosure in unnecessary detail. FIG.2is a block diagram illustrating the configuration of an apparatus for controlling rear wheel steering according to the present disclosure, andFIG.3is a diagram illustrating a rear wheel steering control state in accordance with time according to the present disclosure. Hereinafter, with reference toFIGS.2and3, an apparatus for controlling rear wheel steering according to an exemplary embodiment of the present disclosure will be described. An apparatus for controlling rear wheel steering according to the present disclosure minimizes a yaw rate-roll delay time during vehicle turning by driver's steering to provide a sense of unity during the turning to the driver. Further, to rapidly generate a roll, a lateral force of a rear wheel tire should be generated rapidly. For this, according to the present disclosure, rear wheel steering may be adjusted based on a prediction, made in advance in an initial state (transient state) of driver's steering, of the lateral acceleration desired by the driver, and the lateral acceleration desired by the driver in the transient state may be generated rapidly. Thereafter, when the vehicle enters into a normal state, a control amount may be reduced to minimize heterogeneity of the rear wheel steering control. The apparatus10for controlling rear wheel steering according to the present disclosure may include a steering intention determinator11, a turning state estimator12, a normal state lateral acceleration predictor13, and a rear wheel steering angle calculator14. Each of the components may be operated by a controller. The apparatus10for controlling rear wheel steering may be implemented as a partial configuration and function of an electronic control unit (ECU), or it may be separately configured. The apparatus10for controlling the rear wheel steering may be configured to control or adjust a rear wheel steering actuator by successively calculating a lateral acceleration and a rear wheel steering angle using sensor measurement values of a steering torque sensor, a steering angle sensor, and a wheel speed sensor. First, the steering intention determinator11may be configured to determine a turning intention of a driver based on steering torque information acquired from the steering torque sensor. Accordingly, the steering intention determinator11prevents intervention of the rear wheel steering control when steering occurs due to a disturbance, such as a bump or pothole, that carries no turning intention. The turning state estimator12may be configured to calculate the steering speed intended by a driver by differentiating steering angle information acquired from the steering angle sensor to determine whether the current state is a turning transient state or a turning normal state during driver's steering. Further, the normal state lateral acceleration predictor13may be configured to predict a normal state lateral acceleration desired by the driver based on the steering angle and wheel speed information acquired from the steering angle sensor and the wheel speed sensor.
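The split into items11-14can be pictured as a small set of cooperating modules fed by the three sensors. The following Python sketch is illustrative only; the class names, constructor parameters, and threshold values are hypothetical and are not part of the disclosure:

```python
from dataclasses import dataclass


@dataclass
class SensorInputs:
    steering_torque: float  # steering torque sensor
    steering_angle: float   # steering angle sensor
    wheel_speed: float      # wheel speed sensor


class SteeringIntentionDeterminator:
    """Item 11: detects a turning intention from steering torque, ignoring bump/pothole disturbances."""

    def __init__(self, torque_threshold: float) -> None:
        self.torque_threshold = torque_threshold

    def has_turning_intention(self, s: SensorInputs) -> bool:
        return abs(s.steering_torque) >= self.torque_threshold


class TurningStateEstimator:
    """Item 12: differentiates the steering angle to classify transient vs. normal turning."""

    def __init__(self, speed_threshold: float, dt: float) -> None:
        self.speed_threshold = speed_threshold
        self.dt = dt
        self._prev_angle = 0.0

    def is_transient(self, s: SensorInputs) -> bool:
        steering_speed = (s.steering_angle - self._prev_angle) / self.dt
        self._prev_angle = s.steering_angle
        return abs(steering_speed) >= self.speed_threshold


class NormalStateLateralAccelerationPredictor:
    """Item 13: predicts the lateral acceleration desired by the driver (see expression 1 below)."""


class RearWheelSteeringAngleCalculator:
    """Item 14: converts the prediction into a rear wheel steering angle command."""
```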
The rear wheel steering angle calculator14may then be configured to calculate the rear wheel steering angle for rapidly generating a rear wheel tire lateral force based on the driver's steering angle in the turning transient state and lateral acceleration information desired by the driver. Thereafter, when the vehicle enters into the turning normal state, the rear wheel steering control amount may be reduced to reduce heterogeneity caused by the rear wheel steering control. On the other hand, the lateral acceleration size prediction and the rear wheel steering angle size determination will be described in more detail through discrimination of the turning transient state and the turning normal state from each other with reference toFIG.3. First, in the turning transient state, the steering intention determinator11may be configured to detect the change of the driver's steering torque and operate the apparatus for controlling the rear wheel steering. Further, the turning state estimator12may be configured to detect that the steering speed is rapid through steering angle differentiation, and determine the entry into the turning transient state. Then, the normal state lateral acceleration predictor13may be configured to predict the size of the lateral acceleration desired by the driver, and the size of the lateral acceleration may be predicted by the following expression. Ax=(vx^2/(L*(1+vx*κ)))*λ*δsw (Mathematical expression 1), where Ax: predicted lateral acceleration, vx: vehicle speed (wheel speed), L: wheelbase, κ: vehicle characteristic coefficient, δsw: steering angle, λ: steering ratio. As described above, the lateral acceleration may be predicted, and finally, the rear wheel steering angle calculator14may be configured to calculate the rear wheel steering angle for generating a lateral force of a rear wheel tire to rapidly generate the lateral acceleration. Further, in the turning normal state, the steering intention determinator11may be configured to detect the driver's steering angle and the remaining steering torque, and continuously operate the apparatus for controlling the rear wheel steering to cope with the driver's additional steering. The turning state estimator12may be configured to recognize that the steering speed is very low (e.g., less than a predetermined threshold), and determine the entry into the turning normal state. Accordingly, the normal state lateral acceleration predictor13does not predict the lateral acceleration size desired by the driver in the turning normal state. Further, the rear wheel steering angle calculator14may be configured to reduce the rear wheel steering control amount remaining to reduce the driver's heterogeneity caused by the rear wheel steering control in the turning normal state. Meanwhile, the rear wheel steering angle calculation in the turning transient state and in the turning normal state may be performed as follows. First, in the turning transient state, the rear wheel steering angle calculator14may be configured to adjust the rear wheel steering angle in the same direction as the driver's steering angle so that the lateral force to be generated in the turning normal state is brought up in advance at an initial stage of the steering, and the rear wheel steering angle size may be determined by the following expression.
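A direct reading of Mathematical expression 1 (as reconstructed above) can be written as a short function. This is only a sketch; the parameter values in the example call are hypothetical and chosen purely for illustration:

```python
def predict_lateral_acceleration(vx: float, steering_angle: float, wheelbase: float,
                                 kappa: float, steering_ratio: float) -> float:
    """Mathematical expression 1: Ax = (vx^2 / (L * (1 + vx * kappa))) * lambda * delta_sw."""
    return (vx ** 2) / (wheelbase * (1.0 + vx * kappa)) * steering_ratio * steering_angle


# Illustrative, hypothetical values: 15 m/s wheel speed, 0.75 rad at the steering wheel,
# 2.7 m wheelbase, and a 1:15 steering ratio.
ax = predict_lateral_acceleration(vx=15.0, steering_angle=0.75, wheelbase=2.7,
                                  kappa=0.002, steering_ratio=1.0 / 15.0)
```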
if sign(δsw′·δsw)>0, δrws=sign(Ay)·Ay·Crws/(τs+1); otherwise, δrws=0 (Mathematical expression 2). Further, if the rear wheel control amount remains in the turning normal state, the steering heterogeneity may be provided to the driver, and thus the remaining rear wheel steering control amount may be gradually reduced. In particular, the rear wheel steering angle size may be determined by the following expression. δ=−k·δrws (Mathematical expression 3). In mathematical expressions 2 and 3, coefficients are defined as follows: Crws: rear wheel steering gain; τ: rear wheel steering time coefficient; k: control amount reduction coefficient. As described above, the apparatus for controlling rear wheel steering according to the present disclosure performs the control operation through the above-described configuration. Hereinafter, based on this, a method for controlling rear wheel steering according to the present disclosure will be described with reference toFIG.4. The method described herein below may be executed by a controller. The apparatus10for controlling the rear wheel steering may be configured to receive information from sensors, such as a steering torque sensor, a steering angle sensor, and a wheel speed sensor (S11), and the steering intention determinator11may be configured to compare the steering torque with a torque threshold value Tq_Thd (S12). In response to determining that the steering torque is equal to or greater than the torque threshold value as the result of the comparison, a driver turning intention may be detected and the rear wheel steering control may begin. However, in response to determining that the steering torque is less than the torque threshold value, the rear wheel steering control may end. Through the rear wheel steering control start, the turning state estimator12may be configured to calculate the steering speed (S13). Further, the turning state estimator12may be configured to compare the steering speed with a speed threshold value Spd_Thd (S14). In response to determining that the steering speed is equal to or greater than the speed threshold value as the result of comparing the steering speed with the speed threshold value, the turning state estimator12may be configured to determine that the current state is the turning transient state. In response to determining that the steering speed is less than the speed threshold value, the turning state estimator12may be configured to determine that the current state is the turning normal state. When the turning transient state is determined as the result, the normal state lateral acceleration predictor13may be configured to calculate the lateral acceleration desired by the driver using the steering angle and the wheel speed (S15). Further, based on the lateral acceleration calculated at S15, the rear wheel steering angle calculator14may be configured to calculate the rear wheel steering angle (S16), and operate the rear wheel actuator in accordance with the calculated value. In contrast, when the turning normal state is determined as the result of the determination at S14, the rear wheel steering angle calculator14may be configured to calculate the corresponding rear wheel steering angle, and reduce the remaining rear wheel steering control amount (S17). Further, it may be possible to combine the above-described control method according to the present disclosure with the existing control method.
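The flow of S11-S17 together with mathematical expressions 2 and 3 can be collected into one control step. The following is a minimal sketch under stated assumptions: a discrete first-order lag stands in for Crws/(τs+1), the rear wheel command is built in the same direction as the driver's steering as described above, and all threshold, gain, and vehicle values are hypothetical:

```python
import math


def rear_wheel_steering_step(torque, steering_angle, prev_steering_angle,
                             wheel_speed, prev_rws_angle,
                             dt=0.01, tq_thd=1.0, spd_thd=0.1,
                             c_rws=0.02, tau=0.1, k=0.05,
                             wheelbase=2.7, kappa=0.002, steering_ratio=1.0 / 15.0):
    """One control cycle following S11-S17; returns the commanded rear wheel steering angle."""
    # S11-S12: below the torque threshold there is no turning intention -> end the control.
    if abs(torque) < tq_thd:
        return 0.0
    # S13-S14: the steering speed decides between the turning transient and normal states.
    steering_speed = (steering_angle - prev_steering_angle) / dt
    if abs(steering_speed) >= spd_thd:
        # S15: predict the lateral acceleration desired by the driver (expression 1).
        ay = (wheel_speed ** 2) / (wheelbase * (1.0 + wheel_speed * kappa)) \
            * steering_ratio * steering_angle
        # S16: build the rear wheel angle in the driver's steering direction through a
        # first-order lag, a discrete stand-in for Crws/(tau*s + 1) in expression 2.
        target = math.copysign(abs(ay) * c_rws, steering_angle)
        alpha = dt / (tau + dt)
        return prev_rws_angle + alpha * (target - prev_rws_angle)
    # S17: turning normal state -> gradually reduce the remaining control amount (expression 3).
    return prev_rws_angle - k * prev_rws_angle


# Illustrative call with hypothetical values (SI units, angles in radians).
cmd = rear_wheel_steering_step(torque=3.0, steering_angle=0.10, prev_steering_angle=0.08,
                               wheel_speed=20.0, prev_rws_angle=0.0)
```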
In other words, with reference toFIG.5, by combining the out-of-phase control at low speed with the in-phase control at high speed, as described above with reference to the drawings, the in-phase control may be performed after the rapid control in the turning transient state based on the driver's steering angle. As described above, according to the apparatus and the method for controlling the rear wheel steering according to the present disclosure, as the lateral force is generated rapidly at the rear wheel at an initial stage of steering, the delay time of the yaw rate and roll occurrence may be reduced as shown inFIG.6, and thus the sense of unity during turning may be provided to the driver. Further, as shown inFIG.7, at an initial stage of steering, the lateral slip size and the variation width of the front and rear wheels may be reduced, and thus more stable turning becomes possible to improve the vehicle stability. Additionally, by reducing the rear wheel steering control amount in the turning normal state, heterogeneity that the driver feels due to the rear wheel steering may be minimized. While the present disclosure has been described with reference to the exemplified drawings, it will be apparent to those of ordinary skill in the art that the present disclosure is not limited to the described exemplary embodiments, and various changes and modifications may be made without departing from the spirit and scope of the present disclosure. Accordingly, such changes and modifications should belong to the claims of the present disclosure, and the right of the present disclosure should be construed based on the appended claims. | 15,247 |
11858569 | DETAILED DESCRIPTION OF THE DISCLOSURE A. First Embodiment A1. Configuration of Vehicle10: FIGS.1(A)-1(C)andFIG.2show explanatory diagrams which illustrate a vehicle10as one embodiment.FIG.1(A)shows a right side view of the vehicle10,FIG.1(B)shows a top view of the vehicle10,FIG.1(C)shows a bottom view of the vehicle10, andFIG.2shows a rear view of the vehicle10. InFIGS.1(A)-1(C)andFIG.2, the vehicle10is shown that is located on a horizontal ground GL (FIG.1(A)), and thus does not lean. InFIGS.1(A)-1(C)andFIG.2, six directions DF, DB, DU, DD, DR, and DL are shown. A front direction DF is a front direction (i.e direction of forward movement) of the vehicle10, and a back direction DB is opposite to the front direction DF. An upward direction DU is a vertically upward direction, and a downward direction DD is a vertically downward direction (i.e. a direction opposite to the upward direction DU). The vertically downward direction is the direction of gravity. The right direction DR is a right direction viewed from the vehicle10traveling in the front direction DF, and the left direction DL is opposite to the right direction DR. All the directions DF, DB, DR, and DL are horizontal directions. The right and left directions DR and DL are perpendicular to the front direction DF. In this embodiment, the vehicle10is a small single-seater vehicle. The vehicle10(FIGS.1(A) and1(B)) is a tricycle which has a vehicle body90, a front wheel12F, a left rear wheel12L, and a right rear wheel12R. The front wheel12F is an example turn wheel, and is located at the center of the vehicle10in its width direction. The turn wheel is a wheel that can turn in the width direction of the vehicle10(i.e. to right direction and to left direction). The traveling direction of the turn wheel can turn to right and left relative to the front direction DF. In this embodiment, the front wheel12F is turnably supported on the vehicle body90. The rear wheels12R,12L are drive wheels. The rear wheels12R,12L are spaced apart from each other symmetrically with regard to the center of the vehicle10in its width direction. The vehicle body90(FIG.1(A)) has a main body20. The main body20has a bottom portion20b, a front wall portion20acoupled to the bottom portion20bon the front direction DF side, a rear wall portion20ccoupled to the bottom portion20bon the back direction DB side, and a support portion20dwhich extends from the top of the rear wall portion20ctoward the back direction DB. For example, the main body20has a metal frame, and panels attached to the frame. The vehicle body90further has a seat11attached on the bottom portion20b, an accelerator pedal45and a brake pedal46located on the front direction DF side of the seat11, a controller100and a battery120attached on the bottom portion20b, a front wheel support device41attached to the end on the upward direction DU side of the front wall portion20a, and a steering wheel41aattached to the front wheel support device41. Other members (e.g. roof, headlight, etc.) may be attached to the main body20although they are not shown in the figures. The vehicle body90includes the members attached to the main body20. The front wheel support device41(FIG.1(A)) is a device that supports the front wheel12F turnably about a turning axis Ax1. The front wheel support device41has a front fork17, a bearing68, and a steering motor65. The front fork17, which rotatably supports the front wheel12F, is a telescopic fork having a coil spring and a shock absorber, for example. 
The bearing68couples the front fork17to the front wall portion20aof the main body20. The bearing68supports the front fork17(and thus the front wheel12F) turnably about the turning axis Ax1to right and left relative to the vehicle body90. A turnable range of the front fork17may be a predetermined angular range (e.g. a range of less than 180 degrees). For example, the angular range may be limited by the front fork17coming into contact with another portion of the vehicle body90. The steering motor65is an electric motor, and is coupled to the front wall portion20aof the main body20and to the front fork17. The steering motor65generates a torque which causes the front fork17(and thus the front wheel12F) to turn in the width direction (i.e. to right direction and to left direction). In this manner, the steering motor65is configured to apply a turning torque, which is a torque for controlling the turn of the front wheel12F in the width direction, on the front wheel12F (hereinafter sometimes referred to as turning actuator65). The steering wheel is a member which can rotate to right and left directions. A rotational angle (sometimes referred to as input angle) of the steering wheel41arelative to a predetermined rotational position (referred to as forward movement-rotational position) corresponding to forward traveling is example turn input information indicating turning direction and degree of turn. In this embodiment, “input angle=0” indicates forward traveling, “input angle>0” indicates a right turn, and “input angle<0” indicates a left turn. The magnitude (i.e. absolute value) of input angle indicates the degree of turn. The driver can input turn input information by handling the steering wheel41a. It should be noted that the steering wheel41aand the front fork17are not coupled mechanically in this embodiment. However, an elastic body (e.g. spring such as coil spring and flat spring, resin such as rubber and silicon) may couple the steering wheel41ato the front fork17. A wheel angle Aw (FIG.1(B)) is an angle indicating the direction of the front wheel12F relative to the vehicle body90. In this embodiment, the wheel angle Aw is an angle of traveling direction D12of the front wheel12F relative to the front direction DF. The wheel angle Aw represents an angle about an axis parallel to the upward direction of the vehicle body90(which is the same as a vertically upward direction DU when the vehicle body90does not lean relative to the vertically upward direction DU). The traveling direction D12is perpendicular to the rotational axis Axw1of the front wheel12F. In this embodiment, “Aw=0” indicates that “direction D12=front direction DF.” “Aw>0” indicates that the direction D12turns toward the right direction DR side (i.e. turning direction=right direction DR). “Aw<0” indicates that the direction D12turns toward the left direction DL side (i.e. turning direction=left direction DL). The wheel angle Aw represents an angle at which the front wheel12F turns. If the front wheel12F is steered, the wheel angle Aw corresponds to a so-called steering angle. The steering motor65is controlled by the controller100(FIG.1(A)). When the turning torque generated by the steering motor65is smaller, the direction D12of the front wheel12F is allowed to turn to left or right independently of the input angle. The control of steering motor65will be discussed in detail later. An angle CA inFIG.1(A)is a so-called caster angle. 
The caster angle CA is an angle between the upward direction of the vehicle body90(which is the same as a vertically upward direction DU when the vehicle body90does not lean relative to the vertically upward direction DU) and a direction along the turning axis Ax1toward the vertically upward direction DU side. In this embodiment, the caster angle CA is larger than zero. Accordingly, the direction along the turning axis Ax1toward the vertically upward direction DU side is tilted diagonally backward. As shown inFIG.1(A), in this embodiment, the intersection point P2between the turning axis Ax1of the front wheel support device41and the ground GL is located on the front direction DF side of the contact center P1of the front wheel12F with the ground GL. The distance Lt in the back direction DB between these points P1, P2is referred to as a trail. A positive trail Lt indicates that the contact center P1is located on the back direction DB side of the intersection point P2. As shown inFIG.1(A),FIG.1(C), the contact center P1represents a gravity center of contact area Cal between the front wheel12F and the ground GL. The gravity center of the contact area is a position of gravity center on the assumption that its mass is distributed evenly across the contact area. A contact center PbR of contact area CaR between the right rear wheel12R and the ground GL, and a contact center PbL of contact area CaL between the left rear wheel12L and the ground GL are identified in a similar manner. As shown inFIG.2, the two rear wheels12R,12L are rotatably supported on a rear wheel support80. The rear wheel support80has a link mechanism30, a lean motor25mounted on the top of the link mechanism30, a first support portion82attached onto the top of the link mechanism30, and a second support portion83attached to the front of the link mechanism30(FIG.1(A)). For purposes of illustration, inFIG.1(A), portions of the rear wheel support80which are hidden by the right rear wheel12R are also depicted in solid lines. InFIG.1(B), the rear wheel support80, rear wheels12R,12L, and connector rod75(described later) which are hidden by the main body20are depicted in solid lines. InFIG.1(A)-FIG.1(C), the link mechanism30is depicted simply. The first support portion82(FIG.2) includes a plate-like section which extends parallel to the right direction DR on the upward direction DU side of the rear wheels12R,12L. The second support portion83(FIG.1(A),FIG.1(B)) is located on the front direction DF side of the link mechanism30between the left rear wheel12L and the right rear wheel12R. The right rear wheel12R (FIG.1(B),FIG.2) is connected to a right drive motor51R. The right drive motor51R is an electric motor, and is secured to a right section of the rear wheel support80. A rotational axis Axw2(FIG.2) of the right drive motor51R is the same as that of the right rear wheel12R. The configurations of the left rear wheel12L and the left drive motor51L are similar to those of the right rear wheel12R and the right drive motor51R, respectively. These drive motors51L,51R are in-wheel motors which directly drive the rear wheels12R,12L. Hereinafter, the left drive motor51L and the right drive motor51R may be collectively referred to as drive system51S. FIG.1(A)-FIG.1(C),FIG.2show a state where the vehicle body90does not lean but stands upright on the horizontal ground GL (that is, a state where a roll angle Ar described later is equal to zero). Hereinafter, this state is referred to as upright state. 
In this upright state, a rotational axis Axw3(FIG.2) of the left rear wheel12L and the rotational axis Axw2of the right rear wheel12R are located on the same line, and are parallel to the right direction DR. The link mechanism30(FIG.2) is a so-called parallel linkage. The link mechanism30has three longitudinal link members33L,21,33R arranged in order toward the right direction DR, and two lateral link members31U,31D arranged in order toward the downward direction DD. When the vehicle body90stands upright without leaning on the horizontal ground GL, the longitudinal link members33L,21,33R are parallel to the vertical direction, and the lateral link members31U,31D are parallel to the horizontal direction. The two longitudinal link members33L,33R, and the two lateral link members31U,31D form a parallelogram link mechanism. The center longitudinal link member21couples the centers of the lateral link members31U,31D. These link members33L,33R,31U,31D,21are mutually coupled rotatably. In this embodiment, their rotational axes are parallel to the front direction DF. The link members coupled with each other may relatively rotate about the rotational axis within a predetermined angular range (e.g. a range of less than 180 degrees). The left drive motor51L is attached to the left longitudinal link member33L. The right drive motor51R is attached to the right longitudinal link member33R. On the top of the center longitudinal link member21, the first support portion82and second support portion83(FIG.1(A)) are secured. The link members33L,21,33R,31U,31D, and the support portions82,83are made of metal, for example. In this embodiment, the link mechanism30has bearings for rotatably coupling link members. For example, a bearing38rotatably couples the lower lateral link member31D to the center longitudinal link member21, and a bearing39rotatably couples the upper lateral link member31U to the center longitudinal link member21. A plurality of other link members are also coupled by bearings although they are not specifically described here. The lean motor25, which is an example lean actuator configured to actuate the link mechanism30, is an electric motor in this embodiment. The lean motor25is coupled to the center longitudinal link member21and to the upper lateral link member31U. The rotational axis of the lean motor25is the same as that of the bearing39, and is located at the center of the vehicle10in its width direction. The lean motor25rotates the upper lateral link member31U relative to the center longitudinal link member21. This causes the vehicle10to lean in its width direction (i.e. to right direction or to left direction). Such a leaning motion is also referred to as roll motion. FIG.3(A),FIG.3(B)show schematic diagrams of the states of the vehicle10on the horizontal ground GL. These figures show simplified rear views of the vehicle10.FIG.3(A)shows the state in which the vehicle10stands upright whileFIG.3(B)shows the state in which the vehicle10leans. As shown inFIG.3(A), when the upper lateral link member31U is perpendicular to the center longitudinal link member21, all of the wheels12F,12R,12L stand upright relative to the horizontal ground GL. Also, the whole vehicle10including the vehicle body90stands upright relative to the ground GL. A vehicle body upward direction DVU in the figure represents the upward direction of the vehicle body90. With the vehicle10not leaning, the vehicle body upward direction DVU is the same as the upward direction DU. 
In this embodiment, an upward direction predetermined for the vehicle body90is used as the vehicle body upward direction DVU. As shown in the rear view ofFIG.3(B), the center longitudinal link member21rotates clockwise relative to the upper lateral link member31U, and thereby the right rear wheel12R moves toward the vehicle body upward direction DVU side while the left rear wheel12L moves toward the opposite side, relative to the vehicle body90. As a result, these wheels12F,12R,12L lean to the right direction DR side relative to the ground GL while all of the wheels12F,12R,12L have contact with the ground GL. Also, the whole vehicle10including the vehicle body90leans to the right direction DR side relative to the ground GL. The center longitudinal link member21rotates counterclockwise relative to the upper lateral link member31U, and thereby the vehicle10leans to the left direction DL side although this is not illustrated. In this manner, when the upper lateral link member31U is tilted relative to the center longitudinal link member21, one of the right rear wheel12R or left rear wheel12L moves to the vehicle body upward direction DVU side relative to the vehicle body90while the other moves in an opposite direction side to the vehicle body upward direction DVU relative to the vehicle body90. The link mechanism30can change the relative position between the left rear wheel12L and the right rear wheel12R in the vehicle body upward direction DVU. As a result, the vehicle body90leans relative to the ground GL. It should be noted that the lateral link members31U,31D are rotatably supported on the vehicle body90(via the center longitudinal link member21, the first support portion82, and a suspension system70described later). And, the rear wheels12R,12L are connected to the vehicle body90via a plurality of members including the lateral link members31U,31D. Accordingly, the distances between the rear wheels12R,12L and the vehicle body90in the vehicle body upward direction DVU are changed by rotating the lateral link members31U,31D relative to the vehicle body90. The rotational axes (bearings39,38) of the lateral link members31U,31D are located between the right rear wheel12R and the left rear wheel12L. Accordingly, when the lateral link members31U,31D rotate, the direction of movement of the right rear wheel12R is opposite to that of the left rear wheel12L. InFIG.3(B), the vehicle body upward direction DVU is tilted in the right direction DR side relative to the upward direction DU. Hereinafter, when the vehicle10is viewed in the front direction DF, the angle between the upward direction DU and the vehicle body upward direction DVU is referred to as roll angle Ar or lean angle Ar. Where “Ar>0” indicates a lean to the right direction DR side while “Ar<0” indicates a lean to the left direction DL side. When the vehicle10leans, the whole vehicle10including the vehicle body90leans to substantially the same direction. Therefore, the roll angle Ar of the vehicle body90can be considered as the roll angle Ar of the vehicle10. A control angle Ac of the link mechanism30is also shown inFIG.3(B). The control angle Ac represents an angle between the orientations of the upper lateral link member31U and center longitudinal link member21. “Ac=0” indicates that the center longitudinal link member21is perpendicular to the upper lateral link member31U. “Ac>0” indicates that the center longitudinal link member21is tilted clockwise relative to the upper lateral link member31U, as shown in the rear view ofFIG.3(B). 
“Ac<0” indicates that the center longitudinal link member21is tilted counterclockwise relative to the upper lateral link member31U although this state is not illustrated. As shown, the control angle Ac is approximately the same as the roll angle Ar when the vehicle10is located on the horizontal ground GL (i.e. the ground GL perpendicular to the vertically upward direction DU). InFIG.3(A),FIG.3(B), an axis AxL on the ground GL is a lean axis AxL. The link mechanism30and the lean motor25can cause the vehicle10to lean to right and left about the lean axis AxL. Hereinafter, the lean axis AxL may be referred to as roll axis. In this embodiment, the roll axis AxL is a straight line which passes through a contact center P1between the front wheel12F and the ground GL, and which is parallel to the front direction DF. The link mechanism30is an example lean device configured to lean the vehicle body90in the width direction of the vehicle10(sometimes referred to as lean device30). FIG.3(C),FIG.3(D)show simplified rear views of the vehicle10similarly toFIG.3(A),FIG.3(B). InFIG.3(C),FIG.3(D), the ground GLx is inclined relative to the vertically upward direction DU (higher on the right side, and lower on the left side).FIG.3(C)shows a state where the control angle Ac is equal to zero. In this state, all of the wheels12F,12R,12L stand upright relative to the ground GLx. And, the vehicle body upward direction DVU is perpendicular to the ground GLx, and is tilted in the left direction DL side relative to the vertically upward direction DU. FIG.3(D)shows a state where the roll angle Ar is equal to zero. In this state, the upper lateral link member31U is approximately parallel to the ground GLx, and is tilted counterclockwise relative to the center longitudinal link member21. The wheels12F,12R,12L are tilted relative to the ground GL. In this manner, the roll angle Ar of the vehicle body90can differ from the control angle Ac of the link mechanism30when the ground GLx is inclined. The rear wheel support80has a lock mechanism (not shown) for locking the link mechanism30. The control angle Ac is fixed by actuating the lock mechanism. For example, the control angle Ac is fixed to zero when the vehicle10is parked. In this embodiment, the main body20is coupled to the rear wheel support80via the suspension system70and the connector rod75, as shown inFIG.1(B),FIG.2. The suspension system70has a left suspension70L and a right suspension70R. The suspensions70L,70R each are coupled to the support portion20D of the main body20and to the first support portion82of the rear wheel support80. The suspensions70L,70R have coil springs71L,71R and shock absorbers72L,72R, respectively, and are telescopic. The suspension system70allows relative movement between the rear wheel support80and the main body20. The connector rod75is a rod which extends in the front direction DF as shown inFIG.1(A),FIG.1(B). The connector rod75is located at the center of the vehicle10in its width direction. The end of the connector rod75on the front direction DF side is rotatably coupled to the rear wall portion20cof the main body20(e.g. via a ball-and-socket joint). The end of the connector rod75on the back direction DB side is rotatably coupled to the second support portion83of the rear wheel support80(e.g. via a ball-and-socket joint). FIG.4shows an explanatory diagram illustrating a balance of forces during turning. This figure shows a rear view of the rear wheels12R,12L when the turning direction is the right direction. 
As described later, when the turning direction is the right direction, the controller100(FIG.1(A)) may control the steering motor65and the lean motor25so that the rear wheels12R,12L (and thus the vehicle body90) lean to the right direction DR relative to the ground GL. A gravity center90cis shown inFIG.4. The gravity center90cis a gravity center of the vehicle body90. The gravity center90cof the vehicle body90is a gravity center when the vehicle body90carries an occupant (and possibly a load). A first force F1in the figure is a centrifugal force acting on the vehicle body90. A second force F2is a gravity acting on the vehicle body90. Hereinafter, assume that the force acting on the vehicle body90acts on the gravity center90cof the vehicle body90. Here, the mass of the vehicle body90is M (kg), the acceleration of gravity is g (about 9.8 m/s2), the roll angle of the vehicle10relative to the vertical direction is Ar (degrees), the velocity of the vehicle10(i.e. vehicle velocity) during turning is V (m/s), and the turning radius is R (m). The first force F1and the second force F2are expressed in Equations 1 and 2, respectively: F1=(M*V2)/R(Equation 1) F2=M*g(Equation 2) Where * represents a multiplication sign (the same applies below). In addition, a force F1bin the figure is a component of the first force F1in a direction perpendicular to the vehicle body upward direction DVU. A force F2bis a component of the second force F2in a direction perpendicular to the vehicle body upward direction DVU. The force F1band the force F2bare expressed in Equations 3 and 4, respectively: F1b=F1*cos(Ar) (Equation 3) F2b=F2*sin(Ar) (Equation 4) Where “cos( )” is a cosine function, and “sin( )” is a sine function (the same applies below). The force F1bis a component which causes the vehicle body upward direction DVU to be rotated to the left direction DL side while the force F2bis a component which causes the vehicle body upward direction DVU to be rotated to the right direction DR side. When the vehicle10continues to turn with the roll angle Ar (and furthermore the velocity V and turning radius R) maintained, the relationship between F1band F2bis expressed in the following equation 5: F1b=F2b(Equation 5) By substituting Equations 1-4 as discussed above into Equation 5, the turning radius R is expressed in Equation 6: R=V2/(g*tan(Ar)) (Equation 6) Where “tan( )” is a tangent function (the same applies below). Equation 6 is true independently of the mass M of the vehicle body90. Equation 6a below, which is obtained by substituting “Ar” in Equation 6 with a parameter Ara (in this case, absolute value of roll angle Ar) representing the magnitude of the roll angle Ar without distinction between the right and left directions, is true regardless of the lean direction of the vehicle body90: R=V2/(g*tan(Ara)) (Equation 6a) FIG.5is an explanatory diagram showing a simplified relationship between the wheel angle Aw and the turning radius R. This figure shows the wheels12F,12R,12L viewed in the downward direction DD. For ease of explanation, assume that the roll angle Ar is equal to zero (i.e. The vehicle body upward direction DVU is parallel to the downward direction DD). In the figure, the traveling direction D12of the front wheel12F turns to the right direction DR, and thus the vehicle10turns to the right direction DR. A front center Cf in the figure is the contact center P1(FIG.1(C)) of the front wheel12F. 
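Equation 6a lends itself to a direct numerical check. A minimal sketch in Python, assuming SI units; the velocity and roll angle values are illustrative only:

```python
import math

G = 9.8  # acceleration of gravity (m/s^2), as used in the text


def turning_radius(velocity: float, roll_angle_deg: float) -> float:
    """Equation 6a: R = V^2 / (g * tan(Ara)), with Ara the magnitude of the roll angle."""
    ara = math.radians(abs(roll_angle_deg))
    return velocity ** 2 / (G * math.tan(ara))


# Illustrative values only: 5 m/s with a 10-degree lean gives a radius of roughly 14.5 m.
r = turning_radius(velocity=5.0, roll_angle_deg=10.0)
```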
The front center Cf is located on a line including the rotational axis Axw1of the front wheel12F when the vehicle10is viewed in the downward direction DD. A rear center Cb is a center between the contact centers PbR, PbL (FIG.1(C)) of the two rear wheels12R,12L. The rear center Cb is located at the middle between the rear wheels12R,12L on a line including the rotational axes Axw2, Axw3of the rear wheels12R,12L when the vehicle10standing upright is viewed in the downward direction DD. A center Cr located on the right direction DR side of the vehicle10is a turning center. The turning motion of the vehicle10includes revolution of the vehicle10and rotation of the vehicle10. The center Cr is a center of revolution (sometimes referred to as revolution center Cr). It should be noted that the front wheel12F is a turn wheel instead of the rear wheels12R,12L in this embodiment. Accordingly, the rotation center is approximately the same as the rear center Cb. A wheelbase Lh is the distance between the front center Cf and the rear center Cb in the front direction DF. As shown inFIG.1(A), the wheelbase Lh is the same as the distance between the rotational axis Axw1of the front wheel12F and the rotational axes Axw2, Axw3of the rear wheels12R,12L in the front direction DF. As shown inFIG.5, the front center Cf, rear center Cb, and revolution center Cr form a right angled triangle. The internal angle of the vertex Cb is 90 degrees. The internal angle of the vertex Cr is equal to the wheel angle Aw. Therefore, the relationship between the wheel angle Aw and the turning radius R is expressed in Equation 7: Aw=arctan(Lh/R) (Equation 7) Where “arctan( )” is an inverse function of tangent function (the same applies below). Equation 6, Equation 6a, and Equation 7 described above are true when the vehicle10is turning while the velocity V and the turning radius R remain unchanged. Specifically, Equation 6, Equation 6a, and Equation 7 represent a static state where the force F1b(FIG.4) due to centrifugal force and the force F2bdue to gravity are in equilibrium. Equation 7 can be used as a good approximate equation which represents the relationship between the wheel angle Aw and the turning radius R. It should be noted that there are a variety of differences between the actual behavior of the vehicle10and the simplified behavior inFIG.5. For example, the actual force which acts on the vehicle changes dynamically. By controlling the vehicle10taking the dynamic change in the force into account, the difference can be reduced between the intended movement of the vehicle10by the control and the actual movement of the vehicle10. In this embodiment, the controller100controls the vehicle10taking a roll torque acting on the vehicle body90into account. The roll torque will be described below. FIG.6(A)-FIG.6(C)are explanatory diagrams of roll torque due to yaw angular acceleration of the vehicle10.FIG.6(A),FIG.6(C)are explanatory diagrams of the rear wheels12R,12L, and the gravity center90cwhen viewed in the front direction DF. Here, the vehicle10is located on a horizontal ground GL.FIG.6(A)shows a upright state (Ar=0).FIG.6(C)shows a state where the vehicle body90leans to the right direction DR (Ar>0).FIG.6(B)is an explanatory diagram of the wheels12F,12R,12L, and the gravity center90cwhen viewed in a direction opposite to the vehicle body upward direction DVU. InFIG.6(B), the right direction DR and the left direction DL are shown for reference. 
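As an illustrative aside (not part of the embodiment), the static relationships in Equation 6a and Equation 7 can be evaluated numerically. The velocity, roll angle magnitude, and wheelbase used below are arbitrary example values.

import math

# Illustrative values only (not taken from the embodiment).
V = 8.0         # vehicle velocity V in m/s
Ara_deg = 15.0  # magnitude Ara of the roll angle in degrees
Lh = 1.0        # wheelbase Lh in m
g = 9.8         # acceleration of gravity in m/s^2

# Equation 6a: turning radius from velocity and roll angle magnitude.
R = V**2 / (g * math.tan(math.radians(Ara_deg)))

# Equation 7: wheel angle corresponding to that turning radius.
Aw_deg = math.degrees(math.atan(Lh / R))

print(f"turning radius R = {R:.2f} m")    # about 24.4 m
print(f"wheel angle Aw   = {Aw_deg:.2f} deg")  # about 2.35 deg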
When the roll angle Ar is not equal to zero, these directions DR, DL are not perpendicular to the vehicle body upward direction DVU, but are tilted relative to the direction. A variable Z (FIG.6(A),FIG.6(C)) is a distance between the roll axis AxL and the gravity center90cof the vehicle body90. In this embodiment, the roll axis AxL is located on the ground GL. Accordingly, the distance Z is the same as a distance in the vertically upward direction DU between the ground GL and the gravity center90cin the upright state (FIG.6(A)). In the upright state, when the gravity center90cis projected toward the vertically downward direction DD onto the ground, its projected point PcL is located on the roll axis AxL. A vertical axis Ux is an axis that passes through the projected point PcL and that is parallel to the vertically upward direction DU. A vehicle upward axis VUx is an axis that passes through the projected point PcL and that is parallel to the vehicle body upward direction DVU. The vehicle upward axis VUx passes through the projected point PcL and the gravity center90c. As shown inFIG.6(C), an angle between the vehicle upward axis VUx and the vertical axis Ux is the roll angle Ar. InFIG.6(B), a rotation center Rc is shown. In this embodiment, the front wheel12F is a turn wheel instead of the rear wheels12R,12L. The direction of the moving vehicle10(e.g. the front direction DF) changes toward right or left about the proximity of the rear wheels12R,12L. When the wheels12F,12R,12L does not slip relative to the ground, the rotation center Rc can be located on the center (specifically, the rear center Cb inFIG.5) between the rear wheels12R,12L. When the wheels12F,12R,12L slip relative to the ground, the rotation center Rc can be displaced from the rear center Cb. In any event, the rotation center Rc is located in the proximity of the center between the rear wheels12R,12L. Typically, the gravity center90cof the vehicle body90is located close to the central portion of the vehicle body90in the top view ofFIG.6(B). Accordingly, the gravity center90cof the vehicle body90is located away from the rotation center Rc toward the front direction DF side. A distance X in this figure represents a positional difference (distance) in the front direction DF between the gravity center90cand the rotation center Rc. A variable Ay″ (FIG.6(B)) is a yaw angular acceleration of the vehicle10(the variable Ay represents a yaw angle). In this specification, a single quotation mark [′] attached to a variable indicates a first derivative with regard to time. A double quotation mark [″] indicates a second derivative with regard to time. For example, Ay″ represents a second derivative of a yaw angle with regard to time (i.e. yaw angular acceleration). In this embodiment, the yaw angular acceleration Ay″ is a yaw angular acceleration about an axis parallel to the vehicle body upward direction DVU. The yaw angular acceleration Ay″ represents an angular acceleration of rotation of the vehicle10about the rotation center Rc. Here, an axis perpendicular to the ground is referred to as ground perpendicular axis. The yaw angular acceleration Ay″ represents a component about the axis parallel to the vehicle body upward direction DVU in the yaw angular acceleration about the ground perpendicular axis. In the top view ofFIG.6(B), when the direction of the yaw angular acceleration Ay″ is clockwise, the yaw angular velocity Ay′ changes so that the right turn increases in degree. 
Hereinafter, in the top view, when the direction of the yaw angular acceleration Ay″ is clockwise, the direction of the yaw angular acceleration Ay″ will be referred to as right direction. In the top view, when the direction of the yaw angular acceleration Ay″ is counterclockwise, the direction of the yaw angular acceleration Ay″ will referred to as left direction. The gravity center90cof the vehicle body90is located away from the rotation center Rc by the distance X toward the front direction DF side. Accordingly, the vehicle body90is subject to a component F12of inertial force in a direction opposite to that of the yaw angular acceleration Ay″ (referred to as inertial force component F12). The direction of this inertial force component F12is perpendicular to the vehicle body upward direction DVU. Also, in this embodiment, the direction from the rotation center Rc to the gravity center90cis approximately parallel to the front direction DF in the top view ofFIG.6(B). Accordingly, the direction of the inertial force component F12is approximately perpendicular to the front direction DF. The magnitude of the inertial force component F12is represented by a product of the mass M and an acceleration A90of the gravity center90cdue to the yaw angular acceleration Ay″. The acceleration A90is represented by a product of the distance X and the yaw angular acceleration Ay″. Accordingly, the magnitude of the inertial force component F12is calculated by a formula [M*X*Ay″]. In the top view ofFIG.6(B), the direction of the yaw angular acceleration Ay″, i.e. the direction of change in the yaw angular velocity Ay′, is clockwise. In this case, the direction of the inertial force component F12faces the left direction DL side. InFIG.6(C), the inertial force component12is shown. The inertial force component F12causes the vehicle body90to roll. The magnitude of a roll torque Tq1due to the inertial force component F12is calculated by multiplying the distance Z by the magnitude of the inertial force component F12(Tq1=Z*F12=M*X*Z*Ay″). The direction of the roll torque Tq1(referred to as yaw angular acceleration roll direction) is right direction or left direction, and is opposite to the direction of the yaw angular acceleration Ay″. For example, when the direction of the yaw angular acceleration Ay″ is the direction of right turn, the direction of the roll torque Tq1is the left direction. When the front wheel12F turns, the wheel angle Aw changes. When the wheel angle Aw changes, the yaw angular velocity Ay′ changes, and therefore the magnitude of the yaw angular acceleration Ay″ is larger than zero. Due to non-zero yaw angular acceleration Ay″, the roll torque Tq1acts on the vehicle body90. In this manner, due to the change in wheel angle Aw (the angular velocity Aw′ of the wheel angle Aw), the roll torque is generated (hereinafter sometimes referred to as first type roll torque). The magnitude of the first type roll torque can be determined as follows. First, a relationship between the wheel angle Aw and the yaw angular acceleration Ay″ will be described. As described with reference toFIG.5, the front center Cf, rear center Cb, and revolution center Cr form a right angled triangle. When the roll angle Ar is equal to zero, the vehicle body upward direction DVU is parallel to the vertically downward direction DD. Accordingly, the positions of the points Cf, Cb, Cr shown inFIG.5are the same as positions when the points Cf, Cb, Cr are viewed in a direction parallel to the vehicle body upward direction DVU. 
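For a numeric illustration of the roll torque Tq1=Z*F12=M*X*Z*Ay″ discussed above, the following short sketch may help; the values are arbitrary and are not taken from the embodiment.

# Illustrative values only; the embodiment does not specify these numbers.
M = 300.0    # mass M of the vehicle body in kg
X = 0.8      # distance X between rotation center Rc and gravity center in m
Z = 0.5      # distance Z between roll axis AxL and gravity center in m
Ay_dd = 0.6  # yaw angular acceleration Ay'' in rad/s^2

F12 = M * X * Ay_dd   # magnitude of the inertial force component F12 (N)
Tq1 = Z * F12         # magnitude of the roll torque Tq1 (N*m), i.e. M*X*Z*Ay''

print(F12, Tq1)  # 144.0 N and 72.0 N*m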
The traveling direction D12of the front wheel12F is assumed to be mapped to the wheel angle Aw regardless of the roll angle Ar. Accordingly, when the points Cf, Cb, Cr are viewed in the direction parallel to the vehicle body upward direction DVU, the front center Cf, rear center Cb, and revolution center Cr form a right angled triangle regardless of the roll angle Ar. Among three sides of this right angled triangle, the length of the side which connects the rotation center Cr and the rear center Cb is denoted as Rx. In this case, Equation A1 is true. tan(Aw)=Lh/Rx (Equation A1) Equation A1 is transformed to Equation A2. 1/Rx=tan(Aw)/Lh (Equation A2) When the vehicle10is turning with the yaw angular velocity Ay′, Equation A3 is true. V=Rx*Ay′ (Equation A3) Equation A3 is transformed to Equation A4. Ay′=V/Rx (Equation A4) By substituting Equation A2 into Equation A4, Equation A5 is derived. Ay′=(V*tan(Aw))/Lh (Equation A5) By differentiating both sides of Equation A5 with respect to time, Equation A6 is derived. Ay″=(V/Lh)*(1/cos²(Aw))*Aw′ (Equation A6) As described with reference toFIG.6(B),FIG.6(C), due to the yaw angular acceleration Ay″, the roll torque acts on the vehicle body90. The first type roll torque is a roll torque due to the yaw angular acceleration Ay″ in Equation A6. The magnitude of the first type roll torque Tqa is derived by substituting Equation A6 into the yaw angular acceleration Ay″ of the formula for the magnitude of roll torque Tq1shown inFIG.6(C), and is expressed by Equation A7. Tqa=M*X*Z*Ay″=(M*X*Z*V*Aw′)/(Lh*cos²(Aw)) (Equation A7) As described above, the angular velocity Aw′ of the wheel angle Aw can be used to apply the first type roll torque Tqa on the vehicle body90. The direction of the first type roll torque Tqa (sometimes referred to as turn roll direction) is opposite to that of the angular velocity Aw′ of the wheel angle Aw. For example, when the wheel angle Aw turns in the right direction DR (Aw′>0), the direction of the first type roll torque Tqa is the left direction. In addition, Equation A8 is derived from Equation A7. Aw′=(Tqa*Lh*cos²(Aw))/(M*X*Z*V) (Equation A8) Equation A8 represents the magnitude of the angular velocity Aw′ of the wheel angle Aw required to generate the first type roll torque Tqa. It should be noted that the steering motor65can change the wheel angle Aw and thus its angular velocity Aw′ by generating the turning torque. As indicated in Equation A6, the angular velocity Aw′ of the wheel angle Aw changes the yaw angular acceleration Ay″ of the vehicle10. In this manner, the turning torque is an example force which changes the yaw angular acceleration Ay″. The steering motor65is an example force generator configured to generate a force which changes the yaw angular acceleration Ay″ (sometimes referred to as force generator65). FIG.7is a block diagram showing the configuration relating to control of the vehicle10. The vehicle10has a vehicle velocity sensor122, an input angle sensor123, a wheel angle sensor124, a direction sensor126, an accelerator pedal sensor145, a brake pedal sensor146, a controller100, a right drive motor51R, a left drive motor51L, a lean motor25, and a steering motor65. The vehicle velocity sensor122is a sensor for detecting a vehicle velocity of the vehicle10. In this embodiment, the vehicle velocity sensor122is attached on the lower end of the front fork17(FIG.1(A)) to detect a rotational rate of the front wheel12F.
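As an illustrative cross-check before continuing with the sensor configuration (not part of the embodiment), Equations A6 through A8 can be evaluated for arbitrary example values. The sketch below simply confirms that the angular velocity Aw′ obtained from Equation A8 reproduces the requested roll torque through Equation A6 and the relationship Tq1=M*X*Z*Ay″.

import math

# Arbitrary illustrative values (not specified by the embodiment).
M, X, Z = 300.0, 0.8, 0.5   # mass (kg), distances X and Z (m)
Lh = 1.0                    # wheelbase in m
V = 8.0                     # velocity in m/s
Aw = math.radians(5.0)      # current wheel angle Aw
Tqa_target = 50.0           # desired first type roll torque Tqa in N*m

# Equation A8: wheel-angle angular velocity needed for the torque Tqa.
Aw_dot = (Tqa_target * Lh * math.cos(Aw) ** 2) / (M * X * Z * V)

# Equation A6: resulting yaw angular acceleration Ay''.
Ay_dd = (V / Lh) * (1.0 / math.cos(Aw) ** 2) * Aw_dot

# Roll torque produced by that yaw angular acceleration (FIG.6(C)).
Tqa_check = M * X * Z * Ay_dd

print(Aw_dot, Tqa_check)  # Tqa_check equals the requested 50.0 N*m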
The rotational rate is correlated with the velocity (sometimes referred to as vehicle velocity) of the vehicle10. Accordingly, the sensor122for detecting the rotational rate can be considered to detect the vehicle velocity. The input angle sensor123is a sensor for detecting an orientation of the steering wheel41a(i.e. input angle). In this embodiment, the input angle sensor123is attached to the steering wheel41a(FIG.1(A)). The input angle sensor123is an example turn input information acquisition device configured to acquire an input angle AI (an example turn input information). The wheel angle sensor124is a sensor for detecting a wheel angle of the front wheel12F. In this embodiment, the wheel angle sensor124is attached to the front wall portion20aof the main body20(FIG.1(A)). The wheel angle sensor124detects the wheel angle about the turning axis Ax1(sometimes referred to as detected angle Awx). The turning axis Ax1rolls with the vehicle body90. In addition, a direction parallel to the turning axis Ax1(sometimes referred to as direction of turning axis Ax1) can differ from the vehicle body upward direction DVU. In this case, the wheel angle Aw about an axis parallel to the vehicle body upward direction DVU is calculated by correcting the detected angle Awx using a difference between the direction of the turning axis Ax1and the vehicle body upward direction DVU. For example, if the caster angle CA relative to the vehicle body upward direction DVU is not equal to zero, the wheel angle Aw may be calculated according to an approximate equation ‘Aw=cos(CA)*Awx.’ The same is true if a camber angle relative to the vehicle body upward direction DVU is not equal to zero. The direction sensor126determines the roll angle Ar and the yaw angular velocity. In this embodiment, the direction sensor126is secured to the vehicle body90(FIG.1(A)(specifically, to the rear wall portion20c). In this embodiment, the direction sensor126also includes an acceleration sensor126a, a gyroscope sensor126g, and a control unit126c. The acceleration sensor is a sensor that detects acceleration in any direction, for example, triaxial acceleration sensor. Hereinafter, a direction of acceleration detected by the acceleration sensor126awill be referred to as detected direction. With the vehicle10stopped, the detected direction is the same as the vertically downward direction DD. The gyroscope sensor126gis a sensor that detects angular velocity about a rotational axis in any direction, for example, triaxial angular velocity sensor. The control unit126cuses a signal from the acceleration sensor126a, a signal from the gyroscope sensor126g, and a signal from vehicle velocity sensor122to determine the roll angle Ar and the yaw angular velocity. For example, the control unit126cis a data processor including a computer. The control unit126cuses the velocity V detected by the vehicle velocity sensor122to calculate the acceleration of the vehicle10. Then, the control unit126cuses the acceleration to determine the deviation of the detected direction from the actual vertically downward direction DD due to the acceleration of the vehicle10(e.g. the deviation of the detected direction toward the front direction DF or back direction DB is determined). In addition, the control unit126cuses the angular velocity detected by the gyroscope sensor126gto determine the deviation of the detected direction from the actual vertically downward direction DD due to the angular velocity of the vehicle10(e.g. 
the deviation of the detected direction toward the right direction DR or left direction DL is determined). The control unit126cuses the determined deviations to modify the detected direction, and thereby determines the vertically downward direction DD. In this manner, the direction sensor126can determine the vertically downward direction DD properly under a variety of driving conditions of the vehicle10. The control unit126cthen determines the vertically upward direction DU opposite to the vertically downward direction DD, and calculates the roll angle Ar between the vertically upward direction DU and the predetermined vehicle body upward direction DVU. In addition, the control unit126cdetermines a component of angular velocity about the axis parallel to the vehicle body upward direction DVU from the angular velocity determined by the gyroscope sensor126gto identify the determined angular velocity as the yaw angular velocity. The accelerator pedal sensor145is attached to the accelerator pedal45(FIG.1(A)) in order to detect an accelerator operation amount. The brake pedal sensor146is attached to the brake pedal46(FIG.1(A)) in order to detect a brake operation amount. Each sensor122,123,124,145,146is configured using a resolver or encoder, for example. The controller100has a main control unit110, a drive device control unit300, a lean motor control unit400, and a steering motor control unit500. The controller100operates with electric power from the battery120(FIG.1(A)). In this embodiment, the control units110,300,400,500each has a computer. More specifically, the control units110,300,400,500have processors110p,300p,400p,500p(e.g. CPU), volatile memories110v,300v,400v,500v(e.g. DRAM), and non-volatile memories110n,300n,400n,500n(e.g. flash memory), respectively. The non-volatile memories110n,300n,400n,500nstore in advance programs110g,300g,400g,500gfor operating the corresponding control units110,300,400,500, respectively. In addition, the non-volatile memory110nof the main control unit110stores in advance map data MAr, MCw. The processors110p,300p,400p,500pperform a variety of processes by executing the corresponding programs110g,300g,400g,500g, respectively. The processor110pof the main control unit110receives signals from the sensors122,123,124,126,145,146. The processor110pthen uses the received signals to output instructions to the drive device control unit300, the lean motor control unit400, and the steering motor control unit500. The processor300pof the drive device control unit300controls the drive motors51L,51R according to the instruction from the main control unit110. The processor400pof the lean motor control unit400controls the lean motor25according to the instruction from the main control unit110. The processor500pof the steering motor control unit500controls the steering motor65according to the instruction from the main control unit110. These control units300,400,500respectively have electric power control modules300c,400c,500cwhich supply the motors51L,51R,25,65under control with electric power from the battery120. The electric power control modules300c,400c,500care configured using an electric circuit (e.g. inverter circuit). It should be noted that a portion of the main control unit110which performs processing for controlling the steering motor65, and the steering motor control unit500as a whole is an example force controller configured to control the force generator65(sometimes referred to as force controller910). A2. 
Control of Steering Motor: FIG.8is a flowchart showing an example control process of the steering motor65. In this embodiment, the steering motor65is controlled so that a change in the wheel angle Aw results in a roll torque which makes the roll angle Ar close to a target roll angle. In flowcharts, each step is labeled with a reference of an alphabet “S” followed by a numeral.FIG.8illustrates the process when the vehicle10is moving forward. As described later, a variety of parameters are used in the control process. It should be noted that the mass M of the vehicle body90, the acceleration of gravity g, the distance X, the distance Z, and the wheelbase Lh can each be measured experimentally. In this embodiment, predetermined values (sometimes referred to as reference values M, g, X, Z, Lh) are used as the respective parameters M, g, X, Z, Lh. It should be noted that the mass M of the vehicle body90corresponds to a so-called sprung mass. In S210, the processor110pof the main control unit110(FIG.7) acquires data from the sensors122,123,124,126,145,146. The processor110pthen determines current information, in particular, the velocity V, input angle AI, wheel angle Aw, roll angle Ar, yaw angular velocity Ay′, accelerator operation amount Pa, brake operation amount Pb. In S220, the processor110puses the input angle AI to determine a target roll angle Art. A correspondence relationship between the input angle AI and the target roll angle Art is predetermined by map data MAr (FIG.7). The processor110preferences the map data MAr to identify the target roll angle Art. In this embodiment, the larger the absolute value of the input angle AI is, the larger the absolute value of the target roll angle Art is. The direction (right or left) of the target roll angle Art is the same as the turning direction determined by the input angle AI. In S230, the processor110pcalculates a roll angle difference dAr by subtracting the current roll angle Ar from the target roll angle Art. In S240, the processor110pdetermines a control parameter. In this embodiment, the processor110pdetermines a P gain Gp1for proportional control (sometimes referred to as first gain Gp1). It should be noted that the processor110pperforms S220-S230and S240in parallel. In S250, the processor110pthen determines an intermediate control value Ctq through the proportional control using the roll angle difference dAr and the first gain Gp1(e.g. Ctq=Gp1*dAr). As described later, the intermediate control value Ctq indicates a reference roll torque. The zero intermediate control value Ctq indicates a roll torque of zero. The positive intermediate control value Ctq indicates a roll torque in the right direction DR. The negative intermediate control value Ctq indicates a roll torque in the left direction DL. The larger the absolute value of the intermediate control value Ctq is, the larger the absolute value of the roll torque is. The steering motor65is controlled so that the first type roll torque generated due to the angular velocity Aw′ of the wheel angle Aw is made close to the reference roll torque. It should be noted that the larger the magnitude of the roll angle difference dAr is, the larger the magnitude of the intermediate control value Ctq (i.e. the reference roll torque) is. In addition, the larger the first gain Gp1is, the larger the magnitude of the reference roll torque is. FIG.9(A)-FIG.9(C)are graphs showing examples of the first gain Gp1. 
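As an illustrative aside before these graphs are described, S220 through S250 can be summarized in a short sketch. It is not part of the embodiment; the map entries, the use of linear interpolation, and the gain value are assumptions, since the embodiment specifies only that the map data MAr relates the input angle AI to the target roll angle Art.

import bisect

# Hypothetical map data MAr: input angle AI (deg) -> target roll angle Art (deg).
MAP_MAR = [(-90.0, -20.0), (-30.0, -10.0), (0.0, 0.0), (30.0, 10.0), (90.0, 20.0)]

def lookup_target_roll(ai_deg):
    """S220: reference the map data MAr (linear interpolation assumed)."""
    xs = [x for x, _ in MAP_MAR]
    ys = [y for _, y in MAP_MAR]
    ai = max(min(ai_deg, xs[-1]), xs[0])
    i = max(1, bisect.bisect_left(xs, ai))
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (ai - x0) / (x1 - x0)

def intermediate_control_value(ai_deg, ar_deg, gp1=2.0):
    """S230 and S250: roll angle difference dAr and proportional control."""
    art = lookup_target_roll(ai_deg)   # S220: target roll angle Art
    d_ar = art - ar_deg                # S230: dAr = Art - Ar
    return gp1 * d_ar                  # S250: Ctq = Gp1 * dAr

print(intermediate_control_value(ai_deg=45.0, ar_deg=5.0))  # 15.0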
InFIG.9(A), the horizontal axis represents the velocity V, and the vertical axis represents the first gain Gp1. When the velocity V is within a first range VR1between zero and a first threshold V1, inclusive, the smaller the velocity V is, the smaller the first gain Gp1is (e.g. the first threshold V1is a value within a range between 1 km/hour and 5 km/hour, inclusive). And, if V=0, Gp=0 (i.e. Ctq=0). The reason is as follows. As described later, in this embodiment, the steering motor65is controlled so that the angular velocity Aw′ of the wheel angle Aw is made close to the value calculated according to Equation A8 described above. As indicated in Equation A8, the absolute value of the angular velocity Aw′ of the wheel angle Aw is inversely proportional to the velocity V. If the angular velocity Aw′ strictly follows Equation A8, the absolute value of the angular velocity Aw′ diverges as the velocity V approaches zero. In this embodiment, in order to prevent the parameter from diverging when the velocity V is smaller, the first gain Gp1is smaller when the velocity V is smaller. This results in the smaller intermediate control value Ctq (i.e. reference roll torque), and therefore the divergence of the angular velocity Aw′ is suppressed. When the velocity V is within a second range VR2of not smaller than a second threshold V2, the larger the velocity V is, the smaller the first gain Gp1is (e.g. the second threshold V2is a value within a range between 30 km/hour and 40 km/hour, inclusive). The reason is as follows. When a rotating object is subject to an external torque about an axis perpendicular to a rotational axis, a torque about an axis perpendicular to the rotational axis and to an axis of the external torque acts on the object (sometimes referred to as gyroscopic moment). The object then rotates due to the gyroscopic moment. Such a movement is also referred to as precession movement. For example, when the vehicle body90leans to the right direction DR while the vehicle10(FIG.1(A)) is traveling forward, the front wheel12F rotating about the rotational axis Axw1also leans to the right direction DR along with the vehicle body90. In this manner, the front wheel12F is subject to a torque about an axis perpendicular to the rotational axis Axw1and parallel to the front direction DF. In this case, the front wheel12F (FIG.1(B)) is subject to a torque that turns the traveling direction D12about the turning axis Ax1to the right direction DR. The front wheel12F turns to the right direction DR. The torque which turns the front wheel12F increases with an increase in the angular momentum of the front wheel12F, i.e. an increase in the velocity V. In this manner, when the velocity V is larger, the front wheel12F can spontaneously turn to the lean direction of the vehicle body90. In this embodiment, when the velocity V is larger, the first gain Gp1is smaller in order to allow for the spontaneous turn of the front wheel12F. As described later, when the first gain Gp1is smaller, the magnitude of the intermediate control value Ctq (i.e. reference roll torque) is smaller, and therefore the magnitude of the turning torque of the steering motor65is also smaller. Thereby, the spontaneous turn of the front wheel12F is allowed. 
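A minimal sketch of this velocity dependence of the first gain Gp1follows. It is illustrative only: the embodiment specifies the qualitative trend ofFIG.9(A), while the thresholds, gain levels, and the plateau between them used below are assumptions.

def first_gain_gp1(v_kmh, v1=3.0, v2=35.0, gp_max=2.0, gp_high=0.5):
    """Illustrative Gp1(V) following the trend of FIG.9(A):
    - within the first range VR1 the gain shrinks toward zero (Gp1 = 0 at V = 0),
    - between V1 and V2 the gain is held at its maximum (assumed plateau),
    - within the second range VR2 the gain decreases to allow the spontaneous
      turn of the front wheel due to the gyroscopic moment.
    """
    if v_kmh <= 0.0:
        return 0.0
    if v_kmh < v1:                       # first range VR1
        return gp_max * v_kmh / v1
    if v_kmh < v2:                       # assumed plateau
        return gp_max
    return max(gp_high, gp_max * v2 / v_kmh)  # second range VR2

for v in (0.0, 1.5, 20.0, 60.0):
    print(v, round(first_gain_gp1(v), 3))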
It should be noted that when the velocity V is constant, the first gain Gp1can change depending on the angular velocity AI′ and angular acceleration AI″ of the input angle AI.FIG.9(B)is a graph when the velocity V is constant, where the horizontal axis represents the absolute value of the angular velocity AI′ of the input angle AI, and the vertical axis represents the first gain Gp1. As shown, the larger the absolute value of the angular velocity AI′ is, the larger the first gain Gp1is.FIG.9(C)is a graph when the velocity V is constant, where the horizontal axis represents the absolute value of the angular acceleration AI″ of the input angle AI, and the vertical axis represents the first gain Gp1. As shown, the larger the absolute value of the angular acceleration AI″ is, the larger the first gain Gp1is. The reason is as follows. The driver turns the steering wheel41aquickly in order to change quickly the traveling direction of the vehicle10. Accordingly, the roll angle Ar is required to change quickly when the absolute value of the angular velocity AI′ is larger or when the absolute value of the angular acceleration AI″ is larger. Therefore, in this embodiment, in order to make the absolute value of the intermediate control value Ctq (i.e. reference roll torque) larger, the larger the absolute value of the angular velocity AI′ is, the larger the first gain Gp1is, and the larger the absolute value of the angular acceleration AI″ is, the larger the first gain Gp1is. It should be noted that in order to suppress any excess increase in the intermediate control value Ctq, the processor110psets a second upper limit Lm2to the first gain Gp1. It should be noted that the correspondence between the first gain Gp1and the other parameters may be any of a variety of other correspondences instead of the correspondences shown inFIG.9(A)-FIG.9(C). For example, in the second range VR2(FIG.9(A)), when the velocity V increases, the first gain Gp1may remain without any reduction, or may increase. In addition, the range of the velocity V of not smaller than the first threshold V1may be divided into three ranges of low velocity range, medium velocity range, and high velocity range. And, the first gain Gp1of the low velocity range and the first gain Gp1of the high velocity range may be set to a larger value as compared to the first gain Gp1of the medium velocity range. The larger first gain Gp1of the low velocity range can assist in the front wheel12F turning to the turning direction when the gyroscopic moment is smaller. On the other hand, when the velocity V is larger, the rotational rate of the front wheel12F is larger, and thus the angular momentum of the front wheel12F is also larger. In this case, a larger torque may be required to turn the front wheel12F to the turning direction. The larger first gain Gp1of the high velocity range can assist in the front wheel12F turning to the turning direction. FIG.9(D)is a graph showing an example relationship between the roll angle difference dAr and the intermediate control value Ctq. The horizontal axis represents the absolute value of the roll angle difference dAr, and the vertical axis represents the absolute value of the intermediate control value Ctq. This graph shows the case where the velocity V is constant. As shown, the larger the absolute value of the roll angle difference dAr is, the larger the absolute value of the intermediate control value Ctq is. 
When the absolute value of the roll angle difference dAr is constant, the larger the absolute value of the angular velocity AI′ of the input angle AI is, the larger the absolute value of the intermediate control value Ctq is. In addition, the larger the absolute value of the angular acceleration AI″ of the input angle AI is, the larger the absolute value of the intermediate control value Ctq is. In S260(FIG.8), the processor110puses the intermediate control value Ctq to determine the angular velocity of the wheel angle Aw (sometimes referred to as additional angular velocity Awd′). The additional angular velocity Awd′ represents an angular velocity such that the additional angular velocity Awd′ is added to the current angular velocity Aw′ of the wheel angle Aw to generate the reference roll torque mapped to the intermediate control value Ctq. Such a relationship between the additional angular velocity Awd′ and the intermediate control value Ctq is expressed by Equation A8 described above. In Equation A8, the intermediate control value Ctq is used instead of the first type roll torque Tqa, and the angular velocity Aw′ represents the additional angular velocity Awd′. The processor110puses the reference values Lh, M, X, Z, the intermediate control value Ctq, the wheel angle Aw, and the velocity V to calculate the additional angular velocity Awd′. FIG.9(E)-FIG.9(G)are graphs showing examples of the additional angular velocity Awd′. InFIG.9(E), the horizontal axis represents the absolute value of the intermediate control value Ctq, and the vertical axis represents the absolute value of the additional angular velocity Awd′. As shown, the larger the absolute value of Ctq is, the larger the absolute value of Awd′ is. When the intermediate control value Ctq is constant, the additional angular velocity Awd′ can change depending on the velocity V and the wheel angle Aw. It should be noted that in this embodiment, the processor110psets a first upper limit Lm1to the absolute value of Awd′. InFIG.9(F), the horizontal axis represents the velocity V, and the vertical axis represents the absolute value of the additional angular velocity Awd′. The larger the velocity V is, the smaller the absolute value of Awd′ is. In this embodiment, the absolute value of Awd′ is inversely proportional to V, as also indicated in Equation A8 described above. In order to prevent the additional angular velocity Awd′ from diverging when the velocity V is smaller, the absolute value of Awd′ is limited to the first upper limit Lm1. InFIG.9(G), the horizontal axis represents the absolute value of the wheel angle Aw, and the vertical axis represents the absolute value of the additional angular velocity Awd′. The larger the absolute value of the wheel angle Aw is, the smaller the absolute value of Awd′ is. In this embodiment, as the absolute value of Aw increases, the absolute value of Awd′ decreases according to cos²(Aw), as also indicated in Equation A8 described above. In S270(FIG.8), the processor110puses the additional angular velocity Awd′ to determine an actuation control value Cw (sometimes simply referred to as control value Cw). The control value Cw indicates a turning torque to be generated by the steering motor65. In this embodiment, the control value Cw indicates direction and magnitude of electric current to be supplied to the steering motor65. The absolute value of the control value Cw indicates the magnitude of the electric current (i.e. the magnitude of the turning torque).
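As an illustrative aside, S260 described above can be sketched as follows. Only the use of Equation A8 and the clamp to the first upper limit Lm1 follow the embodiment; the reference values and the limit value below are arbitrary.

import math

def additional_angular_velocity(ctq, aw_rad, v, lh=1.0, m=300.0, x=0.8, z=0.5,
                                lm1=math.radians(30.0)):
    """S260: Equation A8 with Ctq used as the first type roll torque Tqa.
    The result is clamped to the first upper limit Lm1 so that Awd' does not
    diverge when the velocity V is small."""
    v = max(v, 1e-3)  # guard against division by zero (assumption)
    awd = (ctq * lh * math.cos(aw_rad) ** 2) / (m * x * z * v)
    return max(-lm1, min(lm1, awd))

print(additional_angular_velocity(ctq=15.0, aw_rad=math.radians(5.0), v=8.0))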
The positive/negative signs of the control value Cw indicates the direction of the electric current (i.e. the direction of the turning torque) (e.g. the positive sign indicates the right direction while the negative sign indicates the left direction). A correspondence relationship between the additional angular velocity Awd′ and the control value Cw is predetermined by map data MCw (FIG.7). The larger the absolute value of the additional angular velocity Awd′ is, the larger the absolute value of the control value Cw is. In addition, the positive/negative sign of the control value Cw (i.e. the direction of the turning torque) is the same as the direction of the additional angular velocity Awd′. The processor110preferences the map data MCw to identify the actuation control value Cw mapped to the additional angular velocity Awd′. In S280, the processor110pprovides data indicative of the actuation control value Cw to the steering motor control unit500. The processor500pof the steering motor control unit500controls the electric power to be supplied to the steering motor65according to the actuation control value Cw. Specifically, the processor500pprovides the data indicative of the actuation control value Cw to the electric power control module500c. The electric power control module500ccontrols the electric power to be supplied to the steering motor65according to the actuation control value Cw. The steering motor65outputs the turning torque according to the supplied electric power. Then, the process ofFIG.8ends. The controller100repeatedly performs the process ofFIG.8. As such, the controller100continuously controls the steering motor65to output the turning torque appropriate for the state of the vehicle10. As discussed above, the actuation control value Cw indicates the turning torque mapped to the additional angular velocity Awd′ (S270). A parameter Tqa mapped to the additional angular velocity Awd′ according to Equation A8 indicates the first type roll torque Tqa to be generated due to the additional angular velocity Awd′. In S260, the intermediate control value Ctq is used as the parameter Tqa indicative of the first type roll torque to calculate the additional angular velocity Awd′ according to Equation A8. Therefore, the intermediate control value Ctq indicates the first type roll torque. In S250, the intermediate control value Ctq is determined through the proportional control using the roll angle difference dAr and the control parameter (in this case, P gain Gp1). In this embodiment, the magnitude of the intermediate control value Ctq (i.e. the magnitude of the first type roll torque) is increased with an increase in the magnitude of the roll angle difference dAr. In addition, the positive/negative sign of the intermediate control value Ctq (i.e. the direction of the first type roll torque) is the same as the positive/negative sign of the roll angle difference dAr (i.e. roll direction from the roll angle Ar to the target roll angle Art) (hereinafter, the roll direction from the roll angle Ar to the target roll angle Art may be referred to as ‘direction of roll angle difference dAr’). In this manner the roll angle difference dAr indicates the reference roll torque which is a reference of the first type roll torque to be generated due to the additional angular velocity Awd′. The magnitude of the roll angle difference dAr indicates a reference magnitude which is the magnitude of the reference roll torque. 
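For reference, the mapping of S270 from the additional angular velocity Awd′ to the actuation control value Cw by the map data MCw can be sketched as follows. This is illustrative only; the table entries and the use of linear interpolation are assumptions, since the embodiment specifies only the monotonic relationship and the sign convention.

# Hypothetical map data MCw: |Awd'| (rad/s) -> |Cw| (current command, A).
MAP_MCW = [(0.0, 0.0), (0.1, 2.0), (0.3, 5.0), (0.6, 8.0)]

def actuation_control_value(awd):
    """S270: the larger |Awd'| is, the larger |Cw| is; the sign of Cw follows
    the direction of Awd'. Linear interpolation between entries is assumed."""
    mag = abs(awd)
    xs = [x for x, _ in MAP_MCW]
    ys = [y for _, y in MAP_MCW]
    if mag >= xs[-1]:
        cw = ys[-1]
    else:
        i = next(k for k in range(1, len(xs)) if mag <= xs[k])
        cw = ys[i - 1] + (ys[i] - ys[i - 1]) * (mag - xs[i - 1]) / (xs[i] - xs[i - 1])
    return cw if awd >= 0.0 else -cw

print(actuation_control_value(0.2), actuation_control_value(-0.2))  # 3.5 -3.5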
The positive/negative sign of the roll angle difference dAr indicates a reference direction which is the direction of the reference roll torque. The roll angle difference dAr is an example of reference information which indicates the reference direction as a reference of direction and the reference magnitude as a reference of magnitude for the first type roll torque to act on the vehicle body90(hereinafter, the roll angle difference dAr may be referred to as reference information dAr). The controller100controls the steering motor65according to the actuation control value Cw to be determined using the reference information dAr. As such, the steering motor65generates the turning torque so that the direction of the first type roll torque is the same as the reference direction, and the magnitude of the first type roll torque increases with an increase in the reference magnitude. If the steering motor65is controlled according to the actuation control value Cw, the roll angle Ar approaches the target roll angle Art, and therefore the vehicle10can travel at the roll angle Ar appropriate for the input angle AI (i.e. with the target roll angle Art). FIG.10(A)-FIG.10(C)are graphs showing examples of the turning torque Tqw controlled in the process ofFIG.8. InFIG.10(A), the horizontal axis represents the absolute value of the actuation control value Cw, and the vertical axis represents the absolute value of the turning torque Tqw. The absolute value of the turning torque Tqw increases with an increase in the absolute value of the actuation control value Cw. It should be noted that in this embodiment, the processor110pmodifies the absolute value of the actuation control value Cw to the upper limit CwM in S280ofFIG.8if the absolute value of the actuation control value Cw is equal to or larger than a predetermined upper limit CwM. Accordingly, the absolute value of the turning torque Tqw is limited to an upper limit Lm3mapped to the upper limit CwM. As a result, the wheel angle Aw is suppressed from changing rapidly. InFIG.10(B), the horizontal axis represents the roll angle difference dAr, and the vertical axis represents the turning torque Tqw. At the origin O, dAr=0, and Tqw=0. In this figure, assume that the velocity V, input angle AI, wheel angle Aw, and yaw angular velocity Ay′ each are constant. Such a condition can be reproduced by placing the vehicle10on a turntable which can rotate the vehicle10about an axis parallel to the vertically upward direction DU. An angular velocity of rotation of the turntable represents a yaw angular velocity about the axis parallel to the vertically upward direction DU. The yaw angular velocity Ay′ about the axis parallel to the vehicle body upward direction DVU can be determined using data from the direction sensor126. The magnitude of the yaw angular velocity Ay′ increases with an increase in the angular velocity of rotation of the turntable. The turntable has a plurality of rollers which rotates the respective wheels12F,12R,12L at the rotational rate according to the velocity V. In order to maintain the wheel angle Aw constant, the front fork17is fixed to the vehicle body90. The turning torque Tqw can be determined using the electric current to be supplied to the steering motor65. The absolute value of the intermediate control value Ctq to be determined in S250(FIG.8) increases with an increase in the absolute value of the roll angle difference dAr. 
Accordingly, the larger the absolute value of the roll angle difference dAr is, the larger the absolute value of the turning torque Tqw is also (however, the absolute value of the turning torque Tqw is limited to the upper limit Lm3). In addition, the roll angle difference dAr being a positive value indicates that a reference roll direction from the roll angle Ar to the target roll angle Art is rightward. As can be understood fromFIG.6(B),FIG.6(C), when turning the direction D12of the front wheel12F to the right direction DR, the direction of the first type roll torque Tqa is the left direction DL. Accordingly, in order to generate the first type roll torque Tqa of the right direction DR, a negative turning torque Tqw is generated that turns the direction D12of the front wheel12F to the left direction DL. In contrast, when the roll angle difference dAr is a negative value, a positive turning torque Tqw is generated. InFIG.10(C), the horizontal axis represents the absolute value of the wheel angle Aw, and the vertical axis represents the absolute value of the turning torque Tqw. This graph illustrates characteristics under the condition (referred to as first condition) that each of the velocity V and the reference information dAr (i.e. the reference direction and reference magnitude) is maintained constant (the absolute value of dAr is larger than zero). In order to identify the relationship between the wheel angle Aw and the turning torque Tqw, assume that the other parameters (e.g. AI, Ar, Ay′) are constant. The wheel angle Aw is variable. In order to realize such a condition, the vehicle10is placed on the above-mentioned turntable. The roller for supporting the front wheel12F is configured to respond to any turn of the front wheel12F to turn to the same direction. When the steering motor65generates the turning torque, the roller for supporting the front wheel12F turns along with the front wheel12F to the direction of the turning torque. As shown, even if the reference information dAr is constant, the absolute value of the turning torque Tqw decreases as the absolute value of the wheel angle Aw increases. This reason is that the angular velocity Aw′ of the wheel angle Aw (i.e. the turning torque Tqw) decreases according to cos2(Aw) as indicated in Equation A8 described above. In this manner, because the turning torque Tqw is controlled according to Equation A8, the controller100can make the first type roll torque due to the angular velocity Aw′ close to the reference torque. As discussed above, the controller100performs the process ofFIG.8to control the steering motor65so that the roll angle Ar approaches the target roll angle Art. As a result, the vehicle10can travel at the roll angle Ar appropriate for the input angle AI. For example, when the magnitude of the roll angle difference dAr is larger, and the roll direction from the current roll angle Ar to the target roll angle Art is rightward (i.e. the direction of the roll angle difference dAr is rightward), the steering motor65turns the front wheel12F to the left direction, which is opposite to the direction of the roll angle difference dAr. As such, the roll angle Ar quickly approaches the target roll angle Art. The steering motor65then outputs the turning torque through the similar control so that the roll angle Ar is maintained at the roll angle difference dAr. As such, the wheel angle Aw can approach an angle appropriate for the roll angle Ar (FIG.4,FIG.5). 
When the magnitude of the roll angle difference dAr is smaller, the magnitude of the turning torque is also smaller. As discussed above, due to the gyroscopic moment, the front wheel12F can spontaneously turn to the roll direction of the vehicle body90. Accordingly, the vehicle10can make a turn appropriate for the input angle AI. For example, the vehicle10can make the turn as shown inFIG.4,FIG.5. In addition, as described with reference toFIG.6(B),FIG.6(C), the first type roll torque Tqa obtained using the angular velocity Aw′ of the wheel angle Aw is generated using the inertial force F12in the direction opposite to that of the yaw angular acceleration Ay″. Accordingly, as compared to when the vehicle body90rolls due to the roll torque generated directly by the lean motor25, the lateral acceleration which the driver feels is suppressed when the vehicle body90rolls due to the first type roll torque Tqa. A3. Control of Lean Motor: FIG.11is a flowchart showing an example control process of the lean motor25. In this embodiment, the lean motor25is controlled to generate a roll torque which makes the roll angle Ar close to a target roll angle. In S510, the processor110pof the main control unit110(FIG.7) acquires signals from the sensors123,126. The processor110pthen determines current information, in particular, the input angle AI, roll angle Ar. S520, S530are the same as S220, S230inFIG.8, respectively. In S540, the processor110puses the roll angle difference dAr to determine a control value CwL. In this embodiment, the processor110pthen determines the control value CwL through a proportional control using the roll angle difference dAr. In S550, the processor110pprovides data indicative of the control value CwL to the lean motor control unit400. The processor400pof the lean motor control unit400controls the electric power to be supplied to the lean motor25according to the control value CwL. Specifically, the processor400pprovides the data indicative of the control value CwL to the electric power control module400c. The electric power control module400ccontrols the electric power to be supplied to the lean motor25according to the control value CwL. The lean motor25outputs the roll torque according to the supplied electric power. Then, the process ofFIG.11ends. The controller100repeatedly performs the process ofFIG.11. As such, the controller100continuously controls the lean motor25to output the roll torque appropriate for the state of the vehicle10. As discussed above, the controller100controls each of the lean motor25and the steering motor65to generate the roll torque which makes the roll angle Ar close to the target roll angle Art. As a result, the vehicle10can travel at the roll angle Ar appropriate for the input angle AI. Then, the vehicle10can make a turn appropriate for the input angle AI. It should be noted that the main control unit110(FIG.7) and the drive device control unit300serve as a drive controller900for controlling the drive motors51R,51L. The drive controller900controls the drive motors51R,51L to achieve an acceleration appropriate for the accelerator operation amount Pa and a deceleration appropriate for the brake operation amount Pb. B. Second Embodiment FIG.12shows a perspective view of a vehicle in a second embodiment. In this embodiment, the vehicle10ais a four-wheel vehicle having two front wheels FRa, FLa and two rear wheels RRa, RLa. The two front wheels FRa, FLa are turn wheels, and can turn in the width direction of the vehicle10a.
The two rear wheels RRa, RLa are drive wheels. The vehicle10afurther has a vehicle body90a, suspensions FRs, FLs, RRs, RLs, a steering device42, a steering wheel42a, a drive motor51a, and a controller100a. The wheels FRa, FLa, RRa, RLa are coupled to the vehicle body90aby the suspensions FRs, FLs, RRs, RLs, respectively. The suspensions FRs, FLs, RRs, RLs may be a variety of suspensions such as double wishbone suspension or torsion beam suspension. A drive motor51ais connected to the rear wheels RRa, RLa. The rear wheels RRa, RLa are powered by the drive motor51ato rotate. The steering device42is connected to the front wheels FRa, FLa. The steering device42may be configured in a variety of ways such as rack-and-pinion type. The steering wheel42ais connected to the steering device42. The driver can turn the traveling direction of the front wheels FRa, FLa to right or left by rotating the steering wheel42a. The steering device42has a steering motor65a. The steering motor65agenerates a torque which assists in steering. The controller100acontrols the steering motor65aand the drive motor51a. A distance Lh is a so-called wheelbase. Again in this embodiment, the roll axis AxL is located on the ground GL at the center of the vehicle body90ain its width direction. Because the front wheels FRa, FLa are turn wheels, and the rear wheels RRa, RLa are not turn wheels, a rotation center Rac is located in the proximity of the center between the rear wheels RRa, RLa. The gravity center90acof the vehicle body90ais located on the front direction DF side of the rotation center Rac. A distance X is a distance in the front direction DF between the rotation center Rac and the gravity center90acof the vehicle body90a. A distance Z is a distance between the roll axis AxL and the gravity center90ac. The distance Z is the same as a height of the gravity center90acfrom the ground. The vehicle10ahas, as a control-related configuration, a configuration obtained by modifying that ofFIG.7as follows. 1) The drive device control unit300controls the drive motor51a. 2) The steering motor control unit500controls the steering motor65ainstead of the steering motor65. 3) The lean motor control unit400and the lean motor25are omitted. FIG.13is a flowchart showing an example control process of the steering motor65a. In this embodiment, the controller100acontrols the steering motor65aso that a change in the wheel angle Aw results in a roll torque which makes the angular acceleration Ar″ (referred to as roll angular acceleration Ar″) of the roll angle Ar smaller. Again in this embodiment, predetermined values are used as the mass M of the vehicle body90a, the acceleration of gravity g, the distance X, the distance Z, and the wheelbase Lh. The wheel angle Aw is an angle of direction of the front wheel (e.g. right front wheel FRa or left front wheel FLa) relative to the front direction DF of the vehicle10a. In S210, the processor110pof the main control unit110(FIG.7) acquires data from the sensors122,123,124,126,145,146. The processor110pthen determines current information, in particular, the velocity V, input angle AI, wheel angle Aw, roll angle Ar, yaw angular velocity Ay′, accelerator operation amount Pa, brake operation amount Pb. In S220a, the processor110pcalculates the angular acceleration Ar″ of the roll angle Ar. Initially, the processor110puses the roll angle Ar to calculate the angular velocity Ar′.
The method of calculating the angular velocity Ar′ (more specifically, the method of calculating derivative values of parameters) may include a variety of methods. In this embodiment, the processor110pcalculates a difference between the current roll angle Ar and the roll angle Ar at a point of time earlier than the current time by a predetermined time difference, by subtracting the latter from the former. The processor110pthen employs as the angular velocity Ar′ a value obtained by dividing the difference by the time difference. The processor110puses the angular velocity Ar′ to calculate, in the same manner, the angular acceleration Ar″, which is a derivative value of the angular velocity Ar′. In S230a, the processor110pdetermines a target roll torque Tqt for making the roll angular acceleration Ar″ smaller. Equation B1 below is a formula of the roll torque Tqr which acts on the vehicle body90awhen the roll angular acceleration is Ar″. Tqr=(I+M*Z²)*Ar″ (Equation B1) The roll torque Tqr is approximated by the sum of two components [I*Ar″] and [M*Z²*Ar″]. The variable I is an inertia moment of the vehicle body90when the rotational axis passes through the gravity center90c(where the rotational axis is parallel to the roll axis AxL). [M*Z²] is an additional term when the rotational axis is away from the gravity center90cby the distance Z. The coefficient [I+M*Z²] is determined in advance by experimentally measuring a ratio of the roll torque Tqr to the roll angular acceleration Ar″. The processor110puses Equation B1 described above to calculate the roll torque Tqr which acts on the vehicle body90awhen the roll angular acceleration of the vehicle body90ais Ar″. The processor110pthen employs as the target roll torque Tqt a roll torque obtained by inverting the direction of the roll torque Tqr. FIG.14(A)is a graph showing an example relationship between the roll angular acceleration Ar″ and the target roll torque Tqt. The horizontal axis represents the roll angular acceleration Ar″, and the vertical axis represents the target roll torque Tqt. At the origin O, Ar″=0, and Tqt=0. As shown, the larger the absolute value of the roll angular acceleration Ar″ is, the larger the absolute value of the target roll torque Tqt is. The direction (i.e. positive/negative sign) of the target roll torque Tqt is opposite to the direction (i.e. positive/negative sign) of the roll angular acceleration Ar″. In S235a(FIG.13), the processor110pdetermines the angular velocity of the wheel angle Aw (referred to as additional angular velocity Awd′) required to generate the target roll torque Tqt. The additional angular velocity Awd′ is calculated by substituting the target roll torque Tqt for the first type roll torque Tqa in Equation A8 described above. FIG.14(B)-FIG.14(D)are graphs showing examples of the additional angular velocity Awd′. InFIG.14(B), the horizontal axis represents the absolute value of the target roll torque Tqt, and the vertical axis represents the absolute value of the additional angular velocity Awd′. As shown, the larger the absolute value of Tqt is, the larger the absolute value of Awd′ is. When the target roll torque Tqt is constant, the additional angular velocity Awd′ can change depending on the velocity V and the wheel angle Aw. It should be noted that in this embodiment, the processor110psets a fourth upper limit Lm4to the absolute value of Awd′. InFIG.14(C), the horizontal axis represents the velocity V, and the vertical axis represents the absolute value of the additional angular velocity Awd′.
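As an illustrative aside, S220athrough S235acan be sketched as follows. This is not part of the embodiment; the inertia, mass, geometry, sampling interval, and clamp values are arbitrary assumptions, and only the finite-difference estimate, Equation B1 with the inverted sign, and Equation A8 with the clamp follow the description above.

import math

def roll_angular_acceleration(ar_samples, dt=0.01):
    """S220a: finite-difference estimates of Ar' and Ar'' from successive
    roll angle samples (radians); dt is an assumed sampling interval."""
    ar_dot = [(b - a) / dt for a, b in zip(ar_samples, ar_samples[1:])]
    ar_ddot = [(b - a) / dt for a, b in zip(ar_dot, ar_dot[1:])]
    return ar_ddot[-1]

def target_roll_torque(ar_ddot, inertia=40.0, m=350.0, z=0.5):
    """S230a: Equation B1, Tqr = (I + M*Z^2)*Ar'', and Tqt = -Tqr."""
    return -(inertia + m * z ** 2) * ar_ddot

def additional_angular_velocity(tqt, aw_rad, v, lh=2.5, m=350.0, x=1.0, z=0.5,
                                lm4=math.radians(20.0)):
    """S235a: Equation A8 with Tqt substituted for Tqa, clamped to Lm4."""
    awd = (tqt * lh * math.cos(aw_rad) ** 2) / (m * x * z * max(v, 1e-3))
    return max(-lm4, min(lm4, awd))

ar_ddot = roll_angular_acceleration([0.000, 0.001, 0.003])  # roll angle increasing
tqt = target_roll_torque(ar_ddot)
print(ar_ddot, tqt, additional_angular_velocity(tqt, aw_rad=0.0, v=10.0))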
The larger the velocity V is, the smaller the absolute value of Awd′ is. In this embodiment, the absolute value of Awd′ is inversely proportional to V, as also indicated in Equation A8 described above. In order to prevent the additional angular velocity Awd′ from diverging when the velocity V is smaller, the absolute value of Awd′ is limited to the fifth upper limit Lm5. InFIG.14(D), the horizontal axis represents the absolute value of the wheel angle Aw, and the vertical axis represents the absolute value of the additional angular velocity Awd′. The larger the absolute value of the wheel angle Aw is, the smaller the absolute value of Awd′ is. In this embodiment, as the absolute value of Aw increases, the absolute value of Awd′ decreases according to cos2(Aw), as also indicated in Equation A8 described above. In S240a(FIG.13), the processor110pdetermines a control parameter. In this embodiment, the processor110pdetermines a P gain Gp2for proportional control (sometimes referred to as second gain Gp2). It should be noted that the processor110pperforms S220a-S235aand S240ain parallel. In S270a, the processor110pthen determines a control value Cw2through the proportional control using the additional angular velocity Awd′ and the second gain Gp2(e.g. Cw2=Awd′*Gp2). It should be noted that in this embodiment, the second gain Gp2is determined to be a predetermined value. Instead, the second gain Gp2may be a variable value that varies depending on another parameter. In S280a, the processor110pprovides data indicative of the control value Cw2to the steering motor control unit500. The processor500pof the steering motor control unit500controls the electric power to be supplied to the steering motor65aaccording to the control value Cw2. Specifically, the processor500pprovides the data indicative of the control value Cw2to the electric power control module500c. The electric power control module500ccontrols the electric power to be supplied to the steering motor65aaccording to the control value CW2. The steering motor65aoutputs the turning torque according to the supplied electric power. Then, the process ofFIG.13ends. The controller100arepeatedly performs the process ofFIG.13. As such, the controller100acontinuously controls the steering motor65ato output the turning torque which suppresses the roll angular acceleration Ar″. When the vehicle10atravels on a rough road, oscillation from side to side (i.e. roll oscillation) of the vehicle body90ais suppressed. As discussed above, the control value Cw2indicates the turning torque mapped to the additional angular velocity Awd′ (S270a). A parameter Tqa mapped to the additional angular velocity Awd′ according to Equation A8 indicates the first type roll torque Tqa to be generated due to the additional angular velocity Awd′. In S235a, the target roll torque Tqt is used as the parameter Tqa indicative of the first type roll torque to calculate the additional angular velocity Awd′ according to Equation A8. Therefore, the target roll torque Tqt indicates the target torque of the first type roll torque. In S230a, a roll torque obtained by inverting the direction of the roll torque Tqr which acts on the vehicle body90awhen the roll angular acceleration is Ar″ is used as the target roll torque Tqt. As indicated in Equation B1 described above, the magnitude of the target roll torque Tqt (i.e. the magnitude of the roll torque Tqr) increases with an increase in the magnitude of the roll angular acceleration Ar″. The direction of the target roll torque Tqt (i.e. 
a direction opposite to the roll torque Tqr) is opposite to the direction of the roll angular acceleration Ar″. In this manner, the roll angular acceleration Ar″ indicates the reference roll torque which is a reference of the first type roll torque to be generated due to the additional angular velocity Awd′. The magnitude of the roll angular acceleration Ar″ indicates a reference magnitude which is the magnitude of the reference roll torque. The direction opposite to the direction of the roll angular acceleration Ar″ indicates a reference direction which is the direction of the reference roll torque. The roll angular acceleration Ar″ is an example of reference information which indicates the reference direction as a reference of direction and the reference magnitude as a reference of magnitude for the first type roll torque to act on the vehicle body90a(hereinafter, the roll angular acceleration Ar″ may be referred to as reference information Ar″). The controller100acontrols the steering motor65aaccording to the control value Cw2to be determined using the reference information Ar″. As such, the steering motor65agenerates the turning torque so that the direction of the first type roll torque is the same as the reference direction, and the magnitude of the first type roll torque increases with an increase in the reference magnitude. If the steering motor65ais controlled according to the control value Cw2, the roll angular acceleration Ar″ is suppressed from increasing, and thus the roll angle Ar is suppressed from changing. FIG.14(E)-FIG.14(G)are graphs showing examples of the turning torque Tqw to be controlled in the process ofFIG.13. InFIG.14(E), the horizontal axis represents the absolute value of the control value Cw2, and the vertical axis represents the absolute value of the turning torque Tqw. The absolute value of the turning torque Tqw increases with an increase in the absolute value of the control value Cw2. It should be noted that in this embodiment, the processor110pmodifies the absolute value of the control value Cw2to a predetermined upper limit CwM2in S280aofFIG.13if the absolute value of the control value Cw2is equal to or larger than the upper limit CwM2. Accordingly, the absolute value of the turning torque Tqw is limited to an upper limit Lm6mapped to the upper limit CwM2. As a result, the wheel angle Aw is suppressed from changing rapidly. InFIG.14(F), the horizontal axis represents the roll angular acceleration Ar″, and the vertical axis represents the turning torque Tqw. At the origin O, Ar″=0, and Tqw=0. In this figure, assume that the velocity V, input angle AI, wheel angle Aw, and yaw angular velocity Ay′ each are constant. Such a condition can be reproduced by placing the vehicle10aon the turntable, as in the condition ofFIG.10(B). The absolute value of the target roll torque Tqt to be determined in S280a(FIG.13) increases with an increase in the absolute value of the roll angular acceleration Ar″. Accordingly, the larger the absolute value of the roll angular acceleration Ar″ is, the larger the absolute value of the turning torque Tqw is also (however, the absolute value of the turning torque Tqw is limited to the upper limit Lm6). In addition, the roll angular acceleration Ar″ being a positive value indicates that the reference roll direction is the left direction opposite to the right direction which is the direction of the roll angular acceleration Ar″. 
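The chain from the roll angular acceleration Ar″ to the control value Cw2 can be summarized in the rough sketch below. Equation A8 itself is not reproduced in this passage, so the lumped coefficient and the exact expression used here are assumptions chosen only to reproduce the stated tendencies (|Awd′| grows with |Tqt|, is inversely proportional to V, decreases according to cos2(Aw), and is limited to an upper limit); the gain Gp2, the upper limits, and the lower velocity bound are likewise placeholders:

```python
import math

# Hedged sketch only: not the patent's formula. K_A8 stands in for the constants
# of Equation A8, AWD_LIMIT for the upper limits Lm4/Lm5, and CWM2 for the upper
# limit applied to |Cw2| in S280a. Angles are in radians.

K_A8 = 1.0       # assumed lumped coefficient of Equation A8
V_MIN = 0.5      # assumed lower bound on V to keep Awd' from diverging at low speed
AWD_LIMIT = 2.0  # assumed upper limit on |Awd'|
GP2 = 0.8        # assumed second gain Gp2 (a predetermined value in this embodiment)
CWM2 = 1.5       # assumed upper limit CwM2 on |Cw2|

def additional_angular_velocity(tqt: float, v: float, aw: float) -> float:
    """S235a: additional angular velocity Awd' needed to generate the target roll torque Tqt."""
    awd = tqt * math.cos(aw) ** 2 / (K_A8 * max(v, V_MIN))
    return max(-AWD_LIMIT, min(AWD_LIMIT, awd))

def control_value_cw2(awd: float) -> float:
    """S270a/S280a: proportional control Cw2 = Awd' * Gp2, with |Cw2| limited to CwM2."""
    cw2 = awd * GP2
    return max(-CWM2, min(CWM2, cw2))
```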
As can be understood fromFIG.6(B),FIG.6(C), when turning the direction of the front wheels FRa, FLa to the right direction DR, the direction of the first type roll torque Tqa is the left direction DL. Accordingly, in order to generate the first type roll torque Tqa of the left direction DL, a positive turning torque Tqw is generated that turns the direction of the front wheels FRa, FLa to the right direction DR. In contrast, when the roll angular acceleration Ar″ is a negative value, a negative turning torque Tqw is generated. InFIG.14(G), the horizontal axis represents the absolute value of the wheel angle Aw, and the vertical axis represents the absolute value of the turning torque Tqw. This graph illustrates characteristics under the condition (referred to as first condition) that each of the velocity V and the roll angular acceleration Ar″ (i.e. the reference direction and reference magnitude) is maintained constant (the absolute value of the roll angular acceleration Ar″ is larger than zero). In order to identify the relationship between the wheel angle Aw and the turning torque Tqw, assume that the other parameters (e.g. AI, Ay′) are constant. The wheel angle Aw is variable. In order to realize such a condition, the vehicle10ais placed on the turntable described with regard toFIG.10(C). As shown, even if the roll angular acceleration Ar″ is constant, the absolute value of the turning torque Tqw decreases as the absolute value of the wheel angle Aw increases. This reason is that the angular velocity Aw′ of the wheel angle Aw (i.e. the turning torque Tqw) decreases according to cos2(Aw) as indicated in Equation A8 described above. In this manner, because the turning torque Tqw is controlled according to Equation A8, the controller100acan make the first type roll torque due to the angular velocity Aw′ close to the reference torque. As discussed above, the controller100aperforms the process ofFIG.13to control the steering motor65aso that the roll angular acceleration Ar″ decreases. As a result, oscillation from side to side (i.e. roll oscillation) of the vehicle body90ais suppressed. In addition, as described with reference toFIG.6(B),FIG.6(C), the first type roll torque Tqa obtained using the angular velocity Aw′ of the wheel angle Aw is generated using the inertial force F12in the direction opposite to that of the yaw angular acceleration Ay″. Accordingly, when the first type roll torque Tqa is applied to the vehicle body90a, the lateral acceleration which the driver feels is suppressed. C. Third Embodiment In the above-mentioned embodiments, the front wheels12F, FRa, FLa are turn wheels. Instead, the rear wheels may be turn wheels.FIG.15(A),FIG.15(B)are explanatory diagrams of the roll torques Tq1, Tqa when the rear wheel is a turn wheel.FIG.15(A),FIG.15(B)are explanatory diagrams similar toFIG.6(B),FIG.6(C). The vehicle10ain this embodiment has two front wheels (right front wheel FR and left front wheel FL) and one rear wheel RR. When the vehicle10bturns to the right direction DR, the rear wheel RR turns to the left direction DL. InFIG.15(A), a rotation center Rbc is shown. In this embodiment, the front wheels FR, FL are not turn wheels, and the rear wheel RR is a turn wheel. Accordingly, the rotation center Rbc is located in the proximity of the center between the front wheels FR, FL. The gravity center90bcof the vehicle body is located away from the rotation center Rbc toward the back direction DB side. 
A distance X in this figure represents a distance in the front direction DF between the gravity center90bcand the rotation center Rbc. The gravity center90bcof the vehicle body is located away from the rotation center Rbc by the distance X toward the back direction DB side. Accordingly, the vehicle body is subject to an inertial force component F12in the same direction as that of the yaw angular acceleration Ay″. The direction of the inertial force component F12is perpendicular to the vehicle body upward direction DVU. Also, in this embodiment, the direction from the rotation center Rbc to the gravity center90bcis approximately parallel to the back direction DB in the top view ofFIG.15(A). Accordingly, the direction of the inertial force component F12is approximately perpendicular to the back direction DB. In the top view ofFIG.15(A), the direction of the yaw angular acceleration Ay″, i.e. the direction of change in the yaw angular velocity Ay′, is clockwise. In this case, the direction of the inertial force component F12faces the right direction DR side. The formula for calculating the magnitude of the inertial force component F12is the same as the formula inFIG.6(B). InFIG.15(B), the inertial force component12is shown. It is different fromFIG.6(B),FIG.6(C)only in that the direction of the inertial force component F12(i.e. the direction of the roll torque Tq1) is an opposite direction. In this manner, when the rear wheel RR is a turn wheel, the direction of the roll torque Tq1is the same as that of the yaw angular acceleration Ay″. In addition, as can be understood fromFIG.15(A),FIG.15(B), when the rear wheel RR turns to the left direction DL, the direction of the first type roll torque Tqa is the right direction DR. Accordingly, in order to generate the first type roll torque Tqa of the right direction DR, the angular velocity Aw′ is used that turns the rear wheel RR to the left direction DL. Again in this embodiment, the turning torque of the turn wheel may be controlled according to the process inFIG.8orFIG.13. In so doing, the direction of the roll torque Tq1, Tqa described above is accounted for. D. Modifications (1) The control process of the turning actuator65,65amay be a variety of other processes instead of the control processes in the embodiments ofFIG.8,FIG.13. For example, the control processes of the above embodiments involve the process of determining an output parameter from an input parameter through the proportional control (e.g. S250(FIG.8), S270a(FIG.13), etc.). Instead of the proportional control, a variety of controls may be employed (e.g. PD (Proportional-Differential) control or PID (Proportional-Integral-Differential) control). In addition, the magnitude of the turning torque Tqw is determined using Equation A8 described above in each of the above embodiments. Accordingly, the magnitude of the turning torque Tqw increases with an increase in the reference magnitude indicated by the reference information (e.g. roll angle difference dAr or roll angular acceleration Ar″). The magnitude of the turning torque Tqw decreases with an increase in the velocity V. The magnitude of the turning torque Tqw decreases with an increase in the wheel angle Aw. The relationship between the magnitude of the turning torque Tqw and the other parameters (e.g. velocity V, wheel angle Aw, etc.) may differ from those ofFIG.10(A)-FIG.10(C),FIG.14(E)-FIG.14(G). 
For example, as the magnitude of the wheel angle Aw increases, the magnitude of the turning torque Tqw may decrease linearly relative to a change in the magnitude of the wheel angle Aw. In addition, the magnitude of the turning torque Tqw may be smaller relative to the reference magnitude. For example, in the embodiments ofFIG.12,FIG.13, the magnitude of the turning torque Tqw may be set to a smaller value to prevent the front wheels FR, FL from moving significantly against a force with which the driver holds the steering wheel42a. (2) In order to determine the turning torque (e.g. control value Cw, Cw2), a variety of other parameters may be used instead of the parameters shown inFIG.8,FIG.13. For example, the following three roll torques are roll torques which act on the vehicle body depending on the condition of the vehicle.
1) Roll torque due to the gravity which acts on the vehicle body
2) Roll torque due to the centrifugal force which acts on the vehicle body
3) Roll torque due to the yaw angular acceleration of the vehicle (FIG.6(B),FIG.6(C))
One or more parameters selected from these three roll torques may be used to determine the turning torque. The turning torque may be configured to generate a remaining roll torque obtained by subtracting these roll torques from the reference roll torque. In order for the vehicle to transition from its straight forward movement to its turning movement, the vehicle body rolls quickly to the turning direction. In this case, the lower portion of the vehicle body can move to a direction opposite to the turning direction because the gravity center of the vehicle body cannot move quickly. For example, the intersection point P2between the turning axis Ax1of the turn wheel (in this case, the front wheel12F) and the ground GL inFIG.1(A)can move to the direction opposite to the turning direction. As a result, if the vehicle has a positive trail Lt, the turn wheel can turn to the direction opposite to the turning direction. As such, the processor110pmay determine the final control value Cw, Cw2using a control value indicative of a component of turning torque which causes the turn wheel to turn to the turning direction when the vehicle body rolls quickly. Such a control value may be obtained by multiplying any of the following parameters by a gain.
1) The angular velocity Aw′ of the wheel angle Aw
2) The torque of the lean motor25
3) The angular velocity Ar′ of the roll angle Ar
4) The angular acceleration Ar″ of the roll angle Ar
5) The angular velocity AI′ of the input angle AI
6) The angular acceleration AI″ of the input angle AI
When the magnitudes of these parameters are larger, the vehicle body rolls quickly, and therefore these parameters are suited for determining the control value. It should be noted that as discussed above, when the velocity V is larger, the turn wheel can turn to the roll direction due to the gyroscopic moment. Accordingly, the gain is preferably larger when the velocity V is smaller. As discussed above, the gyroscopic moment causes the turning torque to act on the rotating wheel. The processor110pmay use this turning torque to correct the turning torque Tqw of the steering motor65,65a. The turning torque due to the gyroscopic moment can be calculated using the velocity V and the roll angle Ar, for example. In addition, when the wheel is leaning to right or left, a so-called camber thrust acts on the wheel. Accordingly, the camber thrust causes the turning torque to act on the wheel.
The processor110pmay use this turning torque to correct the turning torque Tqw of the steering motor65,65a. The turning torque due to the camber thrust can be calculated using the velocity V and the roll angle Ar, for example. (3) The method of determining the turning torque from the additional angular velocity Awd′ may be a variety of other methods instead of the methods ofFIG.8,FIG.13. For example, the processor110pmay determine the target wheel angle by integrating the additional angular velocity Awd′. The processor110pthen may control the steering motor65so that the current wheel angle Aw approaches the target wheel angle. (4) The method of setting the upper limit to the turning torque may be a variety of methods. For example, in the example ofFIG.9(F), when the velocity V is equal to or smaller than the threshold VL, the absolute value of the additional angular velocity Awd′ is limited to the first upper limit Lm1. Instead, when the velocity V is equal to or smaller than the threshold VL, the processor110pmay control the steering motor65,65aassuming that the velocity V is equal to the threshold VL. (5) The target roll angle Art (FIG.8: S220) may be determined using another piece of information (e.g. the velocity V) in addition to the input angle AI. (6) A measured value may be used as the mass M of the vehicle body instead of the predetermined value. The vehicle body10(FIG.1(A)) may include a sensor for measuring the mass M of the vehicle body90. Such a sensor may be a sensor which detects a stroke position of the right suspension70R (FIG.2), for example. The larger the mass M of the vehicle body90is, the shorter the entire length of the right suspension70R is. Accordingly, the stroke position is a parameter which is correlated with the mass M. The processor110pmay determine the entire length from the stroke position to estimate the mass M from the determined entire length. (7) A measured position may be used as the position of gravity center of the vehicle body instead of the predetermined position. For example, the vehicle10(FIG.1(A)) may include a front sensor for measuring a stroke position of the front fork17and a rear sensor for detecting a stroke position of the right suspension70R (FIG.2). If the gravity center is located on the front direction DF side, a larger load is applied on the front fork17, and the entire length of the front fork17decreases accordingly. If the gravity center is located on the back direction DB side, a larger load is applied on the right suspension70R, and the entire length of the right suspension70R decreases accordingly. The processor110pcan use the entire lengths of the front fork17and right suspension70R to estimate the position of gravity center in the front direction DF. The processor110pcan use the estimated position of gravity center to calculate the distance X (FIG.6(E), etc.) between the rotation center and the gravity center. A predetermined position may be used as the rotation center. Alternatively, the processor110pmay estimate the distance Z of the gravity center by oscillating the vehicle body to right and left. For example, the processor110pcauses the lean motor25to output a torque which rolls the vehicle body. If the distance Z is shorter, the roll angle Ar changes quickly. If the distance Z is longer, the roll angle Ar changes slowly. In this manner, it can be presumed that the larger the angular velocity Ar′ or angular acceleration Ar″ of the roll angle Ar resulting from the constant torque is, the shorter the distance Z is. 
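As a hedged illustration of modifications (6) and (7): the embodiment states only the monotonic relationships (a heavier vehicle body shortens the suspension, and a shorter distance Z makes the roll angle respond more quickly to a constant lean-motor torque), so the linear forms and calibration constants in the sketch below are assumptions:

```python
# Hypothetical estimation sketch; constants are placeholders, not patent values.

SUSP_FREE_LENGTH = 0.40   # assumed suspension length with no load [m]
SUSP_RATE = 20000.0       # assumed stiffness used to convert compression into load [N/m]
G = 9.81                  # gravitational acceleration [m/s^2]

def estimate_mass(suspension_length: float) -> float:
    """Modification (6): the shorter the right suspension, the larger the estimated mass M."""
    compression = max(0.0, SUSP_FREE_LENGTH - suspension_length)
    return SUSP_RATE * compression / G

def estimate_distance_z(applied_roll_torque: float, roll_accel: float,
                        k_cal: float = 1.0) -> float:
    """Modification (7): a larger roll response Ar'' to a constant torque implies a
    shorter distance Z; an inverse relationship with calibration gain k_cal is assumed."""
    return k_cal * applied_roll_torque / max(abs(roll_accel), 1e-6)
```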
(8) The data indicative of a parameter (e.g. physical quantity such as the velocity V) used for the control may be a variety of data correlated with the parameter. For example, the vehicle velocity sensor122outputs data indicative of the rotational rate of the front wheel12F as data indicative of the velocity V. (9) The direction sensor126(FIG.1(A)) may output data indicative of yaw angular velocity about an axis parallel to the vertically upward direction DU (FIG.6(D). In this case, the processor110pcan use the roll angle Ar to correct a deviation between the magnitude of the yaw angular velocity relative to the vertically upward direction DU and the magnitude of the yaw angular velocity Ay′ relative to the vehicle body upward direction DVU. Alternatively, the direction sensor126may output data indicative of yaw angular acceleration instead of the yaw angular velocity. In this case, the processor110pmay determine the yaw angular velocity by integrating the yaw angular acceleration. (10) The method of defining a correspondence relationship between one or more control parameters (such as the velocity V, input angle AI) and the control value Cw, Cw2(i.e. the turning torque) may be any other method instead of the method involving the above-mentioned calculation. For example, map data may be provided in advance that defines the correspondence relationship between the one or more control parameters and the control value Cw, Cw2. The processor110pmay reference this map data to identify the control value Cw, Cw2. (11) The reference information may be a variety of information which indicates a reference direction as a reference of direction and a reference magnitude as a reference of magnitude for the first type roll torque to act on the vehicle body, instead of the roll angle difference dAr and the roll angular acceleration Ar″. In addition, the method of determining the reference information may be a variety of methods. For example, the vehicle may include an automatic driving control device (e.g. computer) which automatically drives the vehicle. The automatic driving control device may determine a target turning radius according to a current location of the vehicle on a predetermined travel route. The processor110puses the target turning radius and the current velocity V to calculate the target roll angle Ar according to Equation 6 described above. The processor110pthen may use the target roll angle Ar and the current roll angle Ar to determine the roll angle difference dAr (i.e. the reference information dAr). (12) The force generator configured to generate a force which changes the yaw angular acceleration may be any other device instead of the steering motor65,65a. For example, the force generator may be a fan device which produces airflow flowing to right or left relative to the vehicle body. The drive system51S (FIG.2) (i.e. the drive motors51R,51L) can also change the yaw angular acceleration by controlling a ratio of torque between the right rear wheel12R and the left rear wheel12L (such a control of torque ratio is also referred to as torque vectoring). Also, if the vehicle10includes a brake device for the right rear wheel12R and a brake device for the left rear wheel12L, these brake devices can change the yaw angular acceleration by controlling a ratio of braking force between the right rear wheel12R and the left rear wheel12L. The force generator may include one or more types of devices (such as the steering motor65, the drive system51S, the brake device). 
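Returning to modification (10), the map-data approach can be sketched as follows; the breakpoints and values are arbitrary placeholders, and a real map could be indexed by several control parameters (e.g. the velocity V and the input angle AI) rather than the single scalar used here:

```python
import bisect

# Hedged sketch of referencing map data to identify the control value Cw/Cw2.
# The table contents are invented for illustration only.

MAP_PARAM = [0.0, 2.0, 5.0, 10.0, 20.0]   # assumed breakpoints of a control parameter
MAP_VALUE = [0.0, 0.8, 1.2, 0.9, 0.5]     # assumed control values at those breakpoints

def lookup_control_value(param: float) -> float:
    """Return the control value for `param` by linear interpolation of the map data."""
    if param <= MAP_PARAM[0]:
        return MAP_VALUE[0]
    if param >= MAP_PARAM[-1]:
        return MAP_VALUE[-1]
    i = bisect.bisect_right(MAP_PARAM, param)
    x0, x1 = MAP_PARAM[i - 1], MAP_PARAM[i]
    y0, y1 = MAP_VALUE[i - 1], MAP_VALUE[i]
    return y0 + (y1 - y0) * (param - x0) / (x1 - x0)
```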
In addition, the force controller may include one or more types of controllers for controlling the one or more types of force generators, respectively. (13) The configuration of the lean device configured to lean the vehicle body in the width direction of the vehicle may be a variety of other configurations instead of the configuration of the link mechanism30(FIG.2). For example, the link mechanism30may be substituted with a pedestal. The motor51L,51R are secured to the pedestal. And, the first support portion82is coupled to the pedestal via a bearing rotatably in the width direction. The lean motor25rotates the first support portion82in the width direction relative to the pedestal. This enables the vehicle body90to lean to each of the right direction DR side and the left direction DL side. Alternatively, the lean device may include a left sliding device and a right sliding device (e.g. hydraulic cylinder). The left sliding device may connect the left rear wheel12L and the vehicle body, and the right sliding device may connect the right rear wheel12R and the vehicle body. Each sliding device can change the position of the wheel relative to the vehicle body in the vehicle body upward direction DVU. (14) A variety of configurations may be employed as the total number and arrangement of the plurality of wheels. For example, the plurality of wheels may include a pair of wheels spaced apart from each other in the width direction of the vehicle. A front wheel (e.g. the front wheel12F ofFIG.1(A)) may be a drive wheel. The total number of turn wheel(s) may be any number equal to or larger than one. At least one of front wheel(s) or rear wheel(s) may include turn wheel(s). Both the front wheel(s) and the rear wheel(s) may be turn wheels. The turn wheels may include a pair of wheels spaced apart from each other in the width direction of the vehicle. (15) The configuration of turn wheel support device for supporting the turn wheel may be a variety of other configuration instead of the configuration of the front wheel support device41described with reference toFIG.1(A)etc. For example, the supporting member which rotatably supports the turn wheel may be a cantilevered member instead of the fork17. In addition, the turning device that supports the supporting member turnably in the width direction relative to the vehicle body may be a variety of other devices instead of the bearing68. For example, the turning device may be a link mechanism coupling the supporting member to the vehicle body. In general, the turn wheel support device may be a variety of devices which support the turn wheel so that the direction of the turn wheel can turn in the width direction of the vehicle. The turn wheel support device may include K (K is an integer equal to or larger than 1) supporting members. Each supporting member may rotatably support one or more turn wheels. The turn wheel support device may include K turning devices secured to the vehicle body. The K turning devices may support the K supporting members turnably in the width direction, respectively. (16) The configuration of turning actuator may be a variety of configurations configured to apply a turning torque, which is a torque for controlling the turn of the turn wheel in the width direction, on the turn wheel, instead of the configuration of the steering motor65(FIG.1). For example, the turning actuator may include a pump, and may use fluid pressure (e.g. oil pressure) from the pump to generate the turning torque. 
In any case, the turning actuator may be configured to apply the turning torque on each of the K supporting members. For example, the turning actuator may be coupled to each of the K supporting members. (17) The configuration of the controller100may be a variety of configurations which include a force controller configured to control a force generator (e.g. the steering motor65,65a). For example, the controller100may be configured using a single computer. At least part of the controller100may be configured with dedicated hardware such as ASIC (Application Specific Integrated Circuit). For example, the steering motor control unit500inFIG.7may be configured with an ASIC. The controller100may be an electric circuit with a computer, or may be an electric circuit without any computer instead. In addition, input values and output values mapped by map data (e.g. the map data MAr, etc.) may be mapped by any other element. For example, an element such as mathematical function, analog electric circuit, etc. may map the input values to the output values. (18) The configuration of vehicle may be a variety of other configurations instead of the configurations in the embodiments. For example, the drive device for driving the drive wheels may include at least one of electric motor or internal combustion engine. The maximum riding capacity of the vehicle may be two or more persons instead of one person. The vehicle may be an apparatus which travels without at least one of person or load. The vehicle may be an apparatus which travels via remote control. The correspondence relationship used to control the vehicle (e.g. the correspondence relationship represented by the map data) may be determined experimentally to allow the vehicle to travel properly. In each embodiment described above, some of the components which are achieved by hardware may be substituted with software while some or all of the components which are achieved by software may be substituted with hardware. For example, the function of the controller100inFIG.7may be achieved by a dedicated hardware circuitry. In addition, if some or all of the functions of the present disclosure are achieved by a computer program, the program can be provided in the form of a computer-readable storage medium (e.g. non-transitory storage medium) having the program stored therein. The program can be used while being stored in a storage medium (computer-readable storage medium) which is the same as or different from the provided storage medium. The “computer-readable storage medium” is not limited to a portable storage medium such as memory card or CD-ROM, but may also include an internal storage within the computer such as various types of ROM, and an external storage connected to the computer such as hard disk drive. The present disclosure has been described above with reference to the embodiments and the modifications although the above-described embodiments are intended to facilitate the understanding of the disclosure, but not to limit the disclosure. The present disclosure may be modified or improved without departing from the spirit of the disclosure, and includes its equivalents. INDUSTRIAL APPLICABILITY The present disclosure can be preferably used for a vehicle. 
DESCRIPTION OF THE REFERENCES
10, 10a, 10b: vehicle
11: seat
12F, FRa, FLa, FR, FL: front wheel
12L: left rear wheel
12R: right rear wheel
RRa, RLa, RR: rear wheel
17: front fork
20: main body
20a: front wall portion
20b: bottom portion
20c: rear wall portion
20d: support portion
21: center longitudinal link member
25: lean motor
30: lean device (link mechanism)
31D: lower lateral link member
31U: upper lateral link member
33L: left longitudinal link member
33R: right longitudinal link member
38: bearing
39: bearing
41: front wheel support device
41a: steering wheel
42: steering device
42a: steering wheel
45: accelerator pedal
46: brake pedal
51L: left drive motor
51R: right drive motor
51S: drive system
51a: drive motor
65: turning actuator
65, 65a: steering motor
68: bearing
70: suspension system
70L: left suspension
70R: right suspension
71R, 71L: coil spring
72R, 72L: shock absorber
75: connector rod
80: rear wheel support
82: first support portion
83: second support portion
90, 90a: vehicle body
90c, 90ac, 90bc: gravity center
100, 100a: controller
110: main control unit
110p, 300p, 400p, 500p: processor
110v, 300v, 400v, 500v: volatile memory
110n, 300n, 400n, 500n: non-volatile memory
110g, 300g, 400g, 500g: program
300c, 400c, 500c: electric power control module
120: battery
122: vehicle velocity sensor
123: input angle sensor
124: wheel angle sensor
126: direction sensor
126a: acceleration sensor
126c: control unit
126g: gyroscope sensor
145: accelerator pedal sensor
146: brake pedal sensor
300: drive device control unit
400: lean motor control unit
500: steering motor control unit
900: drive controller
910: turn controller
Axw1: rotational axis
Axw2: rotational axis
Axw3: rotational axis | 112,904
11858570 | DETAILED DESCRIPTION Hereinafter, the present disclosure will be described with reference to the accompanying drawings. However, the present disclosure can be implemented in a variety of different forms, and therefore, should not be limited to the embodiments described herein. In the following description, parts that are irrelevant to the present disclosure are omitted to clearly describe the disclosure, and the same or similar elements are denoted with the same or similar reference numerals throughout the description. Throughout the description, when a portion is described as being “connected (joined, contacted, coupled)” to another portion, it includes not only a circumstance when the portions are “directly joined”, but also a circumstance when the portions are “indirectly connected” via another member present therebetween. In addition, when a portion is described as “comprising (including)” an element, unless specified to the contrary, it intends to mean that the portion may additionally include another element, rather than excluding the same. The terms used herein are only for describing certain exemplary embodiments, and not intended to limit the scope of the disclosure. Unless otherwise specified, a singular expression includes a plural expression. The term “comprise” or “have” as used herein is intended to designate an existence of features, numbers, steps, operations, elements, components or a combination of these, and accordingly, this should not be understood as precluding an existence or a possibility of adding one or more of other features, numbers, steps, operations, elements, components or a combination of these. Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. FIG.1illustrates an autonomous mobile robot according to an embodiment of the present disclosure,FIG.2is a side view of the autonomous mobile robot according toFIG.1,FIGS.3and4illustrate an operation of a driving module of the autonomous mobile robot according toFIG.1,FIG.5is a front view of the autonomous mobile robot according toFIG.1,FIG.6is a rear view of the autonomous mobile robot according toFIG.1, andFIG.7is a top view of the autonomous mobile robot according toFIG.1. As illustrated, an autonomous mobile robot1may include an upper module100, a lower module200, and a driving module300. In the present disclosure, the autonomous mobile robot collectively refers to robots that are capable of autonomously generating a path without requiring a user to input a specific driving path and moving in indoor and outdoor environments, and can be used for purposes such as logistics, advertising, guidance, pets, security, cleaning, transportation, hobbies, and the like, without limitation. In addition, it should be understood that all cases when the user inputs the starting point and the destination, sets driving conditions (e.g., restriction of driving during certain hours), restricts or sets part of the driving path (e.g., restricts driving on automobile roads, intersections, or the like, or allows driving only under the user's supervision), or when the user controls the robot's driving in some driving paths, and the like also fall within the scope of the autonomous mobile robot of the present disclosure. The upper module100may have a cargo space provided therein, and may be provided with a cover130. The lower module200may be positioned under the upper module100and provide a driving force to the driving module300. 
The driving module300may be provided in the lower module200. The driving module300may include plural pairs of wheels301,302, and303that may asynchronously contact a road surface or ground to overcome a step or a stair. This will be explained in more detail as follows. The upper module100may include a main body110including the cargo space provided therein, and the cover130openably connected to an upper side of the main body110. An indicator light111may be provided on an upper circumference of the main body110. The indicator light111may be provided along the entire circumference of a top side of the main body110, and may be formed of a plurality of LEDs. The indicator light111may indicate a state of the autonomous mobile robot1to the outside. To this end, the indicator light111may have a plurality of compartments and express a plurality of colors. By using the indicator light111, it is possible to indicate that the autonomous mobile robot1is currently driving, or indicate a driving condition such as autonomous driving (automatic driving/manual driving), for example. In addition, a driving direction (forward movement, backward movement, left turn, right turn, turn in place, and the like) may be externally indicated, or a driving speed may be indicated (e.g., blue at low speed, red at high speed, and the like), or other various driving conditions or driving states may be externally indicated. A camera unit112for omni-directional monitoring may be provided on a front side of the main body110. The camera unit112may sense an obstacle in front. The camera unit112may include a camera or a distance sensor, and may further include lighting. In addition, a separate member may be included to prevent foreign substances, rainwater, or the like from coming into contact with the camera unit112or to remove foreign substances, rainwater, or the like adhered onto the camera unit112. The camera or distance sensor provided in the camera unit112may face forward and downward from the main body110, and may be provided to be drivable to change an installed direction, or to control a direction. A first protection part113, which may be transparent or translucent, may be provided on the front side of the main body110. As illustrated inFIG.1, the first protection part113may be provided along the entire front side of the main body110and extended partially to the lateral sides, but is not limited to such shape. A second protection part114, which may be transparent or translucent, may be provided on a rear side of the main body110. As illustrated inFIGS.1and6, the second protection parts114may be separately provided at corners where the rear side and the lateral sides of the main body110meet each other, respectively. A camera (not illustrated) may be installed inside the first protection part113and the second protection parts114. A plurality of cameras may be installed; for example, a total of four cameras may be installed, one at each corner of the main body110. As long as there is no problem in recognizing the external environment to detect dangerous substances or dangerous conditions, and setting the driving path, the installation positions of the cameras, how many of them are installed, and the like is not particularly limited. An infrared sensor115may be provided in the main body110. A plurality of infrared sensors115may be installed.
For example, as illustrated inFIGS.1,2,5, and the like, a total of eight sensors may be installed, including three on the front side of the main body110, three on the rear side of the main body110, and one on one lateral side and one on the other lateral side of the main body110. As long as there is no problem in recognizing surrounding objects, people, pets, and the like and detecting their movements, the installation positions of the infrared sensor115, how many of them are installed, and the like is not particularly limited. A display unit116may be provided on the lateral side of the main body110. The installation position of the display unit116is not particularly limited, and the driving state, driving conditions, and the like of the autonomous mobile robot1may be displayed to the outside through the display unit116. For example, it may display moving destination or current driving speed of the autonomous mobile robot1, whether or not cargo is included, and the like. In addition, through the display unit116, it is possible to advertise or convey information for various purposes. Although not illustrated, a cargo space may be provided inside the main body110. In order to provide such a cargo space, a basket for storing cargo may be provided inside the main body110while being spatially separated from various members inside the main body110. In addition, devices for insulation, refrigeration, and freezing purposes may be provided inside for transporting food or the like. In addition, a pressure sensor such as a load cell or the like may be provided inside the main body110to detect the presence or absence of cargo and control driving conditions according to the weight of the cargo. In addition, an internal camera for detecting the presence or absence of cargo, or displaying or transmitting the status and appearance of the cargo to the outside may be provided. In addition, a sealing member for preventing rainwater ingress may be provided on an inner upper end of the main body110. In addition, a rain gutter or a rainwater drainage pipe may be provided on the inner upper end of the main body110to let out the received rainwater. The cover130may be openably connected to the upper side of the main body110. For example, the cover130may be hinged to a portion of the front side of the main body110, and an actuator or other driving means may be provided to open and close the cover130. The cover130may include an antenna131for external communication or GPS connection, a LiDAR132that can precisely sense the surrounding environment and the movement of the autonomous mobile robot1, a microphone133, and a display panel134. In addition, a speaker or other members may be further provided. For example, by installing a sensing means such as a distance sensor, a camera, or the like, it is also possible to open the cover130after recognizing obstacles above the cover130and confirming that there is no problem in opening the cover130. A separate waterproof structure may be provided to prevent externally exposed parts such as the antenna131, the LiDAR132, the microphone133, the display panel134, and the like from exposure to the external environment such as rainwater to be specific. The microphone133may be a directional microphone, or may be a microphone that is capable of sensing the position of a sound source using a plurality of microphones. The display panel134is a member capable of exchanging information with a user through the display, and may be a smart phone or a smart pad, for example. 
For example, for cargo transport, the user (orderer) is able to know that the cargo ordered by the user (orderer) is stored in the autonomous mobile robot1, and then open the cover130by identifying himself/herself by inputting a password on the display panel134, for example, and take out the ordered cargo. Referring toFIGS.1and6, the cover130may be hinged to the front side of the main body110, and may include a protrusion135provided with a handle136on the rear side of the main body110for the user to easily open and close the cover130. The lower module200may be provided under the upper module100. The lower module200may include a connection unit210, a front driving unit230, and a rear driving unit250. The connection unit210may be coupled to a lower end of the upper module100, and connected to the driving module300through the front driving unit230and the rear driving unit250to serve as support for them. The front driving unit230may include a driving unit that provides driving force to the front wheels, that is, to the second wheel302and the third wheel303of the driving module300, and controls positions or rotational force thereof. The driving unit may be a motor, for example. A lighting231may be provided on a front side of the front driving unit230. The lighting231may illuminate the front side of the autonomous mobile robot1, and may also serve to allow surrounding people to be aware of the existence or movement of the autonomous mobile robot1. The rear driving unit250may include a driving unit that provides a driving force to the rear wheel, that is, to the first wheel301of the driving module300, and controls position or rotational force thereof. The driving unit may be a motor, for example. A first suspension unit251may be provided between the rear driving unit250and the connection unit210to provide a suspension to the first wheel301. The first suspension unit251may include a damper and a spring. In addition, although not illustrated, other members for steering or suspension, such as a torsion beam, may be included in the rear driving unit250. The driving module300may be provided in the lower module200, and may include a plurality of wheels, that is, the first wheel301, the second wheel302, and the third wheel303, to overcome a step or a stair. In an example, the first wheel301, the second wheel302, and the third wheel303may be installed in pairs on the left and right sides of the autonomous mobile robot1, respectively. To this end, the first wheel301, the second wheel302, and the third wheel303may include two driving wheels provided on the same axis. The first wheel301, the second wheel302, and the third wheel303may be connected to separate driving units such as motors respectively so as to be individually driven. Such individual drive control enables various operations such as forward movement, backward movement, left turn, right turn, turn in place, and the like. In addition, it is possible to control the driving module300in consideration of various environmental conditions such as road surface conditions, presence or absence of surrounding pedestrians, and the like. The second wheel302and the third wheel303may be constrained in position relative to each other so as to be provided as one module. Referring toFIGS.2to4, the third wheel303may be installed at one end of a straight front bar304. The other end of the front bar304may be hinged to an intermediate axis portion307of a straight rear bar305. 
Accordingly, the front bar304is pivotable about the intermediate axis portion307of the rear bar305within a predetermined angle, but is constrained from movement by a second suspension unit306to be described below. The second wheel302may be installed at one end of the rear bar305. The second wheel302and the third wheel303positioned on one side (positioned adjacent to each other) may be driven by one driving unit. That is, one motor may be installed in the front driving unit230, and driving force of the motor may be simultaneously transmitted to the second wheel302and the third wheel303through a power transmission means such as a belt, chain, or the like. Alternatively, it is also possible to transmit the driving force to only one of the second wheel302and the third wheel303through the clutch means, or also to properly distribute the driving force to be transmitted to the second wheel302and the third wheel303. A second suspension unit306may be provided between one end of the front bar304and the other end of the rear bar305. The relative positions of the second wheel302and the third wheel303may be limited within a predetermined range by the second suspension unit306. The second suspension unit306may include a damper and a spring. Meanwhile, the first suspension unit251and the second suspension unit306described above may be formed to have variable rigidity or variable damping force. The second wheel302and the third wheel303, which are formed as one module, may be pivotable (swingable) about the intermediate axis portion307. A separate driving unit may be required for this purpose. The first wheel301, the second wheel302, and the third wheel303provided in the driving module300are limited in size. The size of the wheels is limited within a certain size because it greatly affects the size, safety, and stability of the autonomous mobile robot1, and accordingly, driving of the wheels is limited by plenty of steps, protrusions, stairs, grooves, and the like that may be present on the actual road surface. The autonomous mobile robot1according to an embodiment of the present disclosure has the second wheel302and the third wheel303integrally pivotable as one module, and thus is able to overcome a step, a stair, and the like. For example, as illustrated inFIG.3, both the second wheel302and the third wheel303may be moved while being in contact with the road surface during normal driving. In this state, when encountered with a step larger than the second wheel302and the third wheel303, the second wheel302and the third wheel303module may be pivoted as illustrated inFIG.4to change the positions of the wheels such that only the second wheel302is in contact with the road surface, while the third wheel303is in contact with an upper portion of the step. That is, it can be said that the second wheel302and the third wheel303asynchronously contact the road surface or the ground. Then, the autonomous mobile robot1can ride over the step by the driving force of the third wheel303that is in contact with the step. Repeating this process enables a stable driving even at the consecutive steps such as a stair. Meanwhile, by the pivoting of the second wheel302and the third wheel303forming one module, it is possible to bring only one or both of the second wheel302and the third wheel303into contact with the road surface or ground. 
Accordingly, whether or not the second suspension unit306is operated, or its operating conditions, can be controlled, and an appropriate suspension may be provided to the autonomous mobile robot1according to the presence or absence of cargo. Accordingly, driving stability according to driving conditions can be enhanced. The foregoing description of the present disclosure is for illustrative purposes only, and those of ordinary skill in the art to which the present disclosure pertains will be able to understand that modifications to other specific forms can be easily performed without changing the technical spirit or essential features of the present disclosure. Therefore, it should be understood that the embodiments described above are illustrative and non-limiting in all respects. For example, each component described as a single type may be implemented in a distributed manner, and similarly, components described as being distributed may also be implemented in a combined form. While the scope of the present disclosure is represented by the claims accompanying below, the meaning and the scope of the claims, and all the modifications or modified forms that can be derived from the equivalent concepts will have to be interpreted as falling into the scope of the present disclosure. MODE FOR EMBODYING INVENTION The mode for embodying the invention has been described above in the best mode for embodying the invention. INDUSTRIAL APPLICABILITY Accordingly, by using the autonomous mobile robot1according to the embodiment of the present disclosure, it is possible to autonomously generate a path and move along it in indoor and outdoor environments without requiring a user to input a specific driving path. The autonomous mobile robot1can be used for general purposes such as transportation of various cargoes, advertisements and guidance, security, and the like. In particular, it is able to recognize and avoid or overcome various obstacles such as a step, a stair, or the like. | 19,278
11858571 | The figures are not to scale. Instead, the thickness of the layers or regions may be enlarged in the drawings. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. As used in this patent, stating that any part (e.g., a layer, film, area, region, or plate) is in any way on (e.g., positioned on, located on, disposed on, or formed on, etc.) another part, indicates that the referenced part is either in contact with the other part, or that the referenced part is above the other part with one or more intermediate part(s) located therebetween. Connection references (e.g., attached, coupled, connected, and joined) are to be construed broadly and may include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and in fixed relation to each other. Stating that any part is in “contact” with another part means that there is no intermediate part between the two parts. Although the figures show layers and regions with clean lines and boundaries, some or all of these lines and/or boundaries may be idealized. In reality, the boundaries and/or lines may be unobservable, blended, and/or irregular. DETAILED DESCRIPTION Descriptors “first,” “second,” “third,” etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components. As used herein, the orientation of features is described with reference to a lateral axis, a vertical axis, and a longitudinal axis of the vehicle associated with the features. As used herein, the longitudinal axis of the vehicle is parallel to a centerline of the vehicle. The terms “rear” and “front” are used to refer to directions along the longitudinal axis closer to the rear of the vehicle and the front of the vehicle, respectively. As used herein, the vertical axis of the vehicle is perpendicular to the ground on which the vehicle rests. The terms “below” and “above” are used to refer to directions along the vertical axis closer to the ground and away from the ground, respectively. As used herein, the lateral axis of the vehicle is perpendicular to the longitudinal and vertical axes and is generally parallel to the axles of the vehicle. In general, the attached figures are annotated with a set of axes including the lateral axis (Y), the longitudinal axis (X), and the vertical axis (Z). As used herein, the terms “longitudinal,” and “axial” are used interchangeably to refer to directions parallel to the longitudinal axis. As used herein, the terms “lateral” and “horizontal” are used to refer to directions parallel to the lateral axis. 
As used herein, the terms "vertical" and "normal" are used interchangeably to refer to directions parallel to the vertical axis. As used herein, the term "width" refers to the dimension of a vehicle along the lateral axis. As used herein, when referring to a vehicle and/or chassis, the term "length" refers to the dimension of a vehicle along the longitudinal axis. As used herein, when referring to a structural member, the term "length" refers to the dimension of the structural member perpendicular to the cross-section of the structural member (e.g., the dimension of a crossmember along the lateral axis, the dimension of a side rail along the longitudinal axis, etc.). As used herein, the term "footprint" refers to the projected area of a vehicle in a plane defined by the lateral and longitudinal axes. As used herein, the term "chassis" refers to the structural components of a vehicle, and generally includes the frame of the vehicle and one or more of the suspension system(s), the steering components, the powertrain, the drivetrain, the wheels, the brakes, etc. As used herein, the term "frame" refers to the main structural component of the vehicle to which the other components are coupled. As used herein, the term "crossmember" is used to refer to structural members of the frame that extend laterally. As used herein, the term "side rail" is used to refer to structural members of the frame that extend axially. The examples disclosed herein include structural members that are generally depicted as tubes having rectangular cross-sections. However, the structural members described herein can be of any other suitable shape (e.g., circular, ovoid, polygonal, etc.). Additionally, the structural members described herein can be solid or have walls of any suitable thickness. In some examples used herein, the term "substantially" is used to describe a relationship between two parts that is within three degrees of the stated relationship (e.g., a substantially colinear relationship is within three degrees of being colinear, a substantially perpendicular relationship is within three degrees of being perpendicular, a substantially parallel relationship is within three degrees of being parallel, etc.). Vehicles (e.g., cars, trucks, vans, etc.) typically include a vehicle chassis including a vehicle frame with wheels coupled thereto. In battery-powered electric vehicles, one or more battery packs are positioned on the vehicle frame and are used to power one or more electric motors operatively coupled to the wheels. In some instances, a ride height of the vehicle is selected based on a type and/or function of the vehicle, where the ride height of the vehicle corresponds to a clearance or distance between the vehicle frame and the ground. In some known vehicles, different vehicle frames are implemented on the vehicles to configure the vehicles for different ride heights. The selection of the ride height for a vehicle includes trade-offs such as handling, ride quality, and practicality. For example, a higher ride height allows the wheels to absorb larger road displacements (e.g., sudden changes in the road surface) and allows the vehicle to more easily drive on uneven roads without causing significant impacts to the vehicle frame. However, a lower ride height provides a lower center of mass for the vehicle, which improves the handling of the vehicle, particularly at higher speeds.
Commonly, multiple vehicle frames are constructed with different structural components and geometries to produce vehicles having various ride heights. While the ride height of a vehicle can be adjusted by making modifications to the vehicle frame, modifications to the vehicle frame to adjust the ride height can be laborious and require numerous additional parts. Some examples disclosed herein implement a vehicle chassis that can be configured for two different ride heights. A first example vehicle chassis includes an example reversible vehicle frame, which includes an example central frame (e.g., a base frame) coupled between example end frames having wheels coupled thereto. The central frame is positioned at an offset (e.g., a vertical offset) from the end frames. The reversible vehicle frame is rotatable about an example longitudinal axis between a first position and a second position. The central frame is at a first distance from the ground when the reversible frame is in the first position, and the central frame is at a second distance from the ground when the reversible frame is in the second position, where the first distance is greater than the second distance. Stated differently, the reversible frame in the first position is configured for a high ride height, and the reversible frame in the second position is configured for a low ride height. Advantageously, by providing a reversible frame that is configurable for different ride heights, a number of parts required and/or manufacturing complexity of the vehicle is significantly reduced. Another example configurable vehicle chassis disclosed herein includes an example central frame (e.g., a base frame) couplable between first example frame subassemblies and second example frame subassemblies, where each of the first and second frame subassemblies defines a wheel axle. The first frame subassemblies include first bridge portions that are oriented generally upward relative to the wheel axles, and the second frame subassemblies include second bridge portions oriented generally downward relative to the wheel axles. The central frame is at a first distance from the ground when coupled between the first frame subassemblies, and the central frame is at a second distance from the ground when the central frame is coupled between the second frame subassemblies, where the first distance is greater than the second distance. Stated differently, the configurable vehicle chassis is configured for a high ride height when the central frame is coupled between the first frame subassemblies, and the configurable vehicle chassis is configured for a low ride height when the central frame is coupled between the second frame subassemblies. Another example configurable vehicle chassis disclosed herein includes example upward and downward bridge portions (e.g., first and second bridge portions) couplable between the central frame and subassemblies defining wheel axles, where the central frame and subassemblies are the same for vehicles having different ride height requirements. In examples disclosed herein, this configurable vehicle chassis is configured for a high ride height when the central frame is coupled to the subassemblies via the upward bridge portions, and for a low ride height when the central frame is coupled to the subassemblies via the downward bridge portions.
As such, the example configurable vehicle chassis are configurable for different ride heights by selectively coupling different frame subassemblies and/or bridge portions to the central frame. Advantageously, by enabling parts to be interchangeably implemented across different vehicles having different ride height requirements, a number of the parts required and/or manufacturing complexity of the vehicles is reduced. Some examples disclosed herein implement multi-position wheel assembly mounts that can be configured for at least two different ride heights. An example multi-position wheel assembly mount disclosed herein includes a plate including protrusions extending away from a surface of the plate and toward the vehicle frame. In some examples disclosed herein, the protrusions are pins that are positionable in apertures of the vehicle frame, where the apertures may be through holes in rail portions of the vehicle frame. In examples disclosed herein, the protrusions are positionable in the apertures of the frame in a first position to provide a first ride height of the vehicle and a second position to provide a second ride height of the vehicle. In some examples, the first ride height is a high ride height, and the second ride height is a low ride height. Examples disclosed herein do not require additional parts for the frame or body of the vehicle, thereby reducing a number of parts required and/or manufacturing complexity of the vehicles to achieve the desired ride height. Different types and models of vehicles (e.g., cars, trucks, vans, etc.) generally include different chassis and different performance requirements. That is, different types and models of vehicles have different engine performance requirements (e.g., different torque requirements, different horsepower requirements, different range requirements, etc.) and different suspension requirements (e.g., suspension stiffness requirements, travel requirements, damping requirements, camber control requirements, etc.). These performance requirements are generally related to different design considerations, including the type/class of the vehicle (e.g., pick-up truck, compact car, van, sedan, etc.), the intended role of the vehicle (e.g., everyday driving, sport driving, long-distance transport, short-distance transport, law enforcement, off-road vehicles, etc.), the weight of the vehicle, the size of the vehicle, and/or consumer preferences. These variations in design requirements make reusing parts between the chassis of different vehicle models impractical. Some examples disclosed herein implement electric motorized wheel assemblies which can be configured for different ride and/or performance needs. The example wheel assemblies disclosed herein include swappable or interchangeable components that include an in-wheel electric motor, suspension assembly, and a suspension mounting frame (frame mounting interface). In examples disclosed herein, the components of the wheel assemblies are connected to the vehicle frame via the frame mounting interface to allow for geometric freedom between the vehicle frame and the components without the need for traditional axle connections from the center containing the electric motor. In examples disclosed herein, the wheel assemblies also include mounting points for the suspension links and dampers.
Advantageously, by providing an electric motorized wheel assembly that includes interchangeable parts that have common attachment and packaging strategies, ride and performance needs can be met for the vehicle while reducing the number of parts and complexity of manufacturing. Examples disclosed herein provide vehicle chassis with common features to receive interchangeable performance packages that enable a configurable vehicle chassis to be utilized with different vehicle models with minimal configuration changes. An example vehicle chassis disclosed herein includes cavities with features that enable different performance packages to be coupled thereto. By interchanging the interchangeable performance packages, the engine properties and suspension properties of the example vehicle chassis can be changed. Another example vehicle chassis disclosed herein includes features that enable different subframes to be coupled thereto. In some such examples disclosed herein, the different subframes include different performance packages. By interchanging the interchangeable subframes, the engine properties and suspension properties of the example vehicle chassis can be changed. Another example vehicle chassis disclosed herein includes a common battery platform, an interchangeable front chassis portion, and an interchangeable rear chassis portion. In some such examples disclosed herein, the different chassis portions include different performance packages. By interchanging the interchangeable chassis portions, the engine properties and suspension properties of the example vehicle chassis can be changed. Different models of vehicles (e.g., cars, trucks, vans, etc.) generally include differently-sized chassis with differently-sized components. That is, the wheelbase and the track width of a vehicle are generally driven by different design considerations, including the type/class of the vehicle (e.g., pick-up truck, compact car, van, sedan, etc.), the desired spaciousness of the passenger cabin, desired storage space, and/or packaging requirements for vehicle components. These variations in design requirements make reusing parts between the chassis of different vehicle models impractical. Examples disclosed herein provide vehicle chassis with scalable widths and lengths that enable a configurable vehicle chassis to be utilized with different vehicle models with minimal configuration changes. An example scalable vehicle chassis disclosed herein includes common chassis portions and interchangeable structural members. By interchanging the interchangeable structural members, the width and length of the example scalable vehicle chassis can be changed. Another example scalable chassis disclosed herein includes common chassis portions and adjustable structural members. By adjusting the length of the adjustable structural members, the width and length of the example scalable vehicle chassis can be changed. Another example scalable vehicle chassis disclosed herein includes a common battery platform, an interchangeable front chassis portion, and an interchangeable rear chassis portion. By interchanging the interchangeable chassis portions, the width and length of the example scalable vehicle chassis can be changed. While the example vehicle chassis, frames, and modules are generally described as distinct examples, the teachings of this disclosure can be combined, rearranged, and omitted in any suitable manner.
As such, a vehicle and/or vehicle chassis implemented in accordance with the teachings of this disclosure can include some or all of the features described herein. FIG.1is a perspective view of a vehicle100. The vehicle100is a motorized wheel-driven vehicle. In the illustrated example ofFIG.1, the vehicle100is a pick-up truck. In other examples, the vehicle100can be any type of wheeled vehicle (e.g., a sedan, a coupe, a van, a pick-up truck, a sports utility vehicle, an all-terrain vehicle (ATV), farming equipment, etc.). In some examples, the vehicle100is an EV. In such examples, the vehicle100includes one or more electric motors and one or more battery arrays. In other examples, the vehicle100includes an internal combustion engine (e.g., a non-electrified vehicle, a partially electrified vehicle, etc.). FIG.2Aillustrates an example reversible frame200(e.g., a vehicle frame, a reversible vehicle frame, a kickflip reversible frame, a chassis) in accordance with teachings of this disclosure. In the illustrated example ofFIG.2A, the reversible frame200is configured for a low ride height of the example vehicle100ofFIG.1. The example reversible frame200includes an example central frame (e.g., a base frame)202coupled between example first and second end frames204,206. The example ofFIG.2Afurther includes example wheels208A,208B,208C,208D coupled to the respective first and second end frames204,206. Example battery packs210are positioned in the central frame202. While thirteen of the battery packs210are shown in this example, a different number of the battery packs210may be used instead. In this example, the first end frame204is a front frame proximate a front end of the vehicle100, and the second end frame206is a rear frame proximate a rear end of the vehicle100. In other examples, the first end frame204is proximate the rear end of the vehicle100, and the second end frame206is proximate the front end of the vehicle100. In the illustrated example ofFIG.2A, the reversible frame200is in a first position. When the reversible frame200is in the first position, the central frame202is at an example first distance212from the ground and the first and second end frames204,206are at an example second distance214from the ground. In this example, the first and second end frames204,206are positioned at an offset (e.g., a vertical offset) relative to the central frame202, where the offset is in an example vertical direction216. As such, when the reversible frame200is in the first position, the second distance214between the ground and the first and second end frames204,206is greater than the first distance212between the ground and the central frame202. In this example, when the reversible frame200in the first position is implemented in the vehicle100, the vehicle100is configured for a first ride height (e.g., a low ride height). In some examples, a first type of vehicle body (e.g., a van body) is coupled to the reversible frame200in the first position to produce a first type of vehicle (e.g., a van). Turning toFIG.2B, the example reversible frame200ofFIG.2Ais configured for a high ride height of the example vehicle100ofFIG.1. In the illustrated example ofFIG.2B, the reversible frame200is in a second position. When the reversible frame200is in the second position, the central frame202is at an example third distance218from the ground and the first and second end frames204,206are at an example fourth distance219from the ground. 
In this example, the third distance218is greater than both the fourth distance219and the first distance212ofFIG.2A. As such, when the reversible frame200in the second position is implemented in the vehicle100, the vehicle100is configured for a second ride height (e.g., a high ride height), where the second ride height is greater than the first ride height. In some examples, a second type of vehicle body (e.g., a truck body) is coupled to the reversible frame200in the second position to produce a second type of vehicle (e.g., a truck), where the second type of vehicle is different from the first type of vehicle. In examples disclosed herein, the reversible frame200can be selectively configured for the first ride height or the second ride height by rotating about an example longitudinal axis220. For example, the reversible frame200can move between the first position shown inFIG.2Aand the second position shown inFIG.2Bby rotating 180 degrees about the longitudinal axis220. FIG.3illustrates the example reversible frame200ofFIGS.2A and/or2Brotated about the example longitudinal axis220between the first and second positions. In the illustrated example ofFIG.3, an example motor (e.g., an electric motor)302and example first and second suspension systems304A,304B are couplable to the reversible frame200in both the first and second positions. In some examples, the first and second suspension systems304A,304B are coupled to the first end frame204and operatively coupled to corresponding ones of the first and second wheels208A,208B. Additionally or alternatively, the first and second suspension systems304A,304B can be coupled to the second end frame206and operatively coupled to corresponding ones of the third and fourth wheels208C,208D. In some examples, each of the first and second end frames204,206includes mirrored attachment points positioned thereon. In such examples, the mirrored attachment points enable the first and second suspension systems304A,304B to be coupled to at least one of the first or second end frames204,206in a same orientation when the reversible frame200is in either one of the first or second positions. In the illustrated example ofFIG.3, the motor302is coupled to the first end frame204and operatively coupled to the first and second wheels208A,208B. In this example, the motor302is powered by the battery packs210, and operation of the motor302causes corresponding rotation of the first and second wheels208A,208B. Additionally or alternatively, the motor302can be coupled to the second end frame206and operatively coupled to the third and fourth wheels208C,208D, such that operation of the motor302causes corresponding rotation of the third and fourth wheels208C,208D. In some examples, multiple ones of the motor302are coupled to the reversible frame200to operate the wheels208A,208B,208C,208D. In some examples, the motor302is coupled to at least one of the first or second end frames204,206in a same orientation when the reversible frame200is in either one of the first or second positions. In other examples, the motor302is in a first orientation when the reversible frame200is in the first position, and the motor302is in a second orientation different from the first orientation when the reversible frame200is in the second position. 
In some such examples, the motor302is configured to rotate in a first direction when the reversible frame200is in the first position, and the motor302is configured to rotate in a second direction when the reversible frame200is in the second position, where the second direction is opposite the first direction. FIG.4is a flowchart representative of an example method400to produce the example reversible frame200ofFIGS.2A,2B, and/or3. The example method400begins at block402, at which a ride height of the vehicle100ofFIG.1is selected. For example, in response to determining that the vehicle100is to have a first ride height (e.g., block402returns a result of YES), the process proceeds to block404. Alternatively, in response to determining that the vehicle100does not have the first ride height (e.g., block402returns a result of NO), the process proceeds to block406. At block404, the example reversible frame200is rotated about the longitudinal axis220ofFIGS.2and/or3to a first position. For example, the reversible frame200is rotated to the first position shown inFIG.2A, in which the reversible frame200is configured for the first ride height. At block406, the example reversible frame200is rotated about the longitudinal axis220to a second position. For example, the reversible frame200is rotated to the second position shown inFIG.2B, in which the reversible frame200is configured for the second ride height greater than the first ride height. At block408, a first type of vehicle body is coupled to the reversible frame200. For example, the first type of vehicle body is coupled to the reversible frame200when the reversible frame200is in the first position. In some examples, the first type of vehicle body is a van body. At block410, a second type of vehicle body is coupled to the reversible frame200. For example, the second type of vehicle body is coupled to the reversible frame200when the reversible frame200is in the second position. In this example, the second type of vehicle body (e.g., a truck body) is different from the first type of vehicle body. FIG.5illustrates a first example configurable vehicle chassis500in accordance with teachings of this disclosure. In the illustrated example ofFIG.5, the first configurable vehicle chassis500includes an example central frame (e.g., a base frame)502, and example battery packs504positioned in the central frame502. While sixteen of the battery packs504are shown in this example, a different number of the battery packs504may be used instead. In this example, first and second example frame subassemblies (e.g., first and second subassemblies)506A,506B can be coupled to the central frame502to configure the first configurable vehicle chassis500for a high ride height, and third and fourth example frame subassemblies (e.g., third and fourth subassemblies)506C,506D can be coupled to the central frame502to configure the configurable central frame502for a low ride height. In this example, the first and second frame subassemblies506A,506B are substantially the same, and the third and fourth frame subassemblies506C,506D are substantially the same. As such, each of the frame subassemblies506A,506B,506C,506D can be interchangeably coupled to an example front end507A and/or to an example rear end507B of the central frame502. In the illustrated example ofFIG.5, each of the frame subassemblies506A,506B,506C,506D defines corresponding example wheel axles508A,508B,508C,508D having example wheels510coupled thereto. 
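By way of a non-limiting illustration, the selection logic of the example method400 described above can be summarized in the following minimal sketch. The sketch uses hypothetical names (the function name, the "first"/"second" labels, and the returned dictionary are illustrative assumptions only); the actual method400 is a physical assembly procedure rather than software.

```python
# Minimal sketch of the decision flow of example method 400 (blocks 402-410).
# Function and label names are illustrative assumptions, not part of the disclosure.
def configure_reversible_frame(selected_ride_height: str) -> dict:
    """Map a selected ride height to a frame position and example body type."""
    if selected_ride_height == "first":      # block 402 returns YES (first ride height)
        frame_position = "first position"    # block 404: rotate frame 200 about axis 220
        vehicle_body = "first type (e.g., a van body)"     # block 408
    else:                                    # block 402 returns NO (second ride height)
        frame_position = "second position"   # block 406: rotate frame 200 about axis 220
        vehicle_body = "second type (e.g., a truck body)"  # block 410
    return {"frame_position": frame_position, "vehicle_body": vehicle_body}
```

In this sketch, the first ride height corresponds to the first position ofFIG.2Aand the second ride height corresponds to the second position ofFIG.2B, consistent with blocks404and406described above.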
In this example, example motors (e.g., electric motors)512are coupled on the frame subassemblies506A,506B,506C,506D and operatively coupled to corresponding ones of the wheels510. In some examples, operation of the motors512causes rotation of the wheel axles508A,508B,508C,508D and/or the corresponding ones of the wheels510. In this example, the motors512are electrically coupled to and/or otherwise powered by the battery packs504. In the illustrated example ofFIG.5, the first and second frame subassemblies506A,506B include example first bridge portions (e.g., upward bridge portions, upwardly angled bridge portions)514, and the third and fourth frame subassemblies506C,506D include example second bridge portions (e.g., downward bridge portions, downwardly angled bridge portions)516. The first and second bridge portions514,516can be fixed (e.g., bolted, riveted, welded, etc.) to the central frame502to couple the respective frame subassemblies506A,506B,506C,506D to the central frame502. In this example, the first bridge portions514are oriented generally upward relative to the first and second wheel axles508A,508B, and the second bridge portions516are oriented generally downward relative to the third and fourth wheel axles508C,508D. As such, when the first and second frame subassemblies506A,506B are coupled to the central frame502, the central frame502is at a first distance from the ground. Further, when the third and fourth frame subassemblies506C,506D are coupled to the central frame502, the central frame502is at a second distance from the ground, where the first distance is greater than the second distance. FIG.6Aillustrates the first example configurable vehicle chassis500ofFIG.5configured for a high ride height of the example vehicle100ofFIG.1. In the illustrated example ofFIG.6A, the first and second frame subassemblies506A,506B are coupled to the central frame502via the first bridge portions514. In this example, the central frame502is at an example first distance602from the ground, and the first and second wheel axles508A,508B of the respective first and second frame subassemblies506A,506B are at an example second distance604from the ground. In this example, the central frame502is positioned at a first offset (e.g., a first vertical offset) relative to the first and second wheel axles508A,508B, where the first offset is in an example upward direction606. As such, the first distance602between the ground and the central frame502is greater than the second distance604between the ground and the first and second wheel axles508A,508B. In this example, when the first configurable vehicle chassis500of the illustrated example ofFIG.6Ais implemented in the vehicle100, the vehicle100is configured for a first ride height (e.g., a high ride height). In some examples, a first type of vehicle body (e.g., a truck body) is coupled to the first configurable vehicle chassis500to produce a first type of vehicle (e.g., a truck). Turning toFIG.6B, the first example configurable vehicle chassis500ofFIG.5is configured for a low ride height of the example vehicle100ofFIG.1. In the illustrated example ofFIG.6B, the third and fourth frame subassemblies506C,506D are coupled to the central frame502via the second bridge portions516. In this example, the central frame502is at an example third distance608from the ground, and the third and fourth wheel axles508C,508D of the respective third and fourth frame subassemblies506C,506D are at an example fourth distance610from the ground. 
In this example, the central frame502is positioned at a second offset (e.g., a second vertical offset) relative to the third and fourth wheel axles508C,508D, where the second offset is in an example downward direction612. As such, the third distance608between the ground and the central frame502is less than the fourth distance610between the ground and the third and fourth wheel axles508C,508D and less than the first distance602ofFIG.6A. In this example, when the first configurable vehicle chassis500of the illustrated example ofFIG.6Bis implemented in the vehicle100, the vehicle100is configured for a second ride height (e.g., a low ride height) less than the first ride height. In some examples, a second type of vehicle body (e.g., a car body) is coupled to the first configurable vehicle chassis500to produce a second type of vehicle (e.g., a car). In examples disclosed herein, the first configurable vehicle chassis500can be selectively configured for the first ride height or the second ride height based on a selection of the frame subassemblies506A,506B,506C,506D coupled to the central frame502. FIG.7illustrates a second example configurable vehicle chassis700in accordance with teachings of this disclosure. In the illustrated example ofFIG.7, the second configurable vehicle chassis700includes the central frame502couplable to example first and second subassemblies702A,702B, where the first and second subassemblies702A,702B define example axles (e.g., wheel axles)704A,704B having example wheels706coupled thereto. In this example, the example motors512are mounted on the first and second subassemblies702A,702B and operatively coupled to corresponding ones of the wheels706. In some examples, operation of the motors512causes rotation of the axles704A,704B and/or the corresponding ones of the wheels706. In this example, the motors512are electrically coupled to and/or otherwise powered by the battery packs504positioned on the central frame502. In this example, the first and second subassemblies702A,702B are couplable to the central frame502via example upwardly angled or upward bridge portions708A,708B and/or via example downwardly angled or downward bridge portions710A,710B. In the illustrated example ofFIG.7, the second configurable vehicle chassis700is configured for a high ride height when the first and second subassemblies702A,702B are coupled to the central frame502via the upward bridge portions708A,708B, and the second configurable vehicle chassis700is configured for a low ride height when the first and second subassemblies702A,702B are coupled to the central frame502via the downward bridge portions710A,710B. In this example, the upward bridge portions708A,708B are substantially the same, and the downward bridge portions710A,710B are substantially the same. As such, each of the upward and downward bridge portions708A,708B,710A,710B can be interchangeably coupled to the front end507A and/or to the rear end507B of the central frame502. Each of the upward and downward bridge portions708A,708B,710A,710B can be fixed (e.g., bolted, riveted, welded, etc.) between the central frame502and one of the first or second subassemblies702A,702B. In this example, the upward bridge portions708A,708B are oriented generally upward relative to the axles704A,704B, and the downward bridge portions710A,710B are oriented generally downward relative to the axles704A,704B. FIG.8Aillustrates the second example configurable vehicle chassis700ofFIG.7configured for a high ride height of the example vehicle100ofFIG.1.
In the illustrated example ofFIG.8A, the first and second subassemblies702A,702B are coupled to the central frame502via the upward bridge portions708A,708B. In this example, the central frame502is at an example first height802relative to the ground, and the axles704A,704B of the respective first and second subassemblies702A,702B are at an example second height804relative to the ground, where the first height802is greater than the second height804. In this example, when the second configurable vehicle chassis700of the illustrated example ofFIG.8Ais implemented in the vehicle100, the vehicle100is configured for a first ride height (e.g., a high ride height). In some examples, a first type of vehicle body (e.g., a truck body) is coupled to the second configurable vehicle chassis700to produce a first type of vehicle (e.g., a truck). Turning toFIG.8B, the second example configurable vehicle chassis700ofFIGS.7and/or8Ais shown configured for a low ride height of the example vehicle100ofFIG.1. In the illustrated example ofFIG.8B, the first and second subassemblies702A,702B are coupled to the central frame502via the downward bridge portions710A,710B. In this example, the central frame502is at an example third height806relative to the ground, where the third height806is less than the second height804of the axles704A,704B and, thus, is less than the first height802of the illustrated example ofFIG.8A. In this example, when the second configurable vehicle chassis700of the illustrated example ofFIG.8Bis implemented in the vehicle100, the vehicle100is configured for a second ride height (e.g., a low ride height) less than the first ride height. In some examples, a second type of vehicle body (e.g., a car body) is coupled to the second configurable vehicle chassis700to produce a second type of vehicle (e.g., a car). In examples disclosed herein, the second configurable vehicle chassis700can be selectively configured for the first ride height or the second ride height based on a selection of the upward and downward bridge portions708A,708B,710A,710B coupled to the central frame502. In the illustrated example ofFIG.8A, the first height802can be adjusted by modifying an example first angle810of the upward bridge portions708A,708B relative to the central frame502. Similarly, in the illustrated example ofFIG.8B, the third height806can be adjusted by modifying an example second angle812of the downward bridge portions710A,710B relative to the central frame502. In some examples, the first and second angles810,812are the same (e.g., less than 30 degrees, less than 10 degrees, etc.). In other examples, the first and second angles810,812can be different. In some examples, one or more additional bridge portions (e.g., third bridge portions, fourth bridge portions, etc.) are couplable between the first and second subassemblies702A,702B and the central frame502. In some examples, each of the one or more additional bridge portions can be configured for a different ride height. FIG.9is a flowchart representative of an example method900to produce the first example configurable vehicle chassis500ofFIGS.5,6A, and/or6B and/or the second example configurable vehicle chassis700ofFIGS.7,8A, and/or8B. The example method900begins at block902, at which a ride height of the vehicle100ofFIG.1is selected. For example, in response to determining that the vehicle100is to have a first ride height (e.g., block902returns a result of YES), the process proceeds to block904.
Alternatively, in response to determining that the vehicle100is not to have the first ride height and/or is to have a second ride height less than the first ride height (e.g., block902returns a result of NO), the process proceeds to block906. At block904, the example first bridge portions514ofFIGS.5,6A, and/or6B and/or the example upward bridge portions708A,708B ofFIGS.7,8A, and/or8B are coupled to the example central frame502. For example, the first bridge portions514of the first and second frame subassemblies506A,506B are coupled to (e.g., via one or more fasteners, chemical adhesive, a press-fit, one or more welds, etc.) or otherwise fixed to the central frame502to produce the first example configurable vehicle chassis500, and the upward bridge portions708A,708B are coupled to (e.g., via one or more fasteners, chemical adhesive, a press-fit, one or more welds, etc.) or otherwise fixed to the first and second subassemblies702A,702B and to the central frame502to produce the second configurable vehicle chassis700. In such examples, the vehicle100is configured for the first ride height. At block906, the example second bridge portions516ofFIGS.5,6A, and/or6B and/or the example downward bridge portions710A,710B ofFIGS.7,8A, and/or8B are coupled to the example central frame502. For example, the second bridge portions516of the third and fourth frame subassemblies506C,506D are coupled to (e.g., via one or more fasteners, chemical adhesive, a press-fit, one or more welds, etc.) or otherwise fixed to the central frame502to produce the first example configurable vehicle chassis500, and the downward bridge portions710A,710B are coupled to (e.g., via one or more fasteners, chemical adhesive, a press-fit, one or more welds, etc.) or otherwise fixed to the first and second subassemblies702A,702B and to the central frame502to produce the second configurable vehicle chassis700. In such examples, the vehicle100is configured for the second ride height less than the first ride height. At block908, a first type of vehicle body is coupled to the central frame502. For example, the first type of vehicle body is coupled to the central frame502when the vehicle100is configured for the first ride height. In some examples, the first type of vehicle body is a car body. At block910, a second type of vehicle body is coupled to the central frame502. For example, the second type of vehicle body is coupled to the central frame502when the vehicle100is configured for the second ride height. In this example, the second type of vehicle body (e.g., a truck body) is different from the first type of vehicle body. FIG.10illustrates example wheel assembly mounts1010A,1010B,1010C,1010D in accordance with the teachings of this disclosure. The example vehicle chassis1000ofFIG.10includes an example vehicle frame1002, example battery packs1004, an example rail portion1006of the vehicle frame1002, example wheel assemblies1008A,1008B,1008C,1008D, the example wheel assembly mounts1010A,1010B,1010C,1010D, and example apertures1012A,1012B. In the illustrated example ofFIG.10, the wheel assembly mounts1010A,1010B,1010C,1010D are coupled to the wheel assemblies1008A,1008B,1008C,1008D, respectively. The wheel assemblies1008A,1008B,1008C,1008D include the wheels, brakes, suspension, wheel bearings, etc. The wheel assembly mounts1010A,1010B,1010C,1010D are coupled to the vehicle frame1002via the apertures (e.g., the example apertures1012A,1012B) included in the rail portions (e.g., the example rail portion1006) of the vehicle frame1002.
For example, the wheel assembly mount1010A is positioned in the apertures1012A,1012B included in the rail portion1006of the vehicle frame1002. Each of the wheel assembly mounts1010A,1010B,1010C,1010D is positionable in any of the wheel assembly locations of the vehicle frame1002(e.g., rail portions of the vehicle frame1002that include apertures). In the illustrated example, the wheel assembly mounts1010A,1010B,1010C,1010D are positionable in the wheel assembly locations of the vehicle frame1002to raise and lower the ride height of the vehicle100ofFIG.1. For example, the wheel assembly mount1010A can be positioned in a first position in the apertures1012A,1012B to raise the ride height and in a second position in the apertures1012A,1012B to lower the ride height. The wheel assembly mount1010A is described in further detail below in connection withFIGS.11A and11B. FIG.11Aillustrates the example wheel assembly mount1010A ofFIG.10configured for a low ride height (shown in solid lines) and a high ride height (shown in dashed lines) of the example vehicle100ofFIG.1. The wheel assembly mount1010A ofFIG.11Aincludes an example plate1102, an example first protrusion1104, an example second protrusion1106, an example first position1108corresponding to the low ride height, and an example second position1110corresponding to the high ride height. In the illustrated example, the plate1102has a rectangular shape. However, the plate1102may be any other shape suitable for attaching/coupling to the example vehicle frame1002ofFIG.10. The plate1102includes the first protrusion1104and the second protrusion1106. The first protrusion1104and the second protrusion1106extend away from a surface1107of the plate1102and toward the vehicle frame1002. In the illustrated example, the first protrusion1104and the second protrusion1106are pins that are cylindrically shaped. In some examples, the first protrusion1104and the second protrusion1106are shaped to fit in apertures included in the vehicle frame1002. However, in other examples, the plate1102may include apertures and the vehicle frame1002may include the protrusions (e.g., the first protrusion1104and the second protrusion1106). In the illustrated example ofFIG.11A, the wheel assembly mount1010A can be positioned in the first position1108or the second position1110. The first position1108provides a first ride height of the vehicle frame1002and the second position1110provides a second ride height of the vehicle frame1002. In the illustrated example, the first ride height is less than the second ride height. In other words, the first position1108provides a low ride height and the second position1110provides a high ride height. In the illustrated example, the first position1108positions a longitudinal axis1112of the wheel assembly mount1010A horizontally and the second position1110positions the longitudinal axis1112of the wheel assembly mount1010A vertically. In the first position1108, the first protrusion1104and the second protrusion1106are aligned along a longitudinal axis of the vehicle frame (e.g., horizontally aligned). In the second position1110, the first protrusion1104and the second protrusion1106are vertically aligned where the first protrusion1104is positioned higher than the second protrusion1106. FIG.11Billustrates the example wheel assembly mount1010A ofFIG.10coupled to the example rail portion1006of the example vehicle frame1002ofFIG.10for a low ride height of the example vehicle100ofFIG.1. 
In the illustrated example ofFIG.11B, the wheel assembly mount1010A is positioned in the first position1108for a low ride height. The illustrated example ofFIG.11Bfurther includes the example apertures1012A,1012B. The apertures1012A,1012B are adjacent to the wheel assembly location on the vehicle frame1002(e.g., on the rail portion1006). In the illustrated example, the apertures1012A,1012B are through holes in the rail portion1006. However, in other examples, the apertures1012A,1012B may be dead-ended openings in the rail portion1006. In the illustrated example, the first protrusion1104and the second protrusion1106are inserted in the corresponding apertures1012A,1012B. The wheel assembly mount1010A (coupled with the wheel assembly1008A) is coupled to the rail portion1006via the first protrusion1104, the second protrusion1106, and the apertures1012A,1012B in the first position1108to provide a low ride height for the vehicle100. In some examples, the first protrusion1104and the second protrusion1106are inserted in the apertures1012A,1012B and the protrusions1104,1106are welded to the rail portion1006to couple the wheel assembly mount1010A to the rail portion1006. In the illustrated examples ofFIGS.11A and11B, the wheel assembly mount1010A is illustrated as including two protrusions (e.g., the first protrusion1104and the second protrusion1106) that are coupled to two corresponding apertures (e.g., the apertures1012A,1012B). However, the wheel assembly mount1010A may include any number of protrusions and the rail portion1006of the vehicle frame1002may include any number of corresponding apertures. In some examples, the wheel assembly mount1010A may be positioned in more than two positions related to the number of protrusions and corresponding apertures included in the rail portion1006of the vehicle frame1002. In such examples, the wheel assembly mount1010A may be positioned for two or more different ride heights. For example, the wheel assembly mount1010A may be positioned in three different positions using the three protrusions to achieve a low ride height, a middle ride height, and a high ride height. In some examples, the rail portion1006of the vehicle frame1002contains sufficient apertures to engage all protrusions in all positions (e.g., the number of apertures is equal to the number of protrusions multiplied by the number of positions (A=Pr*Po), where A is the number of apertures, Pr is the number of protrusions, and Po is the number of positions). However, in some examples, the rail portion1006of the vehicle frame1002does not have the number of apertures equal to the number of protrusions multiplied by the number of positions, and apertures may be reused in all or some positions (e.g., low ride height, middle ride height, high ride height, etc.). FIG.11Cillustrates an example alternative wheel assembly mount1116and alternative rail portion1118of the frame1002for an adjustable ride height of the example vehicle100ofFIG.1.FIGS.11A and11Billustrate the wheel assembly mount1010A as including the plate1102with discrete protrusions (e.g., first protrusion1104and the second protrusion1106) coupled to discrete apertures (e.g., apertures1012A,1012B). The alternative wheel assembly mount1116ofFIG.11Cincludes an example plate1120and example mount through hole groups1122A,1122B, and the example alternative rail portion1118ofFIG.11Cincludes example frame through hole groups1124A,1124B,1124C. 
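The aperture-count relationship noted above (A=Pr*Po) can be illustrated with the following minimal sketch; the function name and the example values are assumptions for illustration only and do not correspond to any particular figure.

```python
# Minimal sketch of the relationship A = Pr * Po, where A is the number of
# apertures, Pr the number of protrusions, and Po the number of positions,
# assuming no aperture is shared between positions.
def apertures_required(num_protrusions: int, num_positions: int) -> int:
    return num_protrusions * num_positions

# For example, a mount with three protrusions offering low, middle, and high
# ride heights would use 3 * 3 = 9 apertures if none are reused; reusing
# apertures across some or all positions reduces this count, as noted above.
assert apertures_required(3, 3) == 9
```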
The plate1120includes the mount through hole groups1122A,1122B, which are each illustrated as three through holes positioned near each other. However, the mount through hole groups1122A,1122B can include any number of through holes and be positioned in any appropriate pattern. Although the plate1120of the alternative wheel assembly mount1116is illustrated as including two through hole groups (e.g., the mount through hole groups1122A,1122B), the plate1120can include any number of through hole groups. In the illustrated example, the alternative rail portion1118includes the frame through hole groups1124A,1124B,1124C, which are each illustrated as three through holes positioned near each other. However, the frame through hole groups1124A,1124B,1124C can include any number of through holes and be positioned in any appropriate pattern. Although the alternative rail portion1118is illustrated as including three through hole groups (e.g., the frame through hole groups1124A,1124B,1124C), the alternative rail portion1118can include any number of through hole groups. In the illustrated example ofFIG.11C, the alternative wheel assembly mount1116is coupled to the alternative rail portion1118via the mount through hole groups1122A,1122B and the frame through hole groups1124A,1124B,1124C. In the illustrated example, at least one of the mount through hole groups1122A,1122B can be aligned with any one of the corresponding frame through hole groups1124A,1124B,1124C to provide different ride heights for the vehicle100. For example, the mount through hole groups1122A,1122B can be aligned with the frame through hole groups1124A,1124C to provide a first ride height for the vehicle100, and the mount through hole groups1122A,1122B can be aligned with the frame through hole groups1124A,1124B to provide a second ride height for the vehicle100. In some examples, once the desired mount through hole groups1122A,1122B are aligned with the desired frame through hole groups1124A,1124B,1124C, the plate1120of the alternative wheel assembly mount1116and the alternative rail portion1118are coupled via mechanical, non-permanent attachment methods (e.g., bolts, fasteners, etc.). In the illustrated examples ofFIGS.11A,11B, and11C, the connections between the wheel assembly mounts (e.g., the wheel assembly mount1010A and the alternative wheel assembly mount1116) and the rail portions (e.g., the rail portion1006and the alternative rail portion1118) of the frame1002are independent of axle connections from a motor in the wheel assembly (e.g., the wheel assembly1008A) to a hub of the vehicle100. FIG.12Aillustrates example wheel assembly mounts coupled to the example vehicle frame1002ofFIG.10for a low ride height of the example vehicle100ofFIG.1. The illustrated example ofFIG.12Aincludes the example wheel assembly mount1010A ofFIGS.10,11A, and11Band the example wheel assembly mount1010C ofFIG.10in the example first position1108for a low ride height. The illustrated example ofFIG.12Aincludes the example wheel assembly1008A and the example wheel assembly1008C coupled to the wheel assembly mount1010A and the wheel assembly mount1010C, respectively. In the illustrated example, the wheel assembly mount1010A and the wheel assembly mount1010C are coupled to the vehicle frame1002via apertures. For example, the first protrusion1104and the second protrusion1106of the wheel assembly mount1010A are inserted through the apertures1012A,1012B, respectively, and the first protrusion1104and the second protrusion1106are welded to the apertures1012A,1012B.
InFIG.12A, the wheel assembly mount1010C also includes protrusions inserted in corresponding apertures; however, these protrusions and apertures are not illustrated in the perspective view ofFIG.12A. In the illustrated example, the first protrusion1104and the second protrusion1106lie along a first axis that is substantially parallel to a longitudinal axis of the vehicle frame1002. The first protrusion1104and the second protrusion1106extend toward the vehicle frame1002and are positioned in the apertures1012A,1012B in the first position1108to provide a first ride height (e.g., low ride height) of the vehicle frame1002. FIG.12Billustrates example wheel assembly mounts coupled to the example vehicle frame1002ofFIG.10for a high ride height of the example vehicle100ofFIG.1. The illustrated example ofFIG.12Bincludes the example wheel assembly mount1010A and the example wheel assembly mount1010C in the example second position1110for a high ride height. The illustrated example ofFIG.12Bincludes the example wheel assembly1008A and the example wheel assembly1008C coupled to the wheel assembly mount1010A and the wheel assembly mount1010C, respectively. In the illustrated example, the wheel assembly mount1010A and the wheel assembly mount1010C are coupled to the vehicle frame1002via apertures. For example, the first protrusion1104of the wheel assembly mount1010A is inserted through the aperture1012B, and the first protrusion1104and the aperture1012B are welded together to couple the wheel assembly mount1010A to the rail portion1006of the vehicle frame1002. In the illustrated example, the first protrusion1104and the second protrusion1106(not visible in the perspective view ofFIG.12B) lie along a second axis that is substantially perpendicular to a longitudinal axis of the vehicle frame1002. The first protrusion1104and the second protrusion1106extend toward the vehicle frame1002, and the first protrusion1104is positioned in the aperture1012B (the aperture1012A is left empty) in the second position1110to provide a second ride height (e.g., high ride height) of the vehicle frame1002. In the illustrated example, the first protrusion1104and the aperture1012B are coupled in the second position1110to prevent rotation of the wheel assembly mount1010A. However, the vehicle frame1002can include any number of apertures to be used to couple the wheel assembly mount1010A to the vehicle frame1002in the second position1110. For example, the vehicle frame1002can include an additional aperture that is aligned with a longitudinal axis of the aperture1012B, and the second protrusion1106can be positioned in the additional aperture in the second position1110. InFIG.12B, the wheel assembly mount1010C also includes protrusions where one protrusion is inserted in a corresponding aperture (leaving the aperture1202empty) or more than one protrusion is inserted in more than one corresponding aperture; however, these protrusions and aperture(s) are not illustrated in the perspective view ofFIG.12B. FIG.13Aillustrates the example vehicle frame1002ofFIG.10as configured using the example wheel assembly mount1010C ofFIG.12Bfor a high ride height of the example vehicle100ofFIG.1. The illustrated example ofFIG.13Aincludes the example wheel assembly mount1010C and the example wheel assembly mount1010D in the second position1110for a high ride height, as illustrated inFIG.12B.
The illustrated example ofFIG.13Aincludes the example wheel assembly1008C and the example wheel assembly1008D coupled to the wheel assembly mount1010C and the wheel assembly mount1010D, respectively. In the illustrated example, one aperture is used for coupling the wheel assembly mount1010C and the wheel assembly mount1010D to the vehicle frame1002(e.g., the example aperture1202and an example aperture1304are empty). However, in some examples, any number of apertures can be used for coupling the wheel assembly mount1010C and the wheel assembly mount1010D to the vehicle frame1002in the second position1110. The illustrated example ofFIG.13Aincludes an example first distance1302that illustrates the high ride height achieved by having the wheel assembly mount1010C and the wheel assembly mount1010D in the second position1110. The first distance1302illustrates the distance between the base of the vehicle frame1002and the ground at the second position1110. FIG.13Billustrates the example vehicle frame1002ofFIG.10as configured using the example wheel assembly mount1010C ofFIG.12Afor a low ride height of the example vehicle100ofFIG.1. The illustrated example ofFIG.13Bincludes the example wheel assembly mount1010C and the example wheel assembly mount1010D in the first position1108for a low ride height, as illustrated inFIG.12A. The illustrated example ofFIG.13Bincludes the example wheel assembly1008C and the example wheel assembly1008D coupled to the wheel assembly mount1010C and the wheel assembly mount1010D, respectively. In the illustrated example, both corresponding apertures are used for coupling the wheel assembly mount1010C and the wheel assembly mount1010D to the vehicle frame1002(no apertures are visible in the perspective view ofFIG.13B). The illustrated example ofFIG.13Bincludes an example second distance1306that illustrates the low ride height achieved by having the wheel assembly mount1010C and the wheel assembly mount1010D in the first position1108. The second distance1306illustrates the distance between the base of the vehicle frame1002and the ground at the first position1108. In the illustrated examples ofFIGS.13A and13B, the first distance1302is greater than the second distance1306. FIG.14is a flowchart representative of an example method1400to configure a ride height of a vehicle using the example wheel assembly mounts1010A,1010B,1010C,1010D ofFIGS.10,11A,11B,12A,12B,13A and/or13B. The example method1400begins at block1402at which the example wheel assembly mount (e.g., the wheel assembly mounts1010A,1010B,1010C,1010D) is oriented for a selected ride height. The wheel assembly mounts1010A,1010B,1010C,1010D include protrusions (e.g., the example first protrusion1104and/or the example second protrusion1106) that extend toward the vehicle frame. In some examples, the protrusions of the wheel assembly mounts1010A,1010B,1010C,1010D are oriented for the selected ride height. In examples disclosed herein, the selected ride height can be a first ride height of the vehicle frame (e.g., a low ride height) or a second ride height of the vehicle frame (e.g., a high ride height). At block1404, example protrusion(s) (e.g., the example first protrusion1104and/or the example second protrusion1106) of the example wheel assembly mount (e.g., the wheel assembly mounts1010A,1010B,1010C,1010D) are aligned with corresponding aperture(s) (e.g., the example apertures1012A,1012B) in the vehicle frame1002.
In some examples, the protrusion(s) (e.g., the first protrusion1104and/or the second protrusion1106) are aligned with apertures adjacent to each of a plurality of wheel assembly locations on the vehicle frame1002(e.g., the apertures1012A,1012B). The protrusion(s) (e.g., the first protrusion1104and/or the second protrusion1106) are positionable in the apertures (e.g., the apertures1012A,1012B) in a position (e.g., the first position1108or the second position1110) to provide the selected ride height of the vehicle frame1002. For example, the first protrusion1104and the second protrusion1106are aligned with the corresponding apertures1012A,1012B in the first position1108to provide the first ride height (low ride height), and the first protrusion1104is aligned with the corresponding aperture1012B in the second position1110to provide the second ride height (high ride height), as illustrated inFIGS.12A,12B,13A, and13B. For the first position1108, the protrusion(s) (e.g., the example first protrusion1104and/or the example second protrusion1106) are aligned with the aperture(s) (e.g., the apertures1012A,1012B) along a first axis that is substantially parallel to a longitudinal axis of the vehicle frame1002. For the second position1110, the protrusion(s) (e.g., the example first protrusion1104and/or the example second protrusion1106) are aligned with the aperture(s) (e.g., the apertures1012A,1012B) along a second axis that is substantially perpendicular to the longitudinal axis of the vehicle frame1002. At block1406, the example protrusion(s) (e.g., the example first protrusion1104and/or the example second protrusion1106) are coupled to the aperture(s) (e.g., the example apertures1012A,1012B). The wheel assembly mounts1010A,1010B,1010C,1010D are coupled to the vehicle frame1002via the coupling of the protrusion(s) (e.g., the first protrusion1104and/or the second protrusion1106) and the aperture(s) (e.g., the apertures1012A,1012B). FIG.15illustrates an example chassis1500having example electric motorized wheel assemblies1508A,1508B,1508C,1508D in accordance with the teachings of this disclosure. The example vehicle chassis1500ofFIG.15includes an example vehicle frame1502, example battery packs1504, an example center subframe1506, and the example wheel assemblies1508A,1508B,1508C,1508D. In the illustrated example ofFIG.15, the wheel assemblies1508A,1508B,1508C,1508D are coupled to the center subframe1506of the vehicle frame1502. In examples disclosed herein, each of the wheel assemblies1508A,1508B,1508C,1508D includes a wheel, an electric motor, a suspension assembly, and a frame mounting interface, which are discussed in further detail below in connection withFIG.16A. The wheel assemblies1508A,1508B,1508C,1508D are couplable to the center subframe1506via the frame mounting interface. In the illustrated example, the vehicle frame1502includes the battery packs1504. In examples disclosed herein, the battery packs1504power the electric motor of each of the wheel assemblies1508A,1508B,1508C,1508D. In the illustrated example, the wheel, the electric motor, the suspension assembly, and the frame mounting interface of the wheel assemblies1508A,1508B,1508C,1508D are interchangeable for different configurations (e.g., size, geometry, etc.). 
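As a non-limiting illustration of the orientation step of the example method1400 described above, the following minimal sketch maps a selected ride height to a mount position and protrusion-axis orientation. The function name and string labels are hypothetical assumptions; the actual method is a mechanical alignment and coupling procedure.

```python
# Minimal sketch of blocks 1402-1404 of example method 1400.
# Names and string labels are illustrative assumptions only.
def orient_wheel_assembly_mount(selected_ride_height: str) -> dict:
    """Map the selected ride height to a mount position and protrusion axis."""
    if selected_ride_height == "low":
        # First position 1108: protrusions aligned along an axis substantially
        # parallel to the longitudinal axis of the vehicle frame 1002.
        return {"position": "first position 1108",
                "protrusion_axis": "parallel to longitudinal axis"}
    # Second position 1110: protrusions aligned along an axis substantially
    # perpendicular to the longitudinal axis of the vehicle frame 1002.
    return {"position": "second position 1110",
            "protrusion_axis": "perpendicular to longitudinal axis"}
```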
In the illustrated example, the swappable or interchangeable components (e.g., the wheel, the electric motor, the suspension assembly, and the frame mounting interface) of the wheel assemblies1508A,1508B,1508C,1508D have common attachment and packaging strategies, which allows ride and performance needs to be met for the vehicle100while reducing the number of parts and complexity of manufacturing. FIG.16Aillustrates the example wheel assembly1508A ofFIG.15configured for the example vehicle100ofFIG.1. The example wheel assembly1508A ofFIG.16Aincludes an example wheel1602, an example suspension assembly1604, an example electric motor1606, and an example frame mounting interface1608. In the illustrated example, the wheel1602, the suspension assembly1604, the electric motor1606, and the frame mounting interface1608are interchangeable with other wheels, suspension assemblies, electric motors, and frame mounting interfaces, respectively. For example, each of the wheel1602, the suspension assembly1604, the electric motor1606, and the frame mounting interface1608is variable in size and/or geometry. For example, the electric motor1606can be interchanged with different sized electric motors, the geometry of the suspension assembly1604can be changed to adjust ride height for the vehicle100, the damping in the suspension assembly1604can be changed for different terrain, etc. In the illustrated example, the wheel assembly1508A is configured to easily switch out the components (the wheel1602, the suspension assembly1604, the electric motor1606, and the frame mounting interface1608) to allow for customization of the vehicle100to meet performance needs and ride quality expectations. In the illustrated example ofFIG.16A, the wheel1602, the suspension assembly1604, the electric motor1606, and the frame mounting interface1608are coupled in the wheel assembly1508A. In some examples, the electric motor1606is operatively coupled to the wheel1602. In such examples, the operation of the electric motor1606causes rotation of the wheel1602. In the illustrated example, the wheel assembly1508A (including the wheel1602, the suspension assembly1604, and the electric motor1606) is connected to the center subframe1506via the frame mounting interface1608, which is described in further detail below in connection withFIG.16B. In the illustrated example, the frame mounting interface1608is illustrated as a beam. However, the frame mounting interface1608can be implemented as a bar, a plate, a bracket, etc. In some examples, the frame mounting interface1608includes mounting points for suspension links and dampers in the wheel assembly1508A (not visible in the illustrated example ofFIG.16A). FIG.16Billustrates the example wheel assembly1508B ofFIG.15coupled to the example vehicle frame1502ofFIG.15. The example wheel assembly1508B ofFIG.16Bincludes an example wheel1610, an example suspension assembly1612, an example electric motor1614, and an example frame mounting interface1616. In examples disclosed herein, the wheel1610, the suspension assembly1612, the electric motor1614, and the frame mounting interface1616are the same as the wheel1602, the suspension assembly1604, the electric motor1606, and the frame mounting interface1608ofFIG.16A. In the illustrated example ofFIG.16B, the wheel assembly1508B (including the wheel1610, the suspension assembly1612, and the electric motor1614) is connected to the center subframe1506via the frame mounting interface1616.
In the illustrated example ofFIG.16B, the frame mounting interface1616is coupled to the center subframe1506of the vehicle frame1502by aligning the frame mounting interface1616on an example top surface1618of the center subframe1506. In some examples, the frame mounting interface1616is coupled to the center subframe1506of the vehicle frame1502via welding, bolts, etc. In the illustrated example, the wheel assembly1508B is connected to the center subframe1506via the frame mounting interface1616to allow for variability in size, geometry, etc. between the vehicle frame1502and the components of the wheel assembly1508B (the wheel1610, the suspension assembly1612, the electric motor1614, and the frame mounting interface1616) without the need for traditional axle connections from the center of the wheel assembly1508B containing the electric motor1614. FIG.17is a flowchart representative of an example method1700to configure the example wheel assemblies1508A,1508B,1508C,1508D ofFIGS.15,16A, and/or16B. The example method1700begins at block1702at which the example wheel assembly components are selected. In examples disclosed herein, the wheel assembly components include a wheel (e.g., the example wheel1602and the example wheel1610), an electric motor (e.g., the example electric motor1606and the example electric motor1614), a suspension assembly (e.g., the example suspension assembly1604and the example suspension assembly1612), and the frame mounting interface (e.g., the example frame mounting interface1608and the example frame mounting interface1616). In some examples, the wheel assembly components are interchangeable in the wheel assembly (e.g., the wheel assemblies1508A,1508B,1508C,1508D). In some examples, each of the wheel (e.g., the example wheels1602,1610), the electric motor (e.g., the example electric motors1606,1614), the suspension assembly (e.g., the example suspension assemblies1604,1612), and the frame mounting interface (e.g., the example frame mounting interfaces1608,1616) is variable in size and/or geometry. In some examples, the operator of the vehicle100can select the different components for the wheel assembly (e.g., the wheel assemblies1508A,1508B,1508C,1508D) to meet performance and ride requirements. At block1704, the example frame mounting interface (e.g., the example frame mounting interfaces1608,1616) of the wheel assembly (e.g., the wheel assemblies1508A,1508B,1508C,1508D) is aligned with the center subframe (e.g., the center subframe1506) of the vehicle100. In some examples, the frame mounting interface (e.g., the example frame mounting interfaces1608,1616) is aligned on a top surface (e.g., the example top surface1618) of the center subframe1506of the vehicle frame1502. At block1706, the example frame mounting interface (e.g., the example frame mounting interfaces1608,1616) of the wheel assembly (e.g., the wheel assemblies1508A,1508B,1508C,1508D) is coupled to the center subframe (e.g., the center subframe1506) of the vehicle100. In some examples, the frame mounting interface (e.g., the example frame mounting interfaces1608,1616) is coupled to the center subframe1506of the vehicle frame1502via welding, bolts, etc. In some examples, the wheel assembly (e.g., the wheel assemblies1508A,1508B,1508C,1508D) is connected to the center subframe1506via the frame mounting interface (e.g., the example frame mounting interfaces1608,1616) to allow for variability in size, geometry, etc.
between the vehicle frame1502and the components of the wheel assembly (e.g., the wheel assemblies1508A,1508B,1508C,1508D) without the need for traditional axle connections from the center of the wheel assembly. FIG.18is an illustration of an example vehicle chassis1800in which the teachings of this disclosure can be implemented. The vehicle chassis1800includes an example first crossmember1801A, an example second crossmember1801B, an example third crossmember1801C, and an example fourth crossmember1801D, an example first side rail1802A, an example second side rail1802B, an example third side rail1802C, and an example fourth side rail1802D. The vehicle chassis1800is generally divided into an example front chassis portion1804, an example rear chassis portion1806, and an example battery platform1808. In the illustrated example ofFIG.18, the front chassis portion1804is coupled to an example first electric motor1810A, an example first suspension assembly1812A, an example second suspension assembly1812B, an example first wheel1814A, and an example second wheel1814B. In the illustrated example ofFIG.18, the rear chassis portion1806is coupled to an example second electric motor1810B, an example third suspension assembly1812C, an example fourth suspension assembly1812D, an example third wheel1814C, and an example fourth wheel1814D. In the illustrated example ofFIG.18, the battery platform1808includes an example central battery array1815, an example first side battery array1816A, and an example second battery array1816B. In the illustrated example ofFIG.18, the vehicle chassis1800includes a perimeter frame. In other examples, the teachings of this disclosure can be applied on any other suitable type of vehicle frame (e.g., a ladder frame, a unibody frame, etc.). The crossmembers1801A,1801B,1801C,1801D extend generally laterally between the driver and passenger sides of the chassis1800. The crossmembers1801A,1801B,1801C,1801D increase the strength of the chassis1800and protect vehicle components (e.g., the electric motors1810A,1810B, the suspension assemblies1812A,1812B,1812C,1812D, etc.). In some examples, the crossmembers1801A,1801B,1801C,1801D include additional features (e.g., bolt holes, weld surfaces, etc.) that enable additional vehicle components to be coupled thereto. In the illustrated example ofFIG.18, the vehicle chassis1800includes four crossmembers (e.g., the crossmembers1801A,1801B,1801C,1801D, etc.). In other examples, the vehicle chassis1800includes a different quantity of crossmembers (e.g., two cross members, four cross members, etc.). The crossmembers1801A,1801B,1801C,1801D can be composed of steel, aluminum, and/or any other suitable material(s). The coupling of the crossmembers1801A,1801B,1801C,1801D within the chassis1800is described in greater detail below in conjunction withFIG.19. The side rails1802A,1802B,1802C,1802D extend longitudinally between the front chassis portion1804and the rear chassis portion1806. In the illustrated example ofFIG.18, the vehicle chassis1800includes four side rails (e.g., the side rails1802A,1802B,1802C,1802D, etc.). In other examples, the vehicle chassis1800includes a different quantity of side rails (e.g., two side rails, four side rails, etc.). The side rails1802A,1802B,1802C,1802D can be composed of steel, aluminum, and/or any other suitable material(s). The coupling of the side rails1802A,1802B,1802C,1802D within the chassis1800is described in greater detail below in conjunction withFIG.19. 
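As a further non-limiting illustration only, the general arrangement of the vehicle chassis1800(laterally extending crossmembers, longitudinally extending side rails, and a battery platform) can be captured as a simple data model. The Python sketch below uses hypothetical quantities, lengths, and materials that are assumptions for illustration and are not taken from the figures.

from dataclasses import dataclass, field
from typing import List

@dataclass
class StructuralMember:
    kind: str        # "crossmember" (extends laterally) or "side_rail" (extends longitudinally)
    length_mm: int   # hypothetical length
    material: str    # e.g., "steel" or "aluminum"

@dataclass
class BatteryPlatform:
    central_array: int                           # hypothetical battery counts
    side_arrays: List[int] = field(default_factory=list)

@dataclass
class Chassis:
    crossmembers: List[StructuralMember]
    side_rails: List[StructuralMember]
    battery_platform: BatteryPlatform

    def validate(self) -> None:
        # The example of FIG. 18 uses four crossmembers and four side rails;
        # other examples may use different quantities.
        assert all(m.kind == "crossmember" for m in self.crossmembers)
        assert all(m.kind == "side_rail" for m in self.side_rails)

chassis = Chassis(
    crossmembers=[StructuralMember("crossmember", 1500, "steel") for _ in range(4)],
    side_rails=[StructuralMember("side_rail", 2800, "steel") for _ in range(4)],
    battery_platform=BatteryPlatform(central_array=24, side_arrays=[8, 8]),
)
chassis.validate()

This model is only one possible representation; it simply reflects that the chassis is composed of discrete, separately selectable structural members and a battery platform.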
The crossmembers1801A,1801B,1801C,1801D, and the side rails1802A,1802B,1802C,1802D can be of variable size depending on the type and/or model of the vehicle. For example, longer or shorter crossmembers1801A,1801B,1801C,1801D can be selected to change the lateral size of the vehicle chassis1800. Similarly, longer or shorter side rails1802A,1802B,1802C,1802D can be selected to change the longitudinal size of the vehicle chassis1800. As such, by varying the size of the crossmembers1801A,1801B,1801C,1801D and side rails1802A,1802B,1802C,1802D, the footprint of the vehicle chassis1800can be scaled without changing the other components of the chassis1800, which enables shared components to be utilized on differently sized vehicle chassis. The selection of differently sized crossmembers and side rails is described in greater detail below in conjunction withFIG.19. Example configurations of the vehicle chassis1800using differently sized crossmembers and side rails are described below in conjunction withFIGS.20AandFIG.20B. In other examples, the crossmembers1801A,1801B,1801C,1801D and/or the side rails1802A,1802B,1802C,1802D include features (e.g., slidable rails, telescoping features, etc.) that enable length adjustment (e.g., extension, contraction, etc.) of the crossmembers1801A,1801B,1801C,1801D and/or the side rails1802A,1802B,1802C,1802D. An example vehicle chassis including adjustable crossmembers and adjustable side rails is described below in conjunction withFIG.21. Example structural members including adjustable features are described below in conjunction withFIGS.22A and22B. The front chassis portion1804includes the components of the chassis1800forward of the battery platform1808. The rear chassis portion1806includes the components of the chassis1800rearward of the battery platform1808. The front and rear chassis portions1804,1806can be composed of smaller chassis portions coupled via the crossmembers1801A,1801B,1801C,1801D. An example implementation of the chassis portions1804,1806variable size is described below in conjunction withFIGS.23-24B. The electric motors1810A,1810B are powertrain components that convert electric power provided by the batteries of the battery arrays1815,1816A,1816B into mechanical energy that drives the wheels1814A,1814B,1814C,1814D. In some examples, the parameters of the electric motors1810A,1810B (e.g., horsepower, torque, size, etc.) are chosen based on the configuration of the chassis1800(e.g., the length of the crossmembers1801A,1801B,1801C,1801D and/or the side rails1802A,1802B,1802C,1802D, etc.) and/or the model of the vehicle associated with the chassis1800. In other examples, the electric motors1810A,1810B are absent. In such examples, other powertrain components (e.g., one or more combustion engine(s), etc.) can be coupled between the crossmembers1801A,1801B,1801C,1801D. The batteries of the battery arrays1815,1816A,1816B are EV batteries. The batteries of the battery arrays1815,1816A,1816B provide power to the electric motors1810A,1810B. In other examples, if the vehicle chassis1800is associated with a hybrid vehicle, the batteries of the battery arrays1815,1816A,1816B supplement the power generated by a combustion engine of the vehicle chassis1800. The central battery array1815is disposed between the second side rail1802B and the third side rail1802C. The first side battery array1816A is disposed between the first side rail1802A and the second side rail1802B. 
The second side battery array1816B is disposed between the third side rail1802C and the fourth side rail1802D. In some examples, additional batteries are disposed within the chassis1800(e.g., in the front chassis portion1804, in the rear chassis portion1806, etc.). In some examples, the side battery arrays1816A,1816B are absent (e.g., in examples with two side rails, etc.). Example chassis configurations including additional batteries are described below in conjunction withFIGS.23,24A, and24B. FIG.19is a perspective view of an example vehicle chassis1900with the different width and length configurations. In the illustrated example ofFIG.19, the front chassis portion1804includes an example right front chassis portion1902and an example left front chassis portion1904. In the illustrated example ofFIG.19, the rear chassis portion1806includes an example right rear chassis portion1906and an example left rear chassis portion1908. In the illustrated example ofFIG.19, the right front chassis portion1902includes an example first longitudinal member1912, an example first flared portion1914, an example first crossmember attachment locator1916, an example second crossmember attachment locator1918, an example first side rail attachment locator1944, and an example second side rail attachment locator1946. In the illustrated example ofFIG.19, the left front chassis portion1904includes an example second longitudinal member1920, an example second flared portion1922, an example third crossmember attachment locator1924, an example fourth crossmember attachment locator1926, an example third side rail attachment locator1948, and an example fourth side rail attachment locator1950. In the illustrated example ofFIG.19, the right rear chassis portion1906includes an example third longitudinal member1928, an example third flared portion1930, an example fifth crossmember attachment locator1931, an example sixth crossmember attachment locator1932, an example fifth side rail attachment locator1952, and an example sixth side rail attachment locator1954. In the illustrated example ofFIG.19, the left rear chassis portion1908includes an example fourth longitudinal member1934, an example fourth flared portion1936, an example seventh crossmember attachment locator1938, an example eighth crossmember attachment locator1940, an example seventh side rail attachment locator1956, and an example eighth side rail attachment locator1958. The chassis portions1902,1904,1906,1908each include a corresponding one of the longitudinal members1912,1920,1928,1934and one of the flared portions1914,1922,1930,1936. The flared portions1914,1922,1930,1936can be fully or partially hollow. In other examples, the flared portions1914,1922,1930,1936are solid parts. In the illustrated example ofFIG.19, the flared portions1914,1922,1930,1936are trapezoidal prisms. In other examples, the flared portions1914,1922,1930,1936can have any other suitable shape (e.g., a forked structure, a conical structure, pyramidal structure, a cylindrical structure, etc.). In the illustrated example ofFIG.19, the flared portions1914,1922are disposed at the respective rearward ends of the longitudinal members1912,1920. In the illustrated example ofFIG.19, the flared portions1930,1936are disposed at the respective forward ends of the longitudinal members1928,1934. 
In some examples, each of the flared portions1914,1922,1930,1936and the corresponding longitudinal members1912,1920,1928,1934(e.g., the first flared portion1914and the first longitudinal member1912, the second flared portion1922and the second longitudinal member1920, etc.) is a unitary structure. In other examples, the flared portions1914,1922,1930,1936can be coupled to the corresponding longitudinal members1912,1920,1928,1934via any suitable fastening technique(s) (e.g., welds, press-fit, chemical adhesive, one or fasteners, etc.). In some examples, to minimize cost and to simplify manufacturing/assembly, the longitudinal members1912,1920,1928,1934are of the same design and dimensions. Similarly, in some examples, the flared portions1914,1922,1930,1936are of the same design and dimensions. In such examples, the chassis portions1902,1904,1906,1908include the same parts, which reduces the total number of unique parts of the chassis1900. The crossmember attachment locators1916,1918,1924,1926,1931,1932,1938,1940are features of the chassis portions1902,1904,1906,1908that enable the coupling of the crossmembers1801A,1801B,1801C,1801D. That is, the first crossmember attachment locator1916and the third crossmember attachment locator1924facilitate the coupling of the first crossmember1801A between the first longitudinal member1912of the right front chassis portion1902and the second longitudinal member1920of the left front chassis portion1904. The second crossmember attachment locator1918and the fourth crossmember attachment locator1926facilitate the coupling of the second crossmember1801B between the first longitudinal member1912of the right front chassis portion1902and the second longitudinal member1920of the left front chassis portion1904. The fifth crossmember attachment locator1931and the seventh crossmember attachment locator1938facilitate the coupling of the third crossmember1801C between the third longitudinal member1928of the right rear chassis portion1906and the fourth longitudinal member1934of the left rear chassis portion1908. The sixth crossmember attachment locator1932and the eighth crossmember attachment locator1940facilitate the coupling of the fourth crossmember1801D between the third longitudinal member1928of the right rear chassis portion1906and the fourth longitudinal member1934of the left rear chassis portion1908. The crossmember attachment locators1916,1918,1924,1926,1931,1932,1938,1940include one or more feature(s) that enable the coupling of the crossmembers1801A,1801B,1801C,1801D therebetween. For example, the crossmember attachment locators1916,1918,1924,1926,1931,1932,1938,1940can include inboard extending protrusions to be coupled within an aperture (e.g., the hollow cross-sections of the crossmembers1801A,1801B,1801C,1801D, etc.) of the corresponding crossmembers1801A,1801B,1801C,1801D. In such examples, the protrusions of the crossmember attachment locators1916,1916,1918,1924,1926,1931,1932,1938,1940may be dimensioned to frictionally engage with the internal surface of the corresponding apertures of the crossmembers1801A,1801B,1801C,1801D. In other examples, the crossmember attachment locators1916,1918,1924,1926,1931,1932,1938,1940include apertures to receive corresponding outboard extending protrusions of the crossmembers1801A,1801B,1801C,1801D. At the crossmember attachment locators1916,1918,1924,1926,1931,1932,1938,1940, the crossmembers1801A,1801B,1801C,1801D can be coupled to the corresponding chassis portions1902,1904,1906,1908via one or more welds. 
In other examples, the crossmembers1801A,1801B,1801C,1801D can be coupled to the corresponding chassis portions1902,1904,1906,1908via any fastening technique (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. Additionally or alternatively, the crossmember attachment locators1916,1918,1924,1926,1931,1932,1938,1940can include a bracket and/or other feature that facilitates the coupling of the crossmembers1801A,1801B,1801C,1801D. In the illustrated example ofFIG.19, the crossmembers1801A,1801B,1801C,1801D are implemented by one of example interchangeable crossmembers1942A,1942B,1942C,1942D. The interchangeable crossmembers1942A,1942B,1942C,1942D are structural members of different lengths. That is, the first interchangeable crossmember1942A is the longest of the interchangeable crossmembers1942A,1942B,1942C,1942D. The second interchangeable crossmember1942B is the second longest of the interchangeable crossmembers1942A,1942B,1942C,1942D. The third interchangeable crossmember1942C is the third longest of the interchangeable crossmembers1942A,1942B,1942C,1942D. The fourth interchangeable crossmember1942D is the shortest of the interchangeable crossmembers1942A,1942B,1942C,1942D. Depending on which of the interchangeable crossmembers1942A,1942B,1942C,1942D implements the crossmembers1801A,1801B,1801C,1801D, the width of the chassis1900can be changed. As such, the chassis1900supports various width configurations with only the changing of the crossmembers1801A,1801B,1801C,1801D. Two example configurations of the chassis1900illustrating the use of the first interchangeable crossmember1942A and the fourth interchangeable crossmember1942D are described below in conjunction withFIGS.20A and20B, respectively. The side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958are features of the chassis portions1902,1904,1906,1908that enable the coupling of the side rails1802A,1802B,1802C,1802D. That is, the first side rail attachment locator1944and the fifth side rail attachment locator1952facilitate the coupling of the first side rail180A between the first flared portion1914of the right front chassis portion1902and the third flared portion1930of the right rear chassis portion1906. The second side rail attachment locator1946and the sixth side rail attachment locator1954facilitate the coupling of the second side rail1802B between the first flared portion1914of the right front chassis portion1902and the third flared portion1930of the right rear chassis portion1906. The third side rail attachment locator1948and the seventh side rail attachment locator1956facilitate the coupling of the third side rail1802C between the second flared portion1922of the left front chassis portion1904and the fourth flared portion1936of the left rear chassis portion1908. The fourth side rail attachment locator1950and the eighth side rail attachment locator1958facilitate the coupling of the fourth side rail1802D between the second flared portion1922of the left front chassis portion1904and the fourth flared portion1936of the left rear chassis portion1908. The side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958include one or more feature(s) that enable the coupling of the side rails1802A,1802B,1802C,1802D therebetween. For example, the side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958can include protrusions to be coupled within corresponding apertures (e.g., the hollow cross-sections of the side rails1802A,1802B,1802C,1802D, etc.) 
of the corresponding side rails1802A,1802B,1802C,1802D. In such examples, the protrusions of the side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958may be dimensioned to frictionally engage with the internal surface of the corresponding apertures of the side rails1802A,1802B,1802C,1802D. In other examples, the side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958include apertures to receive a corresponding protrusion of the side rails1802A,1802B,1802C,1802D. At the side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958, the side rails1802A,1802B,1802C,1802D can be coupled to the corresponding chassis portions1902,1904,1906,1908via one or more welds. In other examples, the side rails1802A,1802B,1802C,1802D are coupled to the corresponding chassis portions1902,1904,1906,1908via any fastening technique (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. Additionally or alternatively, the side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958can include a bracket and/or other feature that facilitates the coupling of the side rails1802A,1802B,1802C,1802D. In the illustrated example ofFIG.19, the side rails1802A,1802B,1802C,1802D are implemented by one of example interchangeable side rails1960A,1960B,1960C,1960D. The interchangeable side rails1960A,1960B,1960C,1960D are structural members of different lengths. That is, the first interchangeable side rail1960A is the longest of the interchangeable side rails1960A,1960B,1960C,1960D. The second interchangeable side rail1960B is the second longest of the interchangeable side rails1960A,1960B,1960C,1960D. The third interchangeable side rail1960C is the third longest of the interchangeable side rails1960A,1960B,1960C,1960D. The fourth interchangeable side rail1960D is the shortest of the interchangeable side rails1960A,1960B,1960C,1960D. Depending on which of the interchangeable side rails1960A,1960B,1960C,1960D implements the side rails1802A,1802B,1802C,1802D, the length of the chassis1900can be changed. As such, the chassis1900supports various length configurations with only the changing of the side rails1802A,1802B,1802C,1802D. Two example configurations of the chassis1800illustrating the use of the first interchangeable side rail1960A and the fourth interchangeable side rail1960D are described below in conjunction withFIGS.20A and20B, respectively. FIG.20Ais a top view of an example first configuration2000of the chassis1900. In the illustrated example ofFIG.20A, the first configuration2000of the chassis1900includes the first interchangeable crossmember1942A implementing each of the crossmembers1801A,1801B,1801C,1801D and the first interchangeable side rail1960A implementing each of the side rails1802A,1802B,1802C,1802D. In the illustrated example ofFIG.20A, the chassis1800has a comparatively larger width and length, which makes the first configuration2000suitable for larger vehicles (e.g., SUVs, pick-up trucks, etc.). FIG.20Bis a top view of an example second configuration2002of the chassis1900. In the illustrated example ofFIG.20B, the second configuration2002of the chassis1800includes the fourth interchangeable crossmember1942D implementing each of the crossmembers1801A,1801B,1801C,1801D and the fourth interchangeable side rail1960D implementing each of the side rails1802A,1802B,1802C,1802D. 
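By way of a non-limiting illustration of the selection just described, the choice among the interchangeable crossmembers1942A,1942B,1942C,1942D and the interchangeable side rails1960A,1960B,1960C,1960D can be thought of as picking the closest catalog length for a target width or length. The following Python sketch assumes hypothetical lengths (in millimeters) for the members; the actual dimensions are not specified in this disclosure.

# Hypothetical catalog lengths (mm); the members are simply structural members of
# decreasing length, with 1942A/1960A the longest and 1942D/1960D the shortest.
CROSSMEMBERS = {"1942A": 1700, "1942B": 1550, "1942C": 1400, "1942D": 1250}
SIDE_RAILS = {"1960A": 3200, "1960B": 2950, "1960C": 2700, "1960D": 2450}

def select_member(catalog: dict, target_mm: int) -> str:
    """Pick the catalog member whose length is closest to the target."""
    return min(catalog, key=lambda part: abs(catalog[part] - target_mm))

# A wider/longer configuration (as in FIG. 20A) versus a narrower/shorter
# configuration (as in FIG. 20B), changing only the crossmembers and side rails.
large = (select_member(CROSSMEMBERS, 1700), select_member(SIDE_RAILS, 3200))
small = (select_member(CROSSMEMBERS, 1250), select_member(SIDE_RAILS, 2450))
print(large)  # ('1942A', '1960A')
print(small)  # ('1942D', '1960D')

Mixed selections (e.g., a longer crossmember paired with a shorter side rail) are equally possible, consistent with the combinations noted below.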
In the illustrated example ofFIG.20B, the chassis1900has a comparatively smaller footprint, which makes the second configuration2002suitable for smaller vehicles (e.g., compact vehicles, crossovers, etc.). In some examples, the battery arrays1815,1816A,1816B have different numbers of batteries in different configurations of the vehicle chassis1900. In the illustrated examples ofFIGS.20A and20B, the first configuration2000of the vehicle chassis1900includes a comparatively greater number of batteries than the second configuration2002of the vehicle chassis1900. While the configurations2000,2002ofFIGS.20A and20Binclude particular combinations of the interchangeable crossmembers1942A,1942B,1942C,1942D and the interchangeable side rails1960A,1960B,1960C,1960D (e.g., the comparatively long first interchangeable crossmember1942A and the comparatively longer first side rail1960A ofFIG.20A, the comparatively short fourth interchangeable crossmember1942D and the comparatively shorter fourth side rail1960D ofFIG.20B, etc.), any combination of the interchangeable crossmembers1942A,1942B,1942C,1942D and the interchangeable side rails1960A,1960B,1960C,1960D can be used with the vehicle chassis1900. For example, example configurations of the vehicle chassis1900include a comparatively longer interchangeable crossmember (e.g., the interchangeable crossmember1942A,1942B, etc.) and a comparatively shorter interchangeable side rail (e.g., the interchangeable side rail1960C,1960D, etc.) and vice versa. FIGS.21-24depict alternative vehicle chassis that may be used to implement the teachings of this disclosure and that are similar to those described with reference toFIGS.18-20. When the same element number is used in connection withFIGS.21-24as used inFIGS.18-20, it has the same meaning unless indicated otherwise. FIG.21is a perspective view of an alternative vehicle chassis2100including adjustable crossmember(s)2102and side rail(s)2104. In the illustrated example ofFIG.21, the vehicle chassis2100includes the right front chassis portion1902, the left front chassis portion1904, the right rear chassis portion1906, and the left rear chassis portion1908. In the illustrated example ofFIG.21, the right front chassis portion1902includes the example first longitudinal member1912, the example first flared portion1914, the example first crossmember attachment locator1916, the example second crossmember attachment locator1918, the example first side rail attachment locator1944, and the example second side rail attachment locator1946. In the illustrated example ofFIG.21, the left front chassis portion1904includes the example second longitudinal member1920, the example second flared portion1922, the example third crossmember attachment locator1924, the example fourth crossmember attachment locator1926, the example third side rail attachment locator1948, and the example fourth side rail attachment locator1950. In the illustrated example ofFIG.21, the right rear chassis portion1906includes the example third longitudinal member1928, the example third flared portion1930, the example fifth crossmember attachment locator1931, the example sixth crossmember attachment locator1932, the example fifth side rail attachment locator1952, and the example sixth side rail attachment locator1954.
In the illustrated example ofFIG.19, the left rear chassis portion1908includes the example fourth longitudinal member1934, the example fourth flared portion1936, the example seventh crossmember attachment locator1938, the example eighth crossmember attachment locator1940, the example seventh side rail attachment locator1956, and the example eighth side rail attachment locator1958. The adjustable structural member that can be used to implement the adjustable crossmember(s)2102and/or the adjustable side rail(s)2104is described below in conjunction withFIG.22A. An alternative adjustable structural member that can be used to implement the adjustable crossmember(s)2102and/or the adjustable side rail(s)2104is described in detail below in conjunction withFIG.22B. In the illustrated example ofFIG.21, the crossmembers1801A,1801B,1801C,1801D can be implemented by the adjustable crossmember2102. The adjustable crossmember(s)2102are structural members with variably adjustable lengths. For example, the adjustable crossmember(s)2102can be adjusted to the desired length during the assembly of the chassis2100. In some examples, the desired length of the adjustable crossmembers(s)2102is determined based on the model associated with the chassis2100and/or the desired total width of the chassis2100. That is, depending on the adjusted length of the adjustable crossmember(s)2102, the width of the chassis2100can be changed. As such, the chassis2100supports various width configurations based only on the adjustment of the adjustable crossmember(s)2102. In some examples, the adjustable crossmember(s)2102include one or more feature(s) that enable the adjustable crossmember(s)2102to be coupled to the crossmember attachment locators1916,1918,1924,1926,1931,1932,1938,1940. For example, the adjustable crossmember(s)2102can include apertures (e.g., a hollow cross-section, etc.) to receive corresponding protrusions of the crossmember attachment locators1916,1918,1924,1926,1931,1932,1938,1940. In other examples, the adjustable crossmember(s)2102includes protrusions to be received by corresponding apertures of the crossmember attachment locators1916,1918,1924,1926,1931,1932,1938,1940. At the crossmember attachment locators1916,1918,1924,1926,1931,1932,1938,1940, the adjustable crossmember(s)2102can be coupled to the corresponding chassis portions1902,1904,1906,1908via one or more welds. In other examples, the adjustable crossmember(s)2102are coupled to the corresponding chassis portions1902,1904,1906,1908via any fastening technique (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In the illustrated example ofFIG.21, the side rails1802A,1802B,1802C,1802D are implemented by the example adjustable side rail(s)2104. The adjustable side rail(s)2104are structural members with variable lengths. For example, the adjustable side rail(s)2104can be adjusted to the desired length during the assembly of the chassis2100. In some examples, the desired length of the adjustable side rail(s)2104is determined based on the model associated with the chassis2100and/or the desired total length of the chassis2100. That is, depending on the adjusted length of the adjustable side rail(s)2104, the length of the chassis2100can be changed. As such, the chassis2100supports various length configurations based only on the adjustment of the adjustable side rail(s)2104. 
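As another non-limiting illustration, the difference between the interchangeable members ofFIG.19(discrete catalog lengths) and the adjustable members ofFIG.21(a length set during assembly and then fixed) can be sketched in Python as follows; the numeric limits are hypothetical assumptions that do not appear in the figures.

class AdjustableStructuralMember:
    """Telescoping or slidably adjustable member whose length is set during
    assembly of the chassis and then fixed (e.g., by welding)."""

    def __init__(self, min_length_mm: int, max_length_mm: int):
        self.min_length_mm = min_length_mm
        self.max_length_mm = max_length_mm
        self.length_mm = min_length_mm
        self.fixed = False

    def set_length(self, desired_mm: int) -> None:
        # The desired length is determined by the vehicle model and/or the
        # desired total width or length of the chassis.
        if self.fixed:
            raise RuntimeError("member has already been fixed at its final length")
        if not (self.min_length_mm <= desired_mm <= self.max_length_mm):
            raise ValueError("desired length is outside the adjustable range")
        self.length_mm = desired_mm

    def fix_permanently(self) -> None:
        # A weld makes the length permanent; a removable fastener or adhesive
        # would instead allow the member to be readjusted for a different chassis.
        self.fixed = True

# Set an adjustable crossmember to suit a desired chassis width, then fix it.
crossmember = AdjustableStructuralMember(min_length_mm=1200, max_length_mm=1800)
crossmember.set_length(1525)
crossmember.fix_permanently()

The same sketch applies to the adjustable side rail(s)2104, with the adjustment setting the chassis length rather than the chassis width.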
In some examples, the adjustable side rail(s)2104include one or more feature(s) that enable the adjustable side rail(s)2104to be coupled to the side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958. For example, the adjustable side rail(s)2104can include apertures (e.g., a hollow cross-section, etc.) to receive corresponding protrusions of the side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958. In other examples, the adjustable side rail(s)2104can include protrusions to be received by corresponding apertures of the side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958. At the side rail attachment locators1944,1946,1948,1950,1952,1954,1956,1958, the adjustable side rail(s)2104can be coupled to the corresponding chassis portions1902,1904,1906,1908via one or more welds. In other examples, the adjustable side rail(s)2104can be coupled to the corresponding chassis portions1902,1904,1906,1908via any fastening technique (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. FIG.22Ais a perspective view of an example adjustable structural member2200that can be used to implement the adjustable crossmember(s)2102and/or the adjustable side rail(s)2104ofFIG.21. In the illustrated example, the adjustable structural member2200includes an example inner member2202, an example first outer sleeve2204A, and an example second outer sleeve2204B. The inner member2202has an example first end2205A and an example second end2205B. The adjustable structural member2200is a telescoping structural member. In the illustrated example ofFIG.22A, the outer sleeves2204A,2204B (e.g., telescoping features, etc.) slide relative to the inner member2202such that the adjustable structural member2200can be adjusted to the desired length. In the illustrated example ofFIG.22A, the adjustable structural member2200is configured to have a relatively longer length by adjusting the outer sleeves2204A,2204B such that a portion of the outer sleeves2204A,2204B extends past the ends2205A,2205B, respectively. In other examples, the adjustable structural member2200can be configured to have a relatively shorter length by adjusting the outer sleeves2204A,2204B such that the first outer sleeve2204A abuts the second outer sleeve2204B. That is, the total length of the adjustable structural member2200can be adjusted by positioning the outer sleeves2204A,2204B. In some examples, after the outer sleeves2204A,2204B have been positioned to achieve the desired length of the adjustable structural member2200, the outer sleeves2204A,2204B can be permanently fixed relative to the inner member2202via welds and/or other suitable fastening techniques. In other examples, the outer sleeves2204A,2204B can be removably fixed relative to the inner member2202via a chemical adhesive, a fastener, and/or other suitable fastening technique(s). In some such examples, the adjustable structural member2200can be readjusted to have a different length (e.g., for use on a different chassis, etc.). In some examples, the inner member2202and/or the outer sleeves2204A,2204B can include features (not illustrated) that facilitate fixing the structural member2200at the desired length. In some such examples, the inner member2202and/or the outer sleeves2204A,2204B include one or more apertures to receive one or more fasteners (e.g., bolts, pins, screws, etc.). FIG.22Bis a perspective view of an example alternative adjustable structural member2206that can be used with the alternative vehicle chassis2100ofFIG.21.
In the illustrated example ofFIG.22B, the adjustable structural member2206includes an example first inner rail2208A, an example second inner rail2208B, an example first outer rail2210A, and an example second outer rail2210B. In the illustrated example ofFIG.22B, the first inner rail2208A includes an example first track2212A and an example second track2212B. In the illustrated example ofFIG.22B, the second inner rail2208B includes an example third track2212C and an example fourth track2212D. In the illustrated example ofFIG.22B, the outer rails2210A,2210B include an example first boss2214A and an example second boss2214B, respectively. In the illustrated example ofFIG.22B, the inner rails2208A,2208B have an example first inner end2216A and an example second inner end2216B, respectively. In the illustrated example ofFIG.22B, the outer rails2210A,2210B define an example first outer end2218A and an example second outer end2218B. The adjustable structural member2206is a slidably adjustable structural member. The length of the adjustable structural member2206can be adjusted by changing the position of the inner rails2208A,2208B relative to the outer rails2210A,2210B (e.g., slidably adjustable features, etc.). For example, the bosses2214A,2214B can slide within the corresponding tracks2212A,2212B,2212C,2212D (e.g., the first boss2214A within the first track2212A and the third track2212C, the second boss2214B within the second track2212B and the fourth track2212D, etc.). For example, the adjustable structural member2206can be adjusted to have a relatively shorter length by adjusting the rails2208A,2208B,2210A,2210B such that the first inner end2216A of the first inner rail2208A abuts the second inner end2216B of the second inner rail2208B. The adjustable structural member2206can be adjusted to have a relatively longer length by adjusting the rails2208A,2208B,2210A,2210B such that the first inner end2216A of the first inner rail2208A is proximate to the outer end2218A and the second inner end2216B of the second inner rail2208B is proximate to the second outer end2218B. That is, the total length of the adjustable structural member2206can be adjusted by positioning the rails2208A,2208B,2210A,2210B. In some examples, after the rails2208A,2208B,2210A,2210B have been positioned to achieve the desired length of the adjustable structural member2206, the relative positions of the rails2208A,2208B,2210A,2210B can be permanently fixed via welds and/or another suitable fastening techniques. For example, the bosses2214A,2214B can welded within the corresponding tracks2212A,2212B,2212C,2212D at the desired location. In other examples, the relative positions of the rails2208A,2208B,2210A,2210B can be removably fixed via a chemical adhesive, a fastener, and/or another suitable example. In some such examples, the adjustable structural member2206can be readjusted to have a different length (e.g., for use on a differently sized chassis, etc.). In some examples, some or all of the rails2208A,2208B,2210A,2210B include features (not illustrated) that facilitate fixing the adjustable structural member2206at the desired length. In some such examples, some or all of the rails2208A,2208B,2210A,2210B include one or more apertures to receive one or more fasteners (e.g., bolts, pins, screws, etc.). FIG.23is a perspective view of an example second alternative vehicle chassis2300. The example vehicle chassis2300includes an example battery platform2302. 
The battery platform2302can be coupled to one of an example first interchangeable front chassis portion2304A or an example second interchangeable front chassis portion2304B. The battery platform2302can be coupled to one of an example first interchangeable rear chassis portion2306A or an example second interchangeable rear chassis portion2306B. The example first interchangeable front chassis portion2304A includes example first attachment locators2308, example first crossmembers2310, and example first longitudinal members2311. The example second interchangeable front chassis portion2304B includes example second attachment locators2312, an example first frame section2314, an example first battery array2316, example second crossmembers2318, and example second longitudinal members2319. The example first interchangeable rear chassis portion2306A includes example third attachment locators2320, example third crossmembers2322, and example third longitudinal members2323. The example second interchangeable rear chassis portion2306B includes example fourth attachment locators2324, an example second frame section2326, an example second battery array2328, example fourth crossmembers2330, and example fourth longitudinal members2331. The example battery platform2302includes example fifth attachment locators2332and example sixth attachment locators2334. In the illustrated example ofFIG.23, the interchangeable front chassis portions2304A,2304B include the example first electric motor1810A, the example first suspension assembly1812A, the example second suspension assembly1812B, the example first wheel1814A, and the example second wheel1814B. In the illustrated example ofFIG.23, the interchangeable rear chassis portions2306A,2306B include the example second electric motor1810B, the example third suspension assembly1812C, the example fourth suspension assembly1812D, the example third wheel1814C, and the example fourth wheel1814D. The battery platform2302is a common component shared between different configurations of the chassis2300. The example platform2302includes a plurality of structural members (e.g., crossmembers, side rails, etc.) and EV batteries. The fifth attachment locators2332can be coupled to the corresponding first attachment locators2308of the first interchangeable front chassis portion2304A or the corresponding second attachment locators2312of the second interchangeable front chassis portion2304B. The sixth attachment locators2334can be coupled to the corresponding third attachment locators2320of the first interchangeable rear chassis portion2306A or the corresponding fourth attachment locators2324of the second interchangeable rear chassis portion2306B. In the illustrated example ofFIG.23, the attachment locators2308,2312,2320,2324include protrusions to be received by corresponding apertures of the attachment locators2332,2334of the battery platform2302. In other examples, the attachment locators2332,2334of the battery platform2302include protrusions to be received by the attachment locators2308,2312,2320,2324. Additionally or alternatively, the battery platform2302can be coupled to a corresponding one of the interchangeable front chassis portions2304A,2304B and a corresponding one of the interchangeable rear chassis portions2306A,2306B via additional fastening techniques (e.g., welds, press-fits, chemical adhesives, fasteners, etc.).
The second interchangeable front chassis portion2304B has a comparatively greater width and comparatively greater length than the first interchangeable front chassis portion2304A. In the illustrated example ofFIG.23, the structural members of the second interchangeable front chassis portion2304B (e.g., the crossmembers2318, the longitudinal members2319, etc.) are longer than the structural members of the first interchangeable front chassis portion2304A. In the illustrated example ofFIG.23, the second interchangeable front chassis portion2304B includes the first frame section2314, which further contributes to the greater length of the second interchangeable front chassis portion2304B compared to the first interchangeable front chassis portion2304A. In other examples, the first frame section2314is absent. The second interchangeable rear chassis portion2306B has a comparatively greater width and comparatively greater length than the first interchangeable rear chassis portion2306A. In the illustrated example ofFIG.23, the structural members of the second interchangeable rear chassis portion2306B (e.g., the crossmembers2330, the longitudinal members2331, etc.) are longer than the structural members of the first interchangeable rear chassis portion2306A. In the illustrated example ofFIG.23, the second interchangeable rear chassis portion2306B includes the second frame section2326, which further contributes to the greater length of the second interchangeable rear chassis portion2306B compared to the first interchangeable rear chassis portion2306A. In other examples, the second frame section2326is absent. Depending on which of the interchangeable front chassis portions2304A,2304B is coupled to the battery platform2302and which of the interchangeable rear chassis portions2306A,2306B is coupled to the battery platform2302, the width and the length of the chassis2300can be changed. While only two sizes of chassis portions are depicted inFIG.23, the width and length of the interchangeable chassis assemblies can be designed and manufactured based on the desired width and length of the chassis2300. As such, the chassis2300supports various width and length configurations depending on which of the interchangeable chassis portions2304A,2304B,2306A,2306B is utilized. Two example configurations of the chassis2300are described below in conjunction withFIGS.24A and24B. FIG.24Ais a top view of an example first configuration2400of the chassis2300ofFIG.23including the relatively smaller interchangeable chassis portions2304A,2306A. In the illustrated example ofFIG.24A, the first configuration2400includes the battery platform2302, the first interchangeable front chassis portion2304A, and the first interchangeable rear chassis portion2306A. In the illustrated example ofFIG.24A, the chassis2300has a comparatively small footprint, which makes the first configuration2400suitable for smaller vehicles (e.g., compact vehicles, crossovers, etc.). FIG.24Bis a top view of an example second configuration2402of the chassis2300ofFIG.23including the relatively larger interchangeable chassis portions2304B,2306B. In the illustrated example ofFIG.24B, the second configuration2402includes the battery platform2302, the second interchangeable front chassis portion2304B, and the second interchangeable rear chassis portion2306B. In the illustrated example ofFIG.24B, the chassis2300has a comparatively larger footprint, which makes the second configuration2402suitable for larger vehicles (e.g., SUVs, pick-up trucks, etc.).
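Purely as a non-limiting illustration of the configurations ofFIGS.24A and24B, the composition of the chassis2300from a common battery platform and interchangeable front and rear chassis portions can be sketched in Python as follows; the dimensions and battery counts are hypothetical assumptions and are not taken from the figures.

from dataclasses import dataclass

@dataclass(frozen=True)
class ChassisPortion:
    name: str
    length_mm: int        # hypothetical dimensions
    width_mm: int
    extra_batteries: int  # batteries carried in an optional frame section

@dataclass(frozen=True)
class BatteryPlatform:
    length_mm: int
    width_mm: int
    batteries: int

def assemble(front: ChassisPortion, platform: BatteryPlatform, rear: ChassisPortion):
    """Return the overall footprint and battery count of the chassis obtained by
    coupling a front portion, the common battery platform, and a rear portion
    at their attachment locators."""
    length = front.length_mm + platform.length_mm + rear.length_mm
    width = max(front.width_mm, platform.width_mm, rear.width_mm)
    batteries = platform.batteries + front.extra_batteries + rear.extra_batteries
    return length, width, batteries

platform_2302 = BatteryPlatform(length_mm=2000, width_mm=1500, batteries=24)
front_small = ChassisPortion("2304A", 1100, 1500, 0)
rear_small = ChassisPortion("2306A", 1100, 1500, 0)
front_large = ChassisPortion("2304B", 1400, 1700, 6)
rear_large = ChassisPortion("2306B", 1400, 1700, 6)

print(assemble(front_small, platform_2302, rear_small))  # smaller footprint
print(assemble(front_large, platform_2302, rear_large))  # larger footprint, more batteries

Coupling the larger portions to the same platform yields the larger footprint and the greater battery count described for the second configuration2402.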
The larger platform of second configuration2402enables an additional example first side battery array2404A and an additional example second side battery array2404B. In the illustrated examples ofFIGS.24A and24B, the second configuration2402includes additional batteries disposed with the frame sections2314,2326and within the battery arrays2316,2328,2404A,2404B. That is, comparatively larger configurations (e.g., the second configuration2402, etc.) of the chassis2300enable more batteries to be coupled within the chassis2300than comparatively smaller configurations (e.g., the first configuration2400, etc.). FIG.25is a flowchart representative of an example method2500to assemble the example chassis1900,2100ofFIGS.19and21, respectively. The example method2500begins at block2502, the model of the vehicle associated with the chassis1900,2100is determined. For example, the model of the vehicle can be determined to be a pick-up truck model, a compact car model, an SUV model, a crossover model, a van model, etc. In some examples, the footprint associated with the determined model is determined. At block2504, the right front chassis portion1902is assembled. For example, the first longitudinal member1912(e.g., including the crossmember attachment locators1916,1918, etc.) and the first flared portion1914are coupled to form the right front chassis portion1902. In some examples, the first longitudinal member1912and the first flared portion1914are coupled together via one or more welds. In other examples, the first longitudinal member1912and the first flared portion1914can be coupled together via any other suitable fastening techniques (e.g., press-fit, a chemical adhesive, etc.). In some examples, the first crossmember attachment locator1916and the second crossmember attachment locator1918are formed on the first longitudinal member1912(e.g., via machining, the fastening on additional parts, etc.). In some examples, the first wheel1814A and the first suspension assembly1812A are coupled to the first longitudinal member1912and/or the first flared portion1914. In other examples, the first wheel1814A and the first suspension assembly1812A are coupled to the right front chassis portion1902after the frame of the chassis1900,2100is assembled. At block2506, the left front chassis portion1904is assembled. For example, the second longitudinal member1920(e.g., including the crossmember attachment locators1924,1926, etc.) and the second flared portion1922are coupled to form the left front chassis portion1904. In some examples, the second longitudinal member1920and the second flared portion1922are coupled together via one or more welds. In other examples, the second longitudinal member1920and the second flared portion1922can be coupled together via any other suitable fastening techniques (e.g., press-fit, a chemical adhesive, etc.). In some examples, the third crossmember attachment locator1924and the fourth crossmember attachment locator1926are formed on the second longitudinal member1920(e.g., via machining, the fastening on additional parts, etc.). In some examples, the second wheel1814B and the second suspension assembly1812B are coupled to the second longitudinal member1920and/or the second flared portion1922. In other examples, the second wheel1814B and the second suspension assembly1812B are coupled to the left front chassis portion1904after the frame of the chassis1900,2100is assembled. At block2508, the right rear chassis portion1906is assembled. 
For example, the third longitudinal member1928(e.g., including the crossmember attachment locators1931,1932, etc.) and the third flared portion1930are coupled to form the right rear chassis portion1906. In some examples, the third longitudinal member1928and the third flared portion1930are coupled together via one or more welds. In other examples, the third longitudinal member1928and the third flared portion1930can be coupled together via any other suitable fastening techniques (e.g., press-fit, a chemical adhesive, etc.). In some examples, the fifth crossmember attachment locator1931and the sixth crossmember attachment locator1932are formed on the third longitudinal member1928(e.g., via machining, the fastening on additional parts, etc.). In some examples, the third wheel1814C and the third suspension assembly1812C are coupled to the third longitudinal member1928and/or the third flared portion1930. In other examples, the third wheel1814C and the third suspension assembly1812C are coupled to the right rear chassis portion1906after the frame of the chassis1900,2100is assembled. At block2510, the left rear chassis portion1908is assembled. For example, the fourth longitudinal member1934(e.g., including the crossmember attachment locators1938,1940, etc.) and the fourth flared portion1936are coupled to form the left rear chassis portion1908. In some examples, the fourth longitudinal member1934and the fourth flared portion1936are coupled together via one or more welds. In other examples, the fourth longitudinal member1934and the fourth flared portion1936can be coupled together via any other suitable fastening techniques (e.g., press-fit, a chemical adhesive, etc.). In some examples, the seventh crossmember attachment locator1938and the eighth crossmember attachment locator1940are formed on the fourth longitudinal member1934(e.g., via machining, the fastening on additional parts, etc.). In some examples, the fourth wheel1814D and the fourth suspension assembly1812D are coupled to the fourth longitudinal member1934and/or the fourth flared portion1936. In other examples, the fourth wheel1814D and the fourth suspension assembly1812D are coupled to the left rear chassis portion1908after the frame of the chassis1900,2100is assembled. At block2512, the appropriate crossmembers are selected based on the chassis1900,2100. For example, if the chassis1900is being assembled, an appropriately sized crossmember of the interchangeable crossmembers1942A,1942B,1942C,1942D is selected. For example, if the model of the vehicle is a comparatively larger vehicle, the first interchangeable crossmember1942A or the second interchangeable crossmember1942B can be selected. In other examples, if the model of the vehicle is a smaller vehicle, the third interchangeable crossmember1942C or the fourth interchangeable crossmember1942D can be selected. If the chassis2100is being assembled, the adjustable crossmember(s)2102are selected. At block2514, it is determined if the crossmembers selected are adjustable. For example, if the adjustable crossmember(s)2102is selected, the method2500advances to block2516. If the ones of the interchangeable crossmembers1942A,1942B,1942C,1942D were selected, the method advances to block2518. At block2516, the length of the adjustable crossmember(s)2102is adjusted based on the model of the vehicle. 
For example, if the adjustable crossmember(s)2102are implemented by the adjustable structural member2200ofFIG.22A, the position of the outer sleeves2204A,2204B relative to the inner rail2202can be adjusted such that the adjustable crossmember(s)2102has the desired length. In other examples, if the adjustable crossmember(s)2102are implemented by the adjustable structural member2206ofFIG.22B, the relative position of the inner rails2208A,2208B, and outer rails2210A,2210B can be adjusted until the adjustable crossmember(s)2102has the desired length. Additionally or alternatively, the length of the adjustable crossmember(s)2102can be adjusted by any other suitable means. At block2518, the front chassis portions1902,1904are coupled together via the selected crossmembers. For example, if the chassis1900is being assembled, the selected one(s) of the interchangeable crossmembers1942A,1942B,1942C,1942D are coupled to the front chassis portions1902,1904via the crossmember attachment locator(s)1916,1918,1924,1926. For example, if the chassis2100is being assembled, the adjustable crossmember(s)2102are coupled to the front chassis portions1902,1904via the crossmember attachment locator(s)1916,1918,1924,1926. In some examples, apertures of the selected crossmembers (e.g., ones of the interchangeable crossmembers1942A,1942B,1942C,1942D, the adjustable crossmember(s)2102, etc.) receive corresponding protrusions of the crossmember attachment locator(s)1916,1918,1924,1926. In such examples, the protrusions of the crossmember attachment locator(s)1916,1918,1924,1926frictionally engage the apertures of the selected crossmembers. Additionally or alternatively, the selected crossmembers can be fixedly attached to the front chassis portions1902,1904via one or more fastening techniques (e.g., welds, fasteners, chemical adhesives, etc.). At block2520, the rear chassis portions1906,1908are coupled together via the crossmembers1801C,1801D. For example, if the chassis1900is being assembled, the selected one(s) of the interchangeable crossmembers1942A,1942B,1942C,1942D are coupled to the rear chassis portions1906,1908via the crossmember attachment locators1931,1932,1938,1940. For example, if the chassis2100is being assembled, the adjustable crossmember(s)2102are coupled to the rear chassis portions1906,1908via the crossmember attachment locators1931,1932,1938,1940. In some examples, apertures of the selected crossmembers (e.g., ones of the interchangeable crossmembers1942A,1942B,1942C,1942D, the adjustable crossmember(s)2102, etc.) receive corresponding protrusions of the crossmember attachment locators1931,1932,1938,1940. In such examples, the protrusions of the crossmember attachment locators1931,1932,1938,1940frictionally engage the apertures of the selected crossmembers. Additionally or alternatively, the selected crossmembers can be fixedly attached to the rear chassis portions1906,1908via one or more fastening techniques (e.g., welds, fasteners, chemical adhesives, etc.). At block2522, the appropriate side rail(s) are selected based on the chassis1900,2100. For example, if the chassis1900is being assembled, appropriately sized side rail(s) of the interchangeable side rails1960A,1960B,1960C,1960D is selected. For example, if the model of the vehicle is a comparatively larger vehicle, the first interchangeable side rail1960A or the second interchangeable side rail1960B can be selected. 
In other examples, if the model of the vehicle is a smaller vehicle, the third interchangeable side rail1960C or the fourth interchangeable side rail1960D can be selected. If the chassis2100is being assembled, the adjustable side rail(s)2104are selected. At block2524, it is determined if the side rail(s) selected are adjustable. For example, if the adjustable side rail(s)2104are selected, the method2500advances to block2526. If the ones of the interchangeable side rails1960A,1960B,1960C,1960D are selected, the method advances to block2528. At block2526, the length of the adjustable side rail(s)2104is adjusted based on the model of the vehicle. For example, if the adjustable side rail(s)2104are implemented by the adjustable structural member2200ofFIG.22A, the position of the outer sleeves2204A,2204B relative to the inner member2202can be adjusted such that the adjustable side rail(s)2104have the desired length. In other examples, if the adjustable side rail(s)2104are implemented by the adjustable structural member2206ofFIG.22B, the relative position of the inner rails2208A,2208B, and outer rails2210A,2210B can be adjusted until the adjustable side rail(s)2104have the desired length. Additionally or alternatively, the length of the adjustable side rail(s)2104can be adjusted by any other suitable means. At block2528, the right front chassis portion1902is coupled to the right rear chassis portion1906via the side rails1802A,1802B. For example, if the chassis1900is being assembled, the selected one(s) of the interchangeable side rails1960A,1960B,1960C,1960D are coupled to the right chassis portions1902,1906via the side rail attachment locator(s)1944,1946,1952,1954. For example, if the chassis2100is being assembled, the adjustable side rail(s)2104are coupled to the right chassis portions1902,1906via the side rail attachment locator(s)1944,1946,1952,1954. In some examples, apertures of the selected side rails (e.g., ones of the interchangeable side rails1960A,1960B,1960C,1960D, the adjustable side rail(s)2104, etc.) receive corresponding protrusions of the side rail attachment locator(s)1944,1946,1952,1954. In such examples, the protrusions of the side rail attachment locator(s)1944,1946,1952,1954frictionally engage the apertures of the selected side rails. Additionally or alternatively, the selected side rails can be fixedly attached to the right chassis portions1902,1906via one or more fastening techniques (e.g., welds, fasteners, chemical adhesives, etc.). At block2530, the left front chassis portion1904is coupled to the left rear chassis portion1908via the side rails1802C,1802D. For example, if the chassis1900is being assembled, the selected one(s) of the interchangeable side rails1960A,1960B,1960C,1960D are coupled to the left chassis portions1904,1908via the side rail attachment locator(s)1948,1950,1956,1958. For example, if the chassis2100is being assembled, the adjustable side rail(s)2104are coupled to the left chassis portions1904,1908via the side rail attachment locator(s)1948,1950,1956,1958. In some examples, apertures of the selected side rails (e.g., ones of the interchangeable side rails1960A,1960B,1960C,1960D, the adjustable side rail(s)2104, etc.) receive corresponding protrusions of the side rail attachment locator(s)1948,1950,1956,1958. In such examples, the protrusions of the side rail attachment locator(s)1948,1950,1956,1958frictionally engage the apertures of the selected side rails.
Additionally or alternatively, the selected side rails can be fixedly attached to the left chassis portions1904,1908via one or more fastening techniques (e.g., welds, fasteners, chemical adhesives, etc.). The method2500ends. FIG.26is a perspective view of an example chassis2600in which the teachings of this disclosure can be implemented. The example chassis2600includes an example frame2602. In the illustrated example ofFIG.26, the chassis2600includes an example front chassis portion2604, an example rear chassis portion2606, and an example battery platform2608. The example battery platform2608includes an example central battery array2610, an example first side battery array2612A, and an example second side battery array2612B. The example chassis2600includes an example first wheel2614A, an example second wheel2614B, an example third wheel2614C, and an example fourth wheel2614D. The example front chassis portion2604includes an example first crossmember2616, an example second crossmember2618, an example first longitudinal member2620, and an example second longitudinal member2622, which collectively define an example first cavity2624. The example rear chassis portion2606includes an example third crossmember2626, an example fourth crossmember2628, an example third longitudinal member2630, and an example fourth longitudinal member2632, which collectively define an example second cavity2634. The battery platform2608includes the battery arrays2610,2612A,2612B. The batteries of the battery arrays2610,2612A,2612B are EV batteries. The batteries of the battery arrays2610,2612A,2612B provide power to electric motors coupled to the chassis2600. In other examples, if the chassis2600is associated with a hybrid vehicle, the batteries of the battery arrays2610,2612A,2612B supplement the power generated by a combustion engine of the chassis2600. In some examples, additional batteries are disposed within the chassis2600(e.g., in the front chassis portion2604, in the rear chassis portion2606, etc.). In such examples, the additional batteries can improve the performance of the vehicle associated with the chassis2600(e.g., improved range, greater power available for the engine, etc.). In some examples, the central battery array2610and/or one or both of the side battery arrays2612A,2612B are absent (e.g., in examples with two side rails, etc.). The wheels2614A,2614B,2614C,2614D can be coupled to the chassis2600after corresponding component(s) (e.g., axles, the suspension assemblies, etc.) of the chassis2600are coupled to the frame2602. In some examples, the type of the wheel2614A,2614B,2614C,2614D (e.g., tread type, wheel diameter, wheel width, etc.) can be selected based on the type and/or model of the vehicle associated with the chassis2600. Additionally or alternatively, the type and/or size of the wheels2614A,2614B,2614C,2614D can be selected based on properties of the chassis2600(e.g., the length of the longitudinal members2620,2622,2630,2632, etc.). The crossmembers2616,2618,2626,2628extend generally laterally between the driver and passenger sides of the chassis2600. The crossmembers2616,2618,2626,2628increase the strength of the chassis2600and protect vehicle components. In some examples, the crossmembers2616,2618,2626,2628include additional features (e.g., bolt holes, weld surfaces, etc.) that enable additional vehicle components to be coupled thereto. In the illustrated example ofFIG.26, the chassis2600includes four crossmembers (e.g., the crossmembers2616,2618,2626,2628, etc.).
In other examples, the chassis2600includes a different quantity of crossmembers (e.g., two crossmembers, etc.). The crossmembers2616,2618,2626,2628can be composed of steel, aluminum, and/or any other suitable material(s). The first longitudinal member2620and the second longitudinal member2622extend longitudinally between the first crossmember2616and second crossmember2618. The third longitudinal member2630and fourth longitudinal member2632extend longitudinally between the third crossmember2626and fourth crossmember2628. The longitudinal members2620,2622,2630,2632can be composed of steel, aluminum, and/or any other suitable material(s). In some examples, the longitudinal members2620,2622,2630,2632can include features that enable suspension components to be coupled thereto. The cavities2624,2634are areas of the chassis2600in which powertrain components, drivetrain components, and/or suspension components can be coupled. In the illustrated example ofFIG.26, the first cavity2624is defined by the first crossmember2616, the second crossmember2618, the first longitudinal member2620, and the second longitudinal member2622. In the illustrated example ofFIG.26, the second cavity2634is defined by the third crossmember2626, the fourth crossmember2628, the third longitudinal member2630, and the fourth longitudinal member2632. In some examples, the crossmembers2616,2618,2626,2628and/or the longitudinal members2620,2622,2630,2632can include features (e.g., weld surfaces, apertures, brackets, bushings, etc.) that enable powertrain components, drivetrain components, and/or suspension components to be coupled with the corresponding one of the cavities2624,2634. In the illustrated example ofFIG.26, the cavities2624,2634are of substantially the same size. In other examples, the first cavity2624and the second cavity2634have different sizes. The coupling of components of the interchangeable performance packages2700,2714,2728within the first cavity2624and/or the second cavity2634is described in greater detail below in conjunction withFIG.28. FIG.27Ais a perspective view of an example first interchangeable performance package2700. In the illustrated example ofFIG.27A, the first interchangeable performance package2700includes an example first electric motor2702that includes an example first motor mounting feature2704A and an example second motor mounting feature2704B. In the illustrated example ofFIG.27A, the first interchangeable performance package2700includes an example first suspension assembly2706A and an example second suspension assembly2706B. In the illustrated example ofFIG.27A, the suspension assemblies2706A,2706B include an example first elastic member2708A and an example second elastic member2708B, respectively. In the illustrated example ofFIG.27A, the suspension assemblies2706A,2706B include an example first wheel mounting feature2710A and an example second wheel mounting feature2710B, respectively. In the illustrated example ofFIG.27A, the suspension assemblies2706A,2706B include an example first frame mounting feature2712A and an example second frame mounting feature2712B, respectively. FIG.27Bis a perspective view of an example second interchangeable performance package2714. In the illustrated example ofFIG.27B, the second interchangeable performance package2714includes an example second electric motor2716that includes an example third motor mounting feature2718A and an example fourth motor mounting feature2718B.
In the illustrated example ofFIG.27B, the second interchangeable performance package2714includes an example third suspension assembly2720A and an example fourth suspension assembly2720B. In the illustrated example ofFIG.27B, the suspension assemblies2720A,2720B include an example third elastic member2722A and an example fourth elastic member2722B, respectively. In the illustrated example ofFIG.27B, the suspension assemblies2720A,2720B include an example third wheel mounting feature2724A and an example fourth wheel mounting feature2724B, respectively. In the illustrated example ofFIG.27B, the suspension assemblies2720A,2720B include an example third frame mounting feature2726A and an example fourth frame mounting feature2726B, respectively. FIG.27Cis a perspective view of an example third interchangeable performance package2728. In the illustrated example ofFIG.27C, the third interchangeable performance package2728includes an example third electric motor2730that includes an example fifth motor mounting feature2732A and an example sixth motor mounting feature2732B. In the illustrated example ofFIG.27C, the third interchangeable performance package2728includes an example fifth suspension assembly2734A and an example sixth suspension assembly2734B. In the illustrated example ofFIG.27C, the suspension assemblies2734A,2734B include an example fifth elastic member2736A and an example sixth elastic member2736B, respectively. In the illustrated example ofFIG.27C, the suspension assemblies2734A,2734B include an example fifth wheel mounting feature2740A and an example sixth wheel mounting feature2740B, respectively. In the illustrated example ofFIG.27C, the suspension assemblies2734A,2734B include an example fifth frame mounting feature2742A and an example sixth frame mounting feature2742B, respectively. The first interchangeable performance package2700includes features that make the first interchangeable performance package2700suitable for a passenger vehicle. In the illustrated example ofFIG.27A, the electric motor2702has performance characteristics that make the electric motor2702suitable for use on streets and/or highways. Similarly, the suspension assemblies2706A,2706B have characteristics that make them more suitable for consumer comfort (e.g., comparatively less stiff elastic members, progressive spring rates, neutral camber, neutral caster, etc.). The second interchangeable performance package2714includes features that make the second interchangeable performance package2714suitable for heavier consumer and/or commercial vehicles. In the illustrated example ofFIG.27B, the electric motor2716has performance characteristics that make the electric motor2716suitable for use on rough terrain and/or hauling larger loads (e.g., comparatively high torque, comparatively high horsepower, etc.). Similarly, the suspension assemblies2720A,2720B have characteristics that make them more suitable for use with comparatively higher loads and/or use on uneven terrain (e.g., comparatively less stiff elastic members, greater travel, greater load capacity, progressive spring rates, positive camber, neutral caster, etc.). The third interchangeable performance package2728includes features that make the third interchangeable performance package2728suitable for a sports vehicle. In the illustrated example ofFIG.27C, the electric motor2730has performance characteristics that make the electric motor2730suitable for use on a smooth uniform surface (e.g., comparatively high horsepower, comparatively high torque, etc.).
Similarly, the suspension assemblies2734A,2734B have characteristics that make them more suitable for use with a comparatively light vehicle on a smooth surface (e.g., comparatively more stiff elastic members, low travel, linear spring rates, negative camber, positive caster, etc.). The electric motors2702,2716,2730are powertrain components that transform electric power from batteries into mechanical energy and can be used to drive the wheels of a vehicle (e.g., the wheels2614A,2614B,2614C,2614D, etc.). As described above, the electric motors2702,2716,2730have different performance characteristics. That is, the electric motor2702has lower torque and horsepower than the electric motors2716,2730. The electric motor2716has higher torque than the electric motors2702,2730and similar horsepower to the electric motor2730. The electric motor2730has higher horsepower than the electric motor2702and similar horsepower to the electric motor2716. The elastic members2708A,2708B,2722A,2722B,2736A,2736B include at least one spring and/or damper to deflect in response to a load (e.g., increased or decreased load on the vehicle, load from uneven terrain, etc.) being applied to the corresponding one(s) of the suspension assemblies2706A,2706B,2720A,2720B,2734A,2734B. In some examples, the elastic members2708A,2708B,2722A,2722B,2736A,2736B can include hydraulic and/or electromagnetic dampers. As described above, the corresponding sets of elastic members2708A,2708B,2722A,2722B,2736A,2736B of the suspension assemblies2706A,2706B,2720A,2720B,2734A,2734B have different stiffnesses, damping properties, and load capacities. That is, the elastic members2708A,2708B are generally configured for passenger vehicles (e.g., comparatively less stiff, etc.). The elastic members2722A,2722B are generally configured for commercial vehicles (e.g., comparatively less stiffness, comparatively higher damping, comparatively higher travel, comparatively higher capacity, etc.). The elastic members2736A,2736B are generally configured for performance vehicles (e.g., comparatively greater stiffness, comparatively lower travel, comparatively lower capacity, etc.). Additionally or alternatively, the elastic members (e.g., the elastic members2736A,2736B, etc.) associated with the higher performance packages (e.g., the third interchangeable performance package2728, etc.) can include linear spring rates and the elastic members associated with passenger and/or commercial vehicles (e.g., the elastic members2708A,2708B,2722A,2722B, etc.) can include progressive spring rates. The suspension assemblies2706A,2706B,2720A,2720B,2734A,2734B are additionally configured to receive corresponding wheels at different caster angles and camber angles. That is, in the illustrated examples ofFIGS.27A-27C, the suspension assemblies2706A,2706B,2720A,2720B are configured to receive wheels at a neutral or positive camber and a neutral caster, and the suspension assemblies2734A,2734B are configured to receive wheels at a negative camber and a positive caster. While only the three interchangeable performance packages2700,2714,2728are described in conjunction withFIGS.27A-27C, other performance package configurations are possible. For example, another example performance package for lighter off-road vehicles includes a comparatively powerful electric motor (e.g., the second electric motor2716and/or the third electric motor2730, etc.) and comparatively less stiff suspension assemblies (e.g., the suspension assemblies2706A,2706B).
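For reference, the qualitative differences among the three interchangeable performance packages described above can be collected in a single data structure. The sketch below is illustrative only; the field names are assumptions, and the values simply restate the relative comparisons given in the text.

```python
# Qualitative comparison of the interchangeable performance packages 2700, 2714, 2728.
# Field names are illustrative assumptions; values restate the relative comparisons above.

PERFORMANCE_PACKAGES = {
    "2700": {  # passenger vehicles
        "motor": "2702", "torque": "lower", "horsepower": "lower",
        "elastic_members": "less stiff, progressive spring rate",
        "camber": "neutral", "caster": "neutral",
    },
    "2714": {  # heavier consumer / commercial vehicles
        "motor": "2716", "torque": "highest", "horsepower": "similar to 2730",
        "elastic_members": "less stiff, greater travel and load capacity, progressive spring rate",
        "camber": "positive", "caster": "neutral",
    },
    "2728": {  # sports / performance vehicles
        "motor": "2730", "torque": "high", "horsepower": "high",
        "elastic_members": "stiffer, lower travel, linear spring rate",
        "camber": "negative", "caster": "positive",
    },
}
```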
In other examples, other performance packages can include any suitable combination of components. FIG.28is a perspective view of the example chassis2600ofFIG.26and the interchangeable performance packages2700,2714,2728ofFIGS.27A-27C. In the illustrated example ofFIG.28, an example first performance package2802is coupled to the front chassis portion2604, and an example second performance package2804is coupled to the rear chassis portion2606. In the illustrated example ofFIG.28, the first performance package2802and the second performance package2804can be implemented by the first interchangeable performance package2700, the second interchangeable performance package2714, and/or the third interchangeable performance package2728. The corresponding motor mounting features of the interchangeable performance packages2700,2714,2728(e.g., the motor mounting features2704A,2704B of the first interchangeable performance package2700, the motor mounting features2718A,2718B of the second interchangeable performance package2714, the motor mounting features2732A,2732B of the third interchangeable performance package2728, etc.) can be coupled to the inboard surfaces of the corresponding ones of the longitudinal members2620,2622,2630,2632via one or more fastening technique(s), thereby coupling the corresponding electric motors2702,2716,2730within the corresponding ones of the cavities2624,2634. For example, the corresponding mounting features2704A,2704B,2718A,2718B,2732A,2732B can be implemented by one or more bushings that receive corresponding inboard protrusions extending from the longitudinal members2620,2622,2630,2632, which damp vibration generated by the corresponding electric motors2702,2716,2730. In other examples, the corresponding motor mounting features2704A,2704B,2718A,2718B,2732A,2732B can be implemented by outboard extending features to be received by bushings associated with the longitudinal members2620,2622,2630,2632, which damp vibration generated by the corresponding electric motors2702,2716,2730. Additionally or alternatively, the motor mounting features2704A,2704B,2718A,2718B,2732A,2732B of the electric motors2702,2716,2730can be coupled to the corresponding longitudinal members2620,2622,2630,2632via any fastening technique (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. The corresponding suspension assemblies of the interchangeable performance packages2700,2714,2728(e.g., the suspension assemblies2706A,2706B of the first interchangeable performance package2700, the suspension assemblies2720A,2720B of the second interchangeable performance package2714, the suspension assemblies2734A,2734B of the third interchangeable performance package2728, etc.) can be coupled to the corresponding outboard surfaces of the longitudinal members2620,2622,2630,2632via fastening technique(s) (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) and via the respective ones of the frame mounting features2712A,2712B,2726A,2726B,2742A,2742B, etc. The wheels2614A,2614B,2614C,2614D can be coupled to the corresponding ones of the wheel mounting features of the interchangeable performance packages2700,2714,2728(e.g., the wheel mounting features2710A,2710B of the first interchangeable performance package2700, the wheel mounting features2724A,2724B of the second interchangeable performance package2714, the wheel mounting features2740A,2740B of the third interchangeable performance package2728, etc.).
In some examples, the wheel mounting features2710A,2710B,2724A,2724B,2740A,2740B can be implemented by a wheel hub, which includes protrusions to be received by corresponding apertures of the wheels2614A,2614B,2614C,2614D. In other examples, the wheel mounting features2710A,2710B,2724A,2724B,2740A,2740B can be implemented by any other suitable means. Each of the interchangeable performance packages2700,2714,2728is couplable to the chassis2600. As such, the chassis2600supports various performance configurations with only the changing of the performance packages2802,2804to be different ones of the interchangeable performance packages2700,2714,2728. Accordingly, the chassis2600can be easily configured to support different vehicle models and/or types, which increases the ease of manufacturing and assembly by reducing the total number of unique parts used between vehicles. When combined with the other teachings of this disclosure (e.g., the scalable chassis1900ofFIG.19, the scalable chassis2300ofFIG.23, etc.), disparate vehicle types (e.g., pick-up trucks and compacts, etc.) can be implemented to share chassis with similar designs and a comparatively large number of common parts. In the illustrated example ofFIG.28, the performance packages2802,2804are implemented by a same one of the interchangeable performance packages2700,2714,2728. In other examples, the first performance package2802can be implemented by a different one of the interchangeable performance packages2700,2714,2728than the second performance package2804(e.g., the first performance package2802implemented by the first interchangeable performance package2700and the second performance package2804implemented by the second interchangeable performance package2714, etc.). FIG.29is a flowchart representative of an example method2900to assemble the example chassis ofFIGS.26and28with one of the interchangeable performance packages ofFIGS.27A-27C. At block2902, the model of the vehicle associated with the chassis2600is determined. For example, the model of the vehicle can be determined to be a pick-up truck model, a compact model, an SUV model, a crossover model, a van model, etc. In some examples, the desired performance characteristics (e.g., engine torque, engine power, suspension characteristics, etc.) are determined. At block2904, one of the interchangeable performance packages2700,2714,2728is selected based on the determined model of the vehicle. For example, if the model of the vehicle is a passenger model, the first interchangeable performance package2700is selected. If the model of the vehicle is a hauling model, the second interchangeable performance package2714is selected. If the model of the vehicle is a performance model, the third interchangeable performance package2728is selected. In other examples, other suitable performance packages can be selected based on the model. In some examples, multiple performance packages can be selected. In such examples, the selected performance packages can be coupled to different portions of the chassis2600(e.g., the first interchangeable performance package2700may be coupled within the first cavity2624, the second performance package coupled within the second cavity2634, etc.). At block2906, the electric motor(s) of the selected performance package is coupled within the chassis cavity.
For example, instances of the corresponding electric motor of the selected performance package (e.g., the first electric motor2702of the first interchangeable performance package2700, the second electric motor2716of the second interchangeable performance package2714, the third electric motor2730of the third interchangeable performance package2728, etc.) can be coupled within the first cavity2624of the chassis2600and the second cavity2634via the corresponding motor mounting features (e.g., the first motor mounting feature2704A and the second motor mounting feature2704B of the first interchangeable performance package2700, the third motor mounting feature2718A and the fourth motor mounting feature2718B of the second interchangeable performance package2714, the fifth motor mounting feature2732A and the sixth motor mounting feature2732B of the third interchangeable performance package2728, etc.). In some examples, the corresponding motor mounting features2704A,2704B,2718A,2718B,2732A,2732B can be coupled to inboard surfaces of the cavities2624,2634via bushing connections. In other examples, the corresponding motor mounting features2704A,2704B,2718A,2718B,2732A,2732B can be coupled to inboard surfaces of the cavities2624,2634via any other suitable fastening technique (e.g., a press-fit, a weld, a chemical adhesive, a fastener, etc.). At block2908, the suspension assemblies of the selected performance packages are coupled to the chassis2600via the corresponding frame mounting features. For example, instances of the corresponding suspension assemblies of the selected performance package (e.g., the first suspension assembly2706A and the second suspension assembly2706B of the first interchangeable performance package2700, the third suspension assembly2720A and the fourth suspension assembly2720B of the second interchangeable performance package2714, the fifth suspension assembly2734A and the sixth suspension assembly2734B of the third interchangeable performance package2728, etc.) can be coupled to the chassis2600via the corresponding frame mounting features (e.g., the first frame mounting feature2712A and the second frame mounting feature2712B of the first interchangeable performance package2700, the third frame mounting feature2726A and the fourth frame mounting feature2726B of the second interchangeable performance package2714, the fifth frame mounting feature2742A and the sixth frame mounting feature2742B of the third interchangeable performance package2728, etc.). In some examples, the corresponding frame mounting features2712A,2712B,2726A,2726B,2742A,2742B can be coupled to outboard surfaces of corresponding ones of the longitudinal members2620,2622,2630,2632via any suitable fastening technique (e.g., a press-fit, a weld, a chemical adhesive, a fastener, etc.). At block2910, the wheels2614A,2614B,2614C,2614D are coupled to the suspension assemblies. For example, the wheels2614A,2614B,2614C,2614D can be coupled to the corresponding wheel mounting features (e.g., the first wheel mounting feature2710A and the second wheel mounting feature2710B of the first interchangeable performance package2700, the third wheel mounting feature2724A and the fourth wheel mounting feature2724B of the second interchangeable performance package2714, the fifth wheel mounting feature2740A and the sixth wheel mounting feature2740B of the third interchangeable performance package2728, etc.).
In some examples, the corresponding wheel mounting features2710A,2710B,2724A,2724B,2740A,2740B can be implemented via a wheel hub, which includes protrusions to be received by corresponding apertures of the wheels2614A,2614B,2614C,2614D. In other examples, the wheels2614A,2614B,2614C,2614D can be coupled to the corresponding suspension assemblies2706A,2706B,2720A,2720B,2734A,2734B via any other suitable fastening technique. The method2900ends. FIGS.30A-35depict alternative vehicle chassis that may be used to implement the teachings of this disclosure that are similar to those described with reference toFIGS.26-29. When the same reference number is used in connection withFIGS.30A-35as used inFIGS.26-29, it has the same meaning unless indicated otherwise. FIG.30Ais a perspective view of an example first interchangeable subframe3000including the first interchangeable performance package2700ofFIG.27A. In the illustrated example ofFIG.30A, the first interchangeable subframe3000includes an example first crossmember3004, an example second crossmember3006, an example first side rail3008, and an example second side rail3010. In the illustrated example ofFIG.30A, the first interchangeable performance package2700is coupled to the first crossmember3004, the second crossmember3006, the first side rail3008, and the second side rail3010. For example, the first electric motor2702is coupled to an inboard surface of the first side rail3008and the second side rail3010via the first motor mounting feature2704A and the second motor mounting feature2704B, respectively. The example suspension assembly2706A is coupled to an example first wheel3012via the example first wheel mounting feature2710A and is coupled to an outboard surface of the side rail3008via the example first frame mounting feature2712A. The example suspension assembly2706B is coupled to an example second wheel3014via the example second wheel mounting feature2710B and is coupled to an outboard surface of the side rail3010via the example second frame mounting feature2712B. FIG.30Bis a perspective view of an example second interchangeable subframe3016including the second interchangeable performance package2714ofFIG.27B. The example second interchangeable subframe3016includes the example first crossmember3004ofFIG.30A, the example second crossmember3006ofFIG.30A, the example first side rail3008ofFIG.30A, and the example second side rail3010ofFIG.30A. In the illustrated example ofFIG.30B, the second interchangeable performance package2714is coupled to the first crossmember3004, the second crossmember3006, the first side rail3008, and the second side rail3010. For example, the second electric motor2716is coupled to an inboard surface of the first side rail3008and the second side rail3010via the third motor mounting feature2718A and the fourth motor mounting feature2718B, respectively. The example third suspension assembly2720A is coupled to the first wheel3012ofFIG.30Avia the example third wheel mounting feature2724A and is coupled to an outboard surface of the side rail3008via the example third frame mounting feature2726A. The example fourth suspension assembly2720B is coupled to the second wheel3014ofFIG.30Avia the example fourth wheel mounting feature2724B and is coupled to an outboard surface of the side rail3010via the example fourth frame mounting feature2726B. FIG.30Cis a perspective view of an example third interchangeable subframe3018including the third interchangeable performance package2728ofFIG.27C.
The example third interchangeable subframe3018includes the example first crossmember3004ofFIG.30A, the example second crossmember3006ofFIG.30A, the example first side rail3008ofFIG.30A, and the example second side rail3010ofFIG.30A. In the illustrated example ofFIG.30C, the third interchangeable performance package2728is coupled to the first crossmember3004, the second crossmember3006, the first side rail3008, and the second side rail3010. For example, the third electric motor2730is coupled to an inboard surface of the first side rail3008and the second side rail3010via the fifth motor mounting feature2732A and the sixth motor mounting feature2732B, respectively. The example suspension assembly2734A is coupled to the first wheel3012ofFIG.30Avia the example fifth wheel mounting feature2740A and is coupled to an outboard surface of the side rail3008via the example fifth frame mounting feature2742A. The example suspension assembly2734B is coupled to the second wheel3014ofFIG.30Avia the example sixth wheel mounting feature2740B and is coupled to an outboard surface of the side rail3010via the example sixth frame mounting feature2742B. In the illustrated example ofFIGS.30A-30C, the interchangeable performance packages2700,2714,2728ofFIGS.27A-27Care components of corresponding interchangeable subframes3000,3016,3018. The interchangeable subframes3000,3016,3018include common structural members (e.g., the first crossmember3004, the second crossmember3006, the first side rail3008, the second side rail3010, etc.). The motor mounting features2704A,2704B,2718A,2718B,2732A,2732B of the corresponding interchangeable performance packages2700,2714,2728are coupled to internal faces of the side rails3008,3010of the corresponding interchangeable subframes3000,3016,3018. In some examples, the corresponding motor mounting features2704A,2704B,2718A,2718B,2732A,2732B can be implemented by bushings which receive corresponding inboard protrusions extending from the side rails3008,3010, which damp vibration generated by the respective electric motors2702,2716,2730. In other examples, the corresponding motor mounting features2704A,2704B,2718A,2718B,2732A,2732B can be implemented by outboard extending features to be received by bushings associated with the crossmembers3004,3006and/or side rails3008,3010, which damp vibration generated by the electric motors2702,2716,2730. In other examples, the corresponding motor mounting features2704A,2704B,2718A,2718B,2732A,2732B can be coupled to the corresponding side rails3008,3010via any fastening technique (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In the illustrated example ofFIGS.30A-30C, the suspension assemblies2706A,2706B,2720A,2720B,2734A,2734B are coupled to outboard surfaces of the side rails3008,3010via corresponding ones of the frame mounting features2712A,2712B,2726A,2726B,2742A,2742B. The corresponding frame mounting features2712A,2712B,2726A,2726B,2742A,2742B can be coupled to the corresponding side rails3008,3010via any fastening technique (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In the illustrated example ofFIGS.30A-30C, the suspension assemblies2706A,2706B,2720A,2720B,2734A,2734B are coupled to the wheels3012,3014via corresponding ones of the wheel mounting features2710A,2710B,2724A,2724B,2740A,2740B.
The corresponding wheel mounting features2710A,2710B,2724A,2724B,2740A,2740B can be implemented by a wheel hub, which includes protrusions to be received by corresponding apertures of the wheels3012,3014. In other examples, the wheel mounting features2710A,2710B,2724A,2724B,2740A,2740B can be implemented by any other suitable means. FIG.31is a perspective view of an example vehicle chassis3100including features to receive the interchangeable subframes3000,3016,3018ofFIGS.30A-30C. The interchangeable subframes3000,3016,3018are couplable within the first cavity2624of the front chassis portion2604. For example, the first crossmember3004, the second crossmember3006, the first side rail3008, and the second side rail3010of one of the interchangeable subframes3000,3016,3018can be coupled to a corresponding structural member of the chassis3100. For example, the first crossmember3004of one of the interchangeable subframes3000,3016,3018can be coupled to the first crossmember2616of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In some examples, the second crossmember3006of one of the interchangeable subframes3000,3016,3018can be coupled to the second crossmember2618of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In some examples, the first side rail3008of one of the interchangeable subframes3000,3016,3018can be coupled to the first longitudinal member2620of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In some examples, the second side rail3010of one of the interchangeable subframes3000,3016,3018can be coupled to the second longitudinal member2622of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. Additionally or alternatively, one of the interchangeable subframes3000,3016,3018can be coupled to the front chassis portion2604via one or more bushings and/or brackets. The interchangeable subframes3000,3016,3018are couplable within the second cavity2634of the rear chassis portion2606. For example, the first crossmember3004, the second crossmember3006, the first side rail3008, and the second side rail3010of one of the interchangeable subframes3000,3016,3018can be coupled to a corresponding structural member of the chassis3100. For example, the first crossmember3004of one of the interchangeable subframes3000,3016,3018can be coupled to the third crossmember2626of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In some examples, the second crossmember3006of one of the interchangeable subframes3000,3016,3018can be coupled to the fourth crossmember2628of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In some examples, the first side rail3008of one of the interchangeable subframes3000,3016,3018can be coupled to the third longitudinal member2630of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof.
In some examples, the second side rail3010of one of the interchangeable subframes3000,3016,3018can be coupled to the fourth longitudinal member2632of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. Additionally or alternatively, one of the interchangeable subframes3000,3016,3018can be coupled to the rear chassis portion2606via one or more bushings and/or brackets. As such, the chassis3100can be configured to include different ones of the interchangeable performance packages2700,2714,2728via the interchanging of the interchangeable subframes3000,3016,3018. Accordingly, the chassis3100can be easily configured to support different vehicle models and/or types via the interchanging of the interchangeable subframes3000,3016,3018, which increases the ease of manufacturing and assembly by reducing the total number of unique parts used between vehicles. When combined with the other teachings of this disclosure (e.g., the scalable chassis1900ofFIG.19, the scalable chassis2300ofFIG.23, etc.), disparate vehicle types (e.g., pick-up trucks and compacts, etc.) can be implemented to share a common chassis with similar designs and a comparatively large number of common parts. FIG.32is a flowchart representative of an example method to assemble the example chassis ofFIG.31with one of the interchangeable subframes ofFIGS.30A-30C. At block3202, the model of the vehicle associated with the chassis3100is determined. For example, the model of the vehicle can be determined to be a pick-up truck model, a compact model, an SUV model, a crossover model, a van model, etc. In some examples, the desired performance characteristics (e.g., engine torque, engine power, suspension characteristics, etc.) are determined. At block3204, one of the interchangeable performance packages2700,2714,2728is selected based on the determined model of the vehicle. For example, if the model of the vehicle is a passenger model, the first interchangeable performance package2700is selected. If the model of the vehicle is a hauling model and/or a heavier passenger model, the second interchangeable performance package2714is selected. If the model of the vehicle is a performance model, the third interchangeable performance package2728is selected. In other examples, other suitable performance packages can be selected based on the model. In some examples, multiple performance packages can be selected. In such examples, the subframes associated with the selected performance packages can be coupled to different portions of the chassis3100(e.g., the first interchangeable subframe3000coupled within the first cavity2624, the second interchangeable subframe3016coupled within the second cavity2634, etc.). At block3206, the subframe associated with the selected performance package is selected. For example, if the first interchangeable performance package2700was selected, the first interchangeable subframe3000can be selected. If the second interchangeable performance package2714was selected, the second interchangeable subframe3016is selected. If the third interchangeable performance package2728was selected, the third interchangeable subframe3018is selected. At block3208, the selected subframe including the selected performance package is assembled.
For example, the first crossmember3004, the second crossmember3006, the first side rail3008, and the second side rail3010of the selected one of the interchangeable subframes3000,3016,3018can be assembled via suitable fastening technique(s) (e.g., welds, press-fits, chemical adhesive, fastener(s), etc.). If the first interchangeable subframe3000was selected, the first interchangeable performance package2700can be coupled to the first crossmember3004, the second crossmember3006, the first side rail3008, and the second side rail3010via the first motor mounting feature2704A, the second motor mounting feature2704B, the first frame mounting feature2712A, and the second frame mounting feature2712B. In some examples, the first wheel3012and the second wheel3014can be coupled to the first interchangeable subframe3000via the first wheel mounting feature2710A and the second wheel mounting feature2710B, respectively. If the second interchangeable subframe3016was selected, the second interchangeable performance package2714can be coupled to the first crossmember3004, the second crossmember3006, the first side rail3008, and the second side rail3010via the third motor mounting feature2718A, the fourth motor mounting feature2718B, the third frame mounting feature2726A, and the fourth frame mounting feature2726B. In some examples, the first wheel3012and the second wheel3014can be coupled to the second interchangeable subframe3016via the third wheel mounting feature2724A and the fourth wheel mounting feature2724B, respectively. If the third interchangeable subframe3018was selected, the third interchangeable performance package2728can be coupled to the first crossmember3004, the second crossmember3006, the first side rail3008, and the second side rail3010via the fifth motor mounting feature2732A, the sixth motor mounting feature2732B, the fifth frame mounting feature2742A, and the sixth frame mounting feature2742B. In some examples, the first wheel3012and the second wheel3014can be coupled to the third interchangeable subframe3018via the fifth wheel mounting feature2740A and the sixth wheel mounting feature2740B, respectively. At block3210, the assembled subframes are coupled to the chassis3100. For example, the first crossmember3004, the second crossmember3006, the first side rail3008, and the second side rail3010can be coupled to the corresponding structural members of the chassis3100. For example, the first crossmember3004of one of the interchangeable subframes3000,3016,3018can be coupled to the third crossmember2626of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In some examples, the second crossmember3006of one of the interchangeable subframes3000,3016,3018can be coupled to the fourth crossmember2628of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In some examples, the first side rail3008of one of the interchangeable subframes3000,3016,3018can be coupled to the third longitudinal member2630of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof. In some examples, the second side rail3010of one of the interchangeable subframes3000,3016,3018can be coupled to the fourth longitudinal member2632of the chassis3100via one or more fastening techniques (e.g., a fastener, a weld, a chemical adhesive, a press-fit, etc.) or combination thereof.
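Blocks 3206 through 3210 amount to selecting the subframe that matches the chosen performance package, assembling it, and coupling its common structural members to the corresponding members of the chassis3100. The Python sketch below is illustrative only; the helper names and the explicit member-pairing dictionary (shown for the rear cavity, following the examples above) are assumptions.

```python
# Illustrative sketch of blocks 3206-3210: select, assemble, and couple a subframe.
# Helper names and the explicit member pairing are assumptions for illustration;
# the pairing shown follows the rear-cavity examples given above.

PACKAGE_TO_SUBFRAME = {"2700": "3000", "2714": "3016", "2728": "3018"}

MEMBER_PAIRS_REAR = {
    "first crossmember 3004": "third crossmember 2626",
    "second crossmember 3006": "fourth crossmember 2628",
    "first side rail 3008": "third longitudinal member 2630",
    "second side rail 3010": "fourth longitudinal member 2632",
}

def assemble_and_couple_subframe(selected_package):
    subframe = PACKAGE_TO_SUBFRAME[selected_package]                            # block 3206
    steps = [f"assemble subframe {subframe} with performance package {selected_package}"]  # block 3208
    for subframe_member, chassis_member in MEMBER_PAIRS_REAR.items():           # block 3210
        steps.append(f"couple {subframe_member} to {chassis_member} of chassis 3100")
    return steps
```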
Additionally or alternatively, one of the interchangeable subframes3000,3016,3018can be coupled to the rear chassis portion2606via one or more bushings and/or brackets. The method3200ends. FIG.33Ais a perspective view of an example first interchangeable chassis portion3300including the first interchangeable performance package2700ofFIG.27A. In the illustrated example ofFIG.33A, the elements of the first interchangeable performance package2700(e.g., the first electric motor2702, the first suspension assembly2706A, the second suspension assembly2706B, etc.) are coupled within the first interchangeable chassis portion3300. In the illustrated example ofFIG.33A, the suspension assemblies2706A,2706B are coupled to an example first wheel3304and an example second wheel3306, respectively. The example first interchangeable chassis portion3300includes example first attachment locators3302. FIG.33Bis a perspective view of an example second interchangeable chassis portion3308including the second interchangeable performance package2714ofFIG.27B. In the illustrated example ofFIG.33B, the elements of the second interchangeable performance package2714(e.g., the second electric motor2716, the third suspension assembly2720A, the fourth suspension assembly2720B, etc.) are coupled within the second interchangeable chassis portion3308. In the illustrated example ofFIG.33B, the suspension assemblies2720A,2720B are coupled to an example first wheel3312and an example second wheel3314, respectively. The example second interchangeable chassis portion3308includes example second attachment locators3310. FIG.33Cis a perspective view of an example third interchangeable chassis portion3316including the third interchangeable performance package2728ofFIG.27C. In the illustrated example ofFIG.33C, the elements of the third interchangeable performance package2728(e.g., the third electric motor2730, the fifth suspension assembly2734A, the sixth suspension assembly2734B, etc.) are coupled within the third interchangeable chassis portion3316. In the illustrated example ofFIG.33C, the suspension assemblies2734A,2734B are coupled to an example first wheel3320and an example second wheel3322, respectively. The example third interchangeable chassis portion3316includes example third attachment locators3318. In the illustrated example ofFIGS.33A-33C, the interchangeable chassis portions3300,3308,3316can be implemented as either front chassis portions (e.g., the front chassis portion2604ofFIG.26, etc.) or rear chassis portions (e.g., the rear chassis portion2606ofFIG.26, etc.). In other examples, side-specific chassis portions can be used. In such examples, the interchangeable chassis portions3300,3308,3316can be divided into corresponding front interchangeable chassis portions and corresponding rear interchangeable chassis portions. In the illustrated example ofFIGS.33A-33C, the interchangeable chassis portions3300,3308,3316have similar designs and components as the interchangeable chassis portions2302A,2302B,2304A,2304B ofFIG.23. In other examples, the interchangeable chassis portions3300,3308,3316can have any other suitable design and can include different components. FIG.34is a perspective view of another example vehicle chassis3400that includes a plurality of the interchangeable chassis portions3300,3308,3316ofFIGS.33A-33C. The example vehicle chassis3400includes an example battery platform3402, which includes example fourth attachment locators3404and example fifth attachment locators3406.
The battery platform3402is a common component shared between different configurations of the chassis3400. The example battery platform3402includes a plurality of structural members (e.g., crossmembers, side rails, etc.) and EV batteries. The fourth attachment locators3404can be coupled to the corresponding first attachment locators3302of the first interchangeable chassis portion3300, the corresponding second attachment locators3310of the second interchangeable chassis portion3308, or the corresponding third attachment locators3318of the third interchangeable chassis portion3316. The fifth attachment locators3406can be coupled to the corresponding first attachment locators3302of the first interchangeable chassis portion3300, the corresponding second attachment locators3310of the second interchangeable chassis portion3308, or the corresponding third attachment locators3318of the third interchangeable chassis portion3316. In the illustrated example ofFIG.34, the attachment locators3302,3310,3318of the interchangeable chassis portions3300,3308,3316include protrusions to be received by corresponding apertures of the attachment locators3404,3406of the battery platform3402. In other examples, the attachment locators3404,3406of the battery platform3402include protrusions to be received by the attachment locators3302,3310,3318. Additionally or alternatively, the front of the battery platform3402can be coupled to a corresponding one of the interchangeable chassis portions3300,3308,3316, and the rear of the battery platform3402can be coupled to a corresponding one of the interchangeable chassis portions3300,3308,3316via additional fastening techniques (e.g., welds, press-fits, chemical adhesives, fasteners, etc.). The interchangeable chassis portions3300,3308,3316are couplable to the front and rear of the battery platform3402. Depending on which of the interchangeable chassis portions3300,3308,3316is coupled to the front of the battery platform3402and which of the interchangeable chassis portions3300,3308,3316is coupled to the rear of the battery platform3402, the performance characteristics of the chassis3400can be changed. FIG.35is a flowchart representative of an example method to assemble the example chassis ofFIG.34with one of the interchangeable chassis portions3300,3308,3316ofFIGS.33A-33C. At block3502, the model of the vehicle associated with the chassis3400is determined. For example, the model of the vehicle can be determined to be a pick-up truck model, a compact model, an SUV model, a crossover model, a van model, etc. In some examples, the desired performance characteristics (e.g., engine torque, engine power, suspension characteristics, etc.) are determined. At block3504, one of the interchangeable performance packages2700,2714,2728is selected based on the determined model of the vehicle. For example, if the model of the vehicle is a passenger model, the first interchangeable performance package2700is selected. If the model of the vehicle is a hauling model, the second interchangeable performance package2714is selected. If the model of the vehicle is a performance model, the third interchangeable performance package2728is selected. In other examples, other suitable performance packages can be selected based on the model. In some examples, multiple performance packages can be selected. In such examples, different ones of the interchangeable chassis portions3300,3308,3316can be coupled to the front and rear of the battery platform3402.
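The selection logic just described can be sketched as a simple mapping from the determined vehicle model to a performance package, optionally with different packages (and therefore different interchangeable chassis portions) at the front and rear of the battery platform3402. The Python sketch below is illustrative only; the model names and function names are assumptions, not part of the disclosed method.

```python
# Illustrative sketch of blocks 3502-3504: choose performance package(s) by vehicle model.
# Model names and the optional front/rear split are assumptions for illustration.

MODEL_TO_PACKAGE = {
    "passenger": "2700",
    "hauling": "2714",
    "performance": "2728",
}

def select_packages(vehicle_model, rear_model=None):
    """Return the package(s) to be used at the front and rear of the battery platform 3402."""
    front = MODEL_TO_PACKAGE[vehicle_model]
    rear = MODEL_TO_PACKAGE[rear_model] if rear_model else front
    return {"front": front, "rear": rear}
```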
At block3506, the chassis portion associated with the selected performance package is selected. For example, if the first interchangeable performance package2700was selected, the first interchangeable chassis portion3300can be selected. If the second interchangeable performance package2714was selected, the second interchangeable chassis portion3308is selected. If the third interchangeable performance package2728was selected, the third interchangeable chassis portion3316is selected. At block3508, the selected chassis portion(s) including the selected performance package are assembled. For example, the structural members of the selected chassis portions can be assembled in a manner similar to the chassis portions2304A,2304B,2306A,2306B ofFIG.23. If the first interchangeable chassis portion3300was selected, the first interchangeable performance package2700can be coupled to the first interchangeable chassis portion3300via the first motor mounting feature2704A, the second motor mounting feature2704B, the first frame mounting feature2712A, and the second frame mounting feature2712B. In some examples, the first wheel3304and the second wheel3306can be coupled to the first interchangeable chassis portion3300via the first wheel mounting feature2710A and the second wheel mounting feature2710B, respectively. If the second interchangeable chassis portion3308was selected, the second interchangeable performance package2714can be coupled to the second interchangeable chassis portion3308via the third motor mounting feature2718A, the fourth motor mounting feature2718B, the third frame mounting feature2726A, and the fourth frame mounting feature2726B. In some examples, the first wheel3312and the second wheel3314can be coupled to the second interchangeable chassis portion3308via the third wheel mounting feature2724A and the fourth wheel mounting feature2724B, respectively. If the third interchangeable chassis portion3316was selected, the third interchangeable performance package2728can be coupled to the third interchangeable chassis portion3316via the fifth motor mounting feature2732A, the sixth motor mounting feature2732B, the fifth frame mounting feature2742A, and the sixth frame mounting feature2742B. In some examples, the first wheel3320and the second wheel3322can be coupled to the third interchangeable chassis portion3316via the fifth wheel mounting feature2740A and the sixth wheel mounting feature2740B, respectively. At block3510, the selected one of the interchangeable chassis portions3300,3308,3316is coupled to the front of the battery platform3402. For example, if the first interchangeable chassis portion3300is selected, the first attachment locators3302are coupled to the fourth attachment locators3404. If the second interchangeable chassis portion3308was selected, the second attachment locators3310are coupled to the fourth attachment locators3404. If the third interchangeable chassis portion3316was selected, the third attachment locators3318are coupled to the fourth attachment locators3404. In some examples, the attachment locators3302,3310,3318include protrusions to be received by corresponding apertures of the fourth attachment locators3404of the battery platform3402. In other examples, the fourth attachment locators3404include protrusions to be received by the attachment locators3302,3310,3318.
Additionally or alternatively, the front of the battery platform3402can be coupled to the selected one of the interchangeable chassis portions3300,3308,3316via additional fastening techniques (e.g., welds, press-fits, chemical adhesives, fasteners, etc.). At block3512, the selected one of the interchangeable chassis portions3300,3308,3316is coupled to the rear of the battery platform3402. For example, if the first interchangeable chassis portion3300is selected, the first attachment locators3302are coupled to the fifth attachment locators3406. If the second interchangeable chassis portion3308was selected, the second attachment locators3310are coupled to the fifth attachment locators3406. If the third interchangeable chassis portion3316was selected, the third attachment locators3318are coupled to the fifth attachment locators3406. In some examples, the attachment locators3302,3310,3318include protrusions to be received by corresponding apertures of the fifth attachment locators3406of the battery platform3402. In other examples, the fifth attachment locators3406of the battery platform3402include protrusions to be received by the attachment locators3302,3310,3318. Additionally or alternatively, the rear of the battery platform3402can be coupled to the selected one of the interchangeable chassis portions3300,3308,3316via additional fastening techniques (e.g., welds, press-fits, chemical adhesives, fasteners, etc.). The method3500ends. “Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. 
As used herein, singular references (e.g., “a”, “an”, “first”, “second”, etc.) do not exclude a plurality. The term “a” or “an” entity, as used herein, refers to one or more of that entity. The terms “a” (or “an”), “one or more”, and “at least one” can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous. Example methods, apparatus, systems, and articles of manufacture to ______ are disclosed herein. Further examples and combinations thereof include the following: Example 1 includes a vehicle chassis comprising a frame including a first chassis portion including a cavity, a battery platform coupled to the first chassis portion, and a first subframe couplable within the cavity, the first subframe including a first motor and a first suspension assembly, and a second subframe couplable within the cavity, the second subframe including a second motor and a second suspension assembly, the second motor having a greater power than the first motor, the second suspension assembly having a greater stiffness than the first suspension assembly. Example 2 includes the vehicle chassis of example 1, wherein the first subframe includes a first wheel and a second wheel, and the second subframe includes a third wheel and a fourth wheel. Example 3 includes the vehicle chassis of example 1, wherein the first subframe includes a first crossmember, a second crossmember, a first side rail, and a second side rail, the first crossmember, the second crossmember, the first side rail and the second side rail defining the cavity. Example 4 includes the vehicle chassis of example 3, wherein the first subframe is couplable within the cavity via at least one of a third crossmember couplable to the first crossmember, a fourth crossmember couplable to the second crossmember, a first longitudinal member couplable to the first side rail, or a second longitudinal member couplable to the second side rail. Example 5 includes the vehicle chassis of example 4, wherein the first motor is coupled between the first longitudinal member and the second longitudinal member. Example 6 includes the vehicle chassis of example 1, wherein the vehicle chassis is configured for a first model of vehicle when the first subframe is coupled within the cavity and the vehicle chassis is configured for a second model of vehicle when the second subframe is coupled within the cavity, the second model of vehicle different from the first model of vehicle. Example 7 includes the vehicle chassis of example 6, wherein the first model is a passenger car and the second model is a truck.
Example 8 includes a vehicle chassis comprising a first chassis portion including a first longitudinal member and a second longitudinal member, the first longitudinal member and the second longitudinal member defining a first cavity, a second chassis portion, and wherein the vehicle chassis is in a first configuration when a first motor is coupled to a first inboard surface of the first cavity and a first suspension assembly is coupled to a first outboard surface of at least one of the first longitudinal member or the second longitudinal member, and wherein the vehicle chassis is in a second configuration when a second motor is coupled to the first inboard surface of the first cavity and a second suspension assembly is coupled to the first outboard surface, the first motor having a greater power than the second motor, the first suspension assembly having a greater stiffness than the second suspension assembly. Example 9 includes the vehicle chassis of example 8, further including a battery platform disposed between the first chassis portion and the second chassis portion. Example 10 includes the vehicle chassis of example 8, wherein the second chassis portion includes a third longitudinal member and a fourth longitudinal member, the third longitudinal member and the fourth longitudinal member defining a second cavity. Example 11 includes the vehicle chassis of example 10, wherein the first configuration includes a third motor coupled to a second inboard surface of the second cavity and a third suspension assembly coupled to a second outboard surface of at least one of the third longitudinal member and the fourth longitudinal member. Example 12 includes the vehicle chassis of example 11, wherein the first motor has a substantially same power as the third motor and the first suspension assembly has a substantially same stiffness as the third suspension assembly. Example 13 includes the vehicle chassis of example 10, wherein the first cavity and the second cavity are of substantially a same size. Example 14 includes the vehicle chassis of example 8, wherein the first configuration is associated with a first model of vehicle associated with the vehicle chassis and the second configuration is associated with a second model of vehicle associated with the vehicle chassis. Example 15 includes a method to assemble a frame of a vehicle, the method comprising assembling a first chassis portion including a first cavity, determining a model of the vehicle, in response to determining the vehicle is a first model selecting a first performance package including a first motor and a first suspension assembly based on the first model, coupling the first motor within the first cavity, and coupling the first suspension assembly to the first chassis portion, and in response to determining the vehicle is a second model selecting a second performance package including a second motor and a second suspension assembly based on the second model, the first motor having a different power than the second motor, the first suspension assembly having a different stiffness than the second suspension assembly, coupling the second motor within the first cavity, and coupling the second suspension assembly to the first chassis portion. Example 16 includes the method of example 15, further including assembling a second chassis portion including a second cavity.
Example 17 includes the method of example 16, further including coupling the first chassis portion to a front of a battery platform, coupling the second chassis portion to a rear of the battery platform. Example 18 includes the method of example 16, further including, in response to determining the vehicle is the first model coupling a third motor within the second chassis portion, and coupling a third suspension assembly to the second chassis portion. Example 19 includes the method of example 18, wherein the third motor has substantially a same power as the first motor, the third suspension assembly has a same stiffness as the first suspension assembly. Example 20 includes the method of example 16, wherein the first cavity and the second cavity are of substantially a same size. Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent. The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure. | 186,879 |
11858572 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS Embodiments are related to a parallel cell based mobility production system capable of responding to various customer needs by producing vehicles of various types together when producing vehicles, continuing productions and increasing production without shutdown or separate construction even in the event of an abnormal situation during the process, securing flexibility by setting the operation of each cell individually, rapidly changing production facilities according to rapid changes in product cycles, and installing and operating in factories inside buildings in downtown rather than outside factories. FIG.1is a configuration diagram of a parallel cell based mobility production system according to an embodiment,FIG.2is a diagram showing a passive cell of a parallel cell based mobility production system according to an embodiment,FIG.3is a diagram showing an automatic cell of a parallel cell based mobility production system according to an embodiment, andFIGS.4to5are diagrams showing an operation method of a parallel cell based mobility production system according to an embodiment. FIG.1is a configuration diagram of a parallel cell based mobility production system according to an embodiment. The parallel cell based mobility production system according to an embodiment is a parallel cell based mobility production system capable of producing various vehicle models through a single production system. The system includes a serial production line100which is composed of one or more cells arranged in series, and through which vehicles of various types sequentially pass to be processed, a parallel production line300which is composed of a plurality of sublines arranged in parallel, each subline provided with a plurality of cells arranged in series and matched for each vehicle type, and in which the vehicle passing through the serial production line is fed to a corresponding subline for each vehicle type, and an inspection line500in which the vehicle of various types passing through the parallel production line are sequentially fed. The vehicle production process is very complex and sequential. In the case of an embodiment, this is simplified into a serial production line100, a parallel production line300, and an inspection line500. In addition, the serial production line100is an integrated series of common production processes, and is composed of processes in which the change in task is not large even though the vehicle types are different. In addition, the parallel production line300is composed of a plurality of sublines300and each subline is arranged in parallel between the serial production line100and the inspection line500. Also, each subline is responsible for the production of vehicles of different types. Lastly, the inspection line500is a serial line through which all vehicle types commonly pass by performing sequential inspection on all vehicles. Specifically, an embodiment parallel cell based mobility production system is intended to enable the production of vehicles of various types in one place. When the types of vehicles to be produced are various, there may be a difference in the installation order for each vehicle type. Further, even if the same type of task is performed, there may be a difference in the amount of task. Still further, the difference in optional parts may be large between vehicles, and there may be cases where a customer has ordered a separate special order. 
Thus, it is difficult for the conventional conveyor production system to accommodate such variation. To this end, an embodiment proposes a parallel cell based mobility production system capable of producing various vehicle types through one production system. First, the serial production line100is composed of one or more cells arranged in series, and allows vehicles of various types to sequentially pass through so that the production operation is performed. Further, the parallel production line300is composed of a plurality of sublines300arranged in parallel. Each subline300is provided with a plurality of cells arranged in series. In addition, each vehicle passing through the serial production line100is matched to a subline for each vehicle type, so that the vehicle passing through the serial production line100is fed to a corresponding subline for each vehicle type. Still further, in the inspection line500, the vehicles of various types that have passed through the parallel production line300are sequentially fed, so that inspection and testing are sequentially performed. First, the serial production line100may perform pre-task and chassis installation process. Specifically, the serial production line100is composed of a plurality of cells arranged in series, and each cell may perform pre-task and chassis installation process sequentially. The serial production line100is composed of sequentially arranged pre-task cell110, chassis alignment task cell120, drive module installation cell130, and chassis mounting cell140, and the vehicles of various types sequentially pass through each cell so that operation may be performed. For example, in the pre-task cell110, a vehicle body is supplied and a pre-task process may be performed before chassis task. This may be a manual cell in which manual task is performed, and operations such as sunroof, engine room task, wiring input, etc. are performed. It is represented by Sun Trim in the drawing. In the case of the chassis alignment task cell120, it is represented by CM Manual1in the drawing, and operations such as a brake tube, EPCU, alignment task before decking, etc., are performed. In the case of the drive module installation cell130, it is represented by CM Auto in the drawing, and PE module and high voltage battery mounting are performed. In the case of the chassis mounting cell140, it is represented by CM Manual2in the drawing, and a wheel speed sensor, wheel guard, brake hose, etc., are installed. After such a series of processes are sequentially performed for all vehicles, they are fed onto the corresponding sublines for each vehicle type. In particular, by arranging the chassis mounting cell140on the serial production line100, efficiency can be maintained since similar chassis operation is performed in different vehicle types. The parallel production line300can continuously configure trim lines for each vehicle type, so that production flexibility can be secured. In the subline300of the parallel production line300, a plurality of cells to which a unique task is assigned are serially arranged, and a plurality of sublines may share a corresponding cell for a certain task. That is, the subline300of the parallel production line300is composed of sequentially arranged indoor and outdoor trim task cell, optional part mounting cell, bumper mounting cell, wheel mounting cell, and door mounting cell. A vehicle of the type corresponding to the subline may sequentially pass through each cell and the operation may be performed. 
In addition, the optional part mounting cell, wheel mounting cell, and door mounting cell of the subline are provided in a smaller number than the total number of the sublines, so that each subline can share the optional part mounting cell, the wheel mounting cell, and the door mounting cell with each other. In addition, the indoor and outdoor trim task cell310,320, and330is composed of sequentially arranged first cell310, second cell320, and third cell330. The first cell310is represented as T Manual in the drawing, and it can perform one or more of vehicle wiring installation, seat belt installation, and roof rack installation. The second cell320is represented as TA in the drawing, and can perform one or more of crash pad installation, headlining installation, and washer liquid reservoir installation. The third cell330is represented by F1Manual in the drawing, and can perform one or more of indoor trim installation, indoor console installation, air conditioning duct installation, and floor carpet installation. In addition, the optional part mounting cell340is represented by FA1in the drawing, and may perform one or more of seat installation, glass installation, and FEM installation. The bumper mounting cell350is represented by F2Manual in the drawing, and may perform one or more of bumper installation, pedal installation, wiper installation, and wiring installation. The wheel mounting cell360is represented by FA2in the drawing, and may perform one or more of wheel installation, tire installation, and undercover installation. The door mounting cell370is represented by F3Manual in the drawing, and in the cell a door and a weather strip are installed. Each vehicle is assigned to a corresponding subline for its vehicle type and moves along the subline, in which various parts are mounted. In addition, vehicles discharged from the sublines are sequentially introduced into the inspection line500to undergo various inspections and tests. Meanwhile, as shown in the drawings, the serial production line and the inspection line may be provided on both sides, respectively, and the parallel production line may be provided between the serial production line and the inspection line. In addition, the plurality of sublines of the parallel production line is continuously arranged in a vertical direction, and a vehicle type having a large number of productions may pass through the subline aligned above. FIGS.4to5are diagrams showing an operation method of a parallel cell based mobility production system according to an embodiment. As shown inFIG.4, the vehicle with the largest number of productions, for example, small-sized cars, may be produced through the subline300located at a highest level. Accordingly, the vehicle with the largest number of productions can be produced the fastest by forming a moving trajectory as close as possible to a straight line. FIG.5shows that FA1cell is shared, and as shown inFIG.5, different sublines300and300′ share the optional part mounting cell located in the middle so that efficient production can be performed. 
For example, in the case of a cell that does not vary by vehicle type or is in charge of a relatively fast process, such as the optional part mounting cell, multiple sublines share the cell, thereby reducing production cost and increasing space utilization. In the illustrated embodiment, the number (2 to 3) of the optional part mounting cell, wheel mounting cell, and door mounting cell of the subline is less than the total number (4) of sublines, and each subline can share the optional part mounting cell, the wheel mounting cell, and the door mounting cell with each other. In addition, in the case of vehicles with relatively fast production flows such as small-sized cars, the subline300having the shortest moving distance is used, and in the case of large-sized cars, since the number of productions is small, the subline300′ located below is used. Accordingly, various vehicle types can be produced together, and efficiency can be increased by dualizing the production path when producing vehicles of various types together. On the other hand, these cells can be divided into a manual cell in which a manual operation is performed and an automatic cell in which automatic assemblies are performed by a robot.FIG.2shows an embodiment of the manual cell, andFIG.3shows an embodiment of the automatic cell. In addition, each cell can be easily moved within a factory, so production efficiency can be increased by efficient cell relocation. If maintenance or replacement of a specific subline is required, there is an advantage that production can be continuously performed without interruption by adding an extra subline and performing maintenance. According to an embodiment parallel cell based mobility production system, it is possible to respond to various customer needs by producing vehicles of various types together when producing vehicles. Further, it is possible to continue productions and increase production without shutdown or separate construction even in the event of an abnormal situation during the process. Still further, it is possible to secure flexibility by setting the operation of each cell individually. Still further, it is possible to rapidly change production facilities according to rapid changes in product cycles, and to install and operate in factories inside buildings in downtown rather than outside factories. Although shown and described in relation to specific embodiments of the present invention, it will be obvious to a person of ordinary knowledge in the art that the present invention can be variously improved and changed within the scope of the technical spirit of the present invention provided by the following claims. | 13,058 |
11858573 | DETAILED DESCRIPTION OF THE INVENTION Referring to the figures, wherein like numerals indicate like or corresponding parts throughout the several views, a steerable drive wheel assembly according to a first exemplary embodiment of the invention is generally shown at30inFIGS.1-17. Generally stated, the drive wheel assembly30is composed of three interacting sub-assemblies or components: an outer housing32, an intermediate suspension module34and a drive module36. Each component will be described in turn. The outer housing32is both a structural member for the assembly30as well as an exterior shell within which is defined an interior space used to shelter, at least partially, the intermediate suspension module34and drive module36components. The structural attributes of the outer housing32arise from the fact that the assembly30attaches to a cart or other wheeled object through the outer housing32. For example,FIGS.3-6depict two steerable drive wheel assemblies30joined to a lift cart38via their respective outer housings32. The outer housing32includes a top40, which may take any one of many different forms. Given the structural demands required of the outer housing32, the top40may be fabricated from a thick plate steel or other sturdy material. In the examples provided, the top40is generally flat and its shape is generally rectangular. In this generally rectangular form, the top40can be seen having opposed front and rear edges, along with opposed left42and right44edges. The front and rear edges can be seen to have some contour, whereas the left42and right44edges are more or less straight. Of course, these shape details are highly variable, and could be modified to suit any desired shape of the top40, including round, oval, hexagonal, etc. Optionally, the top40may be fitted with one or more hoist anchors46.FIG.1shows four such hoist anchors46. Hoist anchors46are provided to conveniently hoist the assembly30for installation and maintenance. A right stabilizer arm48extends perpendicularly from the right edge44of the top40. Similarly, a left stabilizer arm50extends perpendicularly from the left edge42of the top40. The right48and left50stabilizer arms are sturdy, rigid elements made from steel or other sufficiently strong material. Optionally, each stabilizer arm48,50may include an external pass-through service window52, for purposes to be described subsequently. Although not visible inFIGS.1and2, for purposes of clarity, a front panel54extends perpendicularly from the front edge of the top40and directly connects each of the left50and right48stabilizer arms. Similarly, a rear panel56extends perpendicularly from the rear edge of the top40and directly connects each of the left50and right48stabilizer arms. The front54and/or rear56panels can be seen in at leastFIGS.6and8-12. With the panels54,56secured to stabilizer arms48,50and these all joined to the top40, a monolithic structure is formed having substantial structural integrity. That is, the panels54,56link the stabilizer arms48,50into a robust, box-like configuration that is capable of maintaining its integrity under all foreseeable combinations of vertical, lateral and torsional loading. As mentioned previously,FIGS.3-6depict one exemplary application of the drive wheel assembly30in the context of an industrial lift cart38. Those of skill in the art will know that lift carts38can take many different forms as may be dictated by their intended purpose. 
In the examples shown, the lift cart38has a simple tubular frame supporting corner-mounted casters58. A pair of fork tubes60are securely attached within the frame. The fork tubes60enable the lift cart38to be easily raised and repositioned by a forklift (not shown). In this example, outriggers62extend from the outer housing32and lock onto the fork tubes60. In this way, the outriggers62can be seen as optional extensions of the outer housing32and serve as a special attachment feature for this type of lift cart38application. In other applications, it may be preferable to bolt the top40directly to the wheeled object to which the drive wheel assembly30is incorporated. Perhaps any wheeled object can be fitted with one or more drive wheel assemblies30to achieve steerable drive capability. The invention is described for use in industrial and/or educational robotic settings, however these are only examples and not to be construed as limiting. The intermediate suspension module34is disposed at least partially within the sheltered interior space of the outer housing32. That is to say, the intermediate suspension module34is located below the top40and in-between the left50and right48stabilizer arms, where it is protected. The intermediate suspension module34includes a suspension plate64disposed directly below the top40of the outer housing32. The suspension plate64may be generally flat, and have a shape that corresponds, more or less, to the shape of the top40, although smaller. That is, the suspension plate64may have a generally rectangular shape, although conformity to a classic geometry is not actually relevant. In this way, it can be seen that the suspension plate64has opposing front and rear edges that correspond, at least somewhat, to the respective front and rear edges of the top40. Also, the suspension plate64has opposing left66and right68edges corresponding respectively to the left42and right44edges of the top40. A right leg70extends perpendicularly from the right edge68of the suspension plate64. A left leg72extends perpendicularly from the left edge66of the suspension plate64. The intermediate suspension module34can be seen to take the appearance of a smaller version of the outer housing32(minus the panels54,56), with the suspension module34nested inside the outer housing32. In this manner, the left stabilizer arm50is parallel to and lies just outside of the left leg72. And likewise, the right stabilizer arm48is parallel to and lies just outside of the right leg70. The left72and right70legs each include an interior pass-through window74, as can be clearly seen inFIGS.1and2. The interior pass-through windows74at least partially, but preferably substantially, overlap the exterior pass-through service windows52of the respective left50and right48stabilizer arms to provide direct access to the drive module36within the sheltered interior space of the outer housing32. Thus, some light maintenance and inspection work can be accomplished through the overlapping service windows52,74. At least one left linear guide bearing assembly is operatively disposed between the outer housing32and the intermediate suspension module34. And likewise, at least one right linear guide bearing assembly is operatively disposed between the outer housing32and the intermediate suspension module34. 
More specifically, in the illustrated examples two left linear guide bearing assemblies are disposed between the left stabilizer arm50and the left leg, and two right linear guide bearing assemblies are disposed between the right stabilizer arm48and the right leg. Each linear guide bearing assembly includes a rail76fixedly attached to the outside facing surface of the respective leg70,72. Each rail76is fabricated from metal or some other sufficiently durable material. Furthermore, each linear guide bearing assembly includes a channel78fixedly attached to the inside facing surface of the respective stabilizer arm48,50. Preferably, but not necessarily, the channels78are fabricated from a polymeric material to provide good lubricity for a sliding interface. The rails76and mating channels78are shown having a dovetail fit configuration, however other interlocking and non-interlocking shapes are certainly possible. And of course, the attachment points of the rails76and channels78could be reversed, such that the channels78attach to the legs70,72and the rails76to the arms48,50. As can be seen inFIG.2, the linear guide bearing assemblies are spread apart as far as possible on each leg70,72to provide maximum stability. In cases where additional stability is needed, three or more linear guide bearing assemblies may be used between each leg70,72and arm48,50. The linear guide bearing assemblies establish controlled sliding interfaces between the outer housing32and intermediate suspension module34. Thus, when the outer housing32is securely attached to a lift cart38or some other wheeled object, the intermediate suspension module34is able to be raised and lowered into and out of the sheltered interior space of the outer housing32. Guided linear extension and retraction of the intermediate suspension module34relative to the outer housing32can perhaps best be observed by comparingFIGS.9and10. In these illustrations, the drive module36, which is carried inside the intermediate suspension module34, can be seen raised above the floor surface inFIG.9, and then lowered into contact with the floor surface inFIG.10. This up and down movement is facilitated by the linear guide bearing assemblies. The drive wheel assembly30further includes at least one biasing member80operatively disposed between the outer housing32and the intermediate suspension module34. In the illustrated examples, four biasing members80are provided. The purpose of the biasing members80is to urge downward vertical relative movement of the intermediate suspension module34relative to the outer housing32in cooperating alignment with the linear guide bearing assemblies, and thereby improve floor traction for the drive module36. In this context, the biasing members80can be generally understood as springs which, in the illustrated examples, are operatively and strategically disposed between the top40of the outer housing32and the suspension plate64of the intermediate suspension module34. In the example ofFIGS.1-17, the biasing members80are configured as double-acting pneumatic air cylinders attached about the four corners of the suspension plate64. Each pneumatic cylinder carries a double-acting piston, which is attached to the top40of the outer housing32. 
Pressurized air, as from a source tank82(FIG.7), is routed to the bottom of the double-acting piston where the natural compressibility of air forms a spring that will urge separation between the intermediate suspension module34and the outer housing32, thus improving floor traction for the drive module36. The double-acting nature of the illustrated pneumatic cylinders is that pressurized air can alternatively be routed to the top of each double-acting piston, in which case the intermediate suspension module34will be retracted into the outer housing32, causing the drive module36to lift away from the floor by distance L as depicted inFIGS.6&9. When the drive module36is thus lifted away from the floor, the lift cart38or other wheeled object to which the assembly30is attached may be free-wheeled without resistance or interaction of the drive wheel assembly30. Either alternatively to, or in conjunction with, double-acting pneumatic air cylinders80, one or more retractor springs84may be operatively disposed between the outer housing32and the intermediate suspension module34, as best seen inFIGS.1and2. That is to say, a single-acting pneumatic cylinder could be substituted for the double-acting type and accomplish the aforementioned lifting of the drive module36away from the floor with the aid of retractor springs84. The one or more retractor springs84are configured to counteract the constant traction-oriented biasing function of the biasing members80. Thus, in the exemplary case of double-acting pneumatic air cylinders80like those shown throughoutFIGS.1-17, the retractor springs84supplement the lifting action generated by the air when raising the drive module36out of contact with the floor. And in the alternative case of single-acting pneumatic air cylinders80(not illustrated), the retractor springs84would provide the sole and exclusive energy needed to lift the drive module36out of contact with the floor. In this latter case, of course, the normal downward pressure generated by single-acting pneumatic air cylinders80would be required to overwhelm the retractor springs84in order to accomplish the desired floor traction in normal use of the drive wheel assembly30. As will be more fully described further below, the examples ofFIGS.18-22depict embodiments in which the biasing members80′ are shown in the form of coil compression springs. In these configurations, the retractor springs84would not be used. The drive module36is disposed below the intermediate suspension module34in an innermost sheltered region of the assembly30and, as previously mentioned, vertically moveable with the intermediate suspension module34relative to the outer housing32. More specifically, the drive module36is nested inside intermediate suspension module34, directly below the suspension plate64and in-between the left72and right70legs. The drive module36has a base86disposed directly below the suspension plate64of the intermediate suspension module34. Although its configuration is widely variable to suit the circumstances, in the illustrated examples the base86is generally flat and generally rectangular. As measured on a diagonal, the base86is smaller than the narrowest area inside the intermediate suspension module34, such that the drive module36is free to rotate inside the intermediate suspension module34without restriction. The biasing members80are each operatively connected to the base86. In the case of the pneumatic cylinders ofFIGS.1-17, the connection is made to the underside of the base86. 
In the case of the coiled compression springs ofFIGS.18-22, the connection is made to the upper side of the base86′. The drive module36includes first and second drive subassemblies. Both of the first and second drive subassemblies are supported below the base86. For convenience, numbered elements of the first drive subassembly are distinguished by an “A” suffix, whereas numbered elements of the second drive subassembly are distinguished by a “B” suffix. Each drive subassembly includes a wheel88A,88B. To be clear, the wheel of the first drive subassembly is88A, and the wheel of the second drive subassembly is88B. The first88A and second88B wheels are supported in side-by-side orientation for independent rotation about a common horizontal axis H upon respective axles90A/B. In other contemplated embodiments (not illustrated), the wheels88A,88B could be supported on a common, unitary axle for independent rotation about the horizontal axis H. Each drive subassembly includes a dedicated drive motor92A/B. As is typical with most electric motors, each drive motor92A/B has an armature and a stator body. The armatures of each drive motor92A,92B are disposed for rotation along respective axes that are parallel to one another and parallel to the common horizontal axis H. In some contemplated embodiments (not shown), one or both drive motors92A,92B could be oriented so that their armatures are not parallel to the common horizontal axis H. However, certain space-saving advantages can be achieved by mounting the drive motors92A,92B so that their armatures are parallel to the horizontal axis H. Notably, this orientation allows for the stator body of the first drive motor92A to overlap, at least partially, the second wheel88B. And similarly, the stator body of the second drive motor92B can be mounted so as to overlap, at least partially, the first wheel88A. This double-overlapping configuration of the two, independently controlled drive subassemblies can be appreciated from examination of the several drawing figures. As a consequence, relatively large drive motors92A/B can be used to power the respective wheels88A/B in a remarkably condensed package. The first drive motor92A is operatively connected to the first wheel88A through a first transmission94A. And likewise, the second drive motor92B is operatively connected to its second wheel88B through a second transmission94B. The first94A and second94B transmissions can take many different forms, including meshing gears, friction plates, belt-and-pulley arrangements, and the like. Direct drive arrangements are also possible, in which the transmission is effectively reduced to the mechanical coupling between armature and roller88A/B. However, the illustrations show the first94A and second94B transmissions in the exemplary form of chain and sprocket drivetrains, which history has proven to be both a relatively inexpensive and robustly reliable option. Turning next to the exploded view ofFIG.11, the assembly30can be seen to include a rotary bearing96, operatively disposed between the drive module36and the intermediate suspension module34. The rotary bearing96is also clearly visible inFIGS.9,10,19and22. The rotary bearing96enables rotational movement of the drive module36relative to the intermediate suspension module34about a generally vertical steering axis V. 
The steering axis V passes centrally through the assembly30, such that the first88A and second88B wheels will be equally laterally offset therefrom as perhaps best seen in the embodiment ofFIG.21. The rotary bearing96can be seen to reside in a horizontal plane or region in-between the suspension plate64and the base86. Thus, the plane of the rotary bearing96is a horizontal space perpendicular to and centered about the vertical steering axis V. The larger the diameter of the rotary bearing96, the greater stability will be provided against racking as between the drive module36and intermediate suspension module34. The rotatory bearing96is shown in the exemplary form of a double-stacked ball-type roller bearing, however other types of bearing interfaces, including but not limited to all roller-types as well as sliding or plain bearing types, magnetic types, and fluid types are certainly possible depending on the application and suitability for the particular design parameters. Generally stated, in use when both motors92A,92B are energized to rotate in the (correspondingly) same direction at the same rate, the respective wheels88A,88B will also be turned in the same direction at the same rate causing the drive wheel assembly30to move in a straight line. To move the drive wheel assembly30in a curved line, both motors92A,92B are energized to rotate in the same direction but at different rates. This will cause one wheel88A or88B to turn faster than the other. The intermediate suspension module34can be made to pivot about the steering axis V by energizing the motors92A,92B to rotate in (correspondingly) opposite directions at the same rate. Precise angular movements can be accomplished by carefully limiting the angular rotations of each wheel88A/B. And of course, a wide variety of complex motions are possible through the strategic rotational control of the respective wheels88A,88B. Such precision control of the drive wheel assembly30depends on accurate control of the drive motors92A,92B. One of the key features of this invention pertains to its superior ability to accurately control the motion of the drive wheel assembly30. This is accomplished by a plurality of strategically deployed sensors—that is, by a strategic sensor array. Specifically, a first angular velocity sensor98A is operatively associated with the first drive motor92A. And a second angular velocity sensor98B is operatively associated with the second drive motor92B. These angular velocity sensors98A/B can be located in various convenient locations, including but not limited to at the rollers88A/B or along components of the transmissions94A/B. In the illustrated examples, however, the angular velocity sensors98A/B are disposed between the armature and the stator body of the respective drive motor92A/B, as shown inFIGS.9,10,19and22. Another member of the strategic sensor array is an angular position sensor100. The angular position sensor100is operatively disposed between the drive module36and the intermediate suspension module34.FIGS.9,10,19and22depict the angular position sensor100located in conjunction with the rotary bearing96. This convenient location is by no means the only available location in which to mount the angular position sensor100. In theory, the motion of the drive wheel assembly30can be adequately controlled by the two angular velocity sensors98A/B. 
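As an illustrative aside (not part of the disclosure), the relationship between the two motor commands and the resulting motion described above can be sketched in a few lines of Python. The function name, the wheel radius, and the half-track value (the lateral offset of each wheel 88A/88B from the steering axis V) are assumptions introduced only for this sketch.

# Minimal sketch, assuming a simple rigid differential-drive model of the drive module.
# "wheel_radius" and "half_track" are hypothetical parameters, not values from the disclosure.
def wheel_speeds(forward_speed, turn_rate, wheel_radius, half_track):
    # forward_speed: desired travel speed of the drive module [m/s]
    # turn_rate: desired rotation rate of the drive module about the vertical steering axis V [rad/s]
    w_a = (forward_speed - turn_rate * half_track) / wheel_radius  # angular speed of wheel 88A
    w_b = (forward_speed + turn_rate * half_track) / wheel_radius  # angular speed of wheel 88B
    return w_a, w_b

# turn_rate == 0                      -> equal rates, straight-line travel
# turn_rate != 0, forward_speed > 0   -> unequal rates, curved travel
# forward_speed == 0, turn_rate != 0  -> opposite rates, pivot in place about axis V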
Each angular velocity sensor98A/B tracks the instantaneous rotation of each wheel88A/B, from which can be computed linear velocity and also rotational position of the intermediate suspension module34. But in practice, rollers88A/B slip, floors are uneven and tread diameters get smaller. As a result, it has been found that precision control of the drive wheel assembly30requires real-time monitoring of the absolute angular position of the drive module36relative to the intermediate suspension module34. According to the principles of this invention, the motion of the drive wheel assembly30can be better controlled by this strategic sensor array, which includes the ability to assess the rotational position of the intermediate suspension module34, preferably in real-time, by the angular position sensor100. The drive motors92A/B and strategic sensor array98A/B,100require electrical signals provided by wired connections. To complicate matters, the drive motors92A/B and their angular velocity sensors98A/B are designed to swivel inside the outer housing32. And all of these elements92A/B,98A/B and100extend and retract relative to the outer housing32. This complex array of motions demands a careful and effective wire management strategy. Such wire management strategy is accomplished by way of a serpentine energy chain102which is best seen inFIGS.12,14and16. The wires conducting electrical signals to/from the various elements92A/B,98A/B and100are supported within the articulating conduit of the serpentine energy chain102. The particularly clever aspect of the serpentine energy chain102is in its placement generally co-planar with the rotary bearing96. That is, the serpentine energy chain102is disposed in the plane of the rotary bearing96. The illustrations depict the serpentine energy chain102lying entirely outside the rotary bearing96. However, in contemplated embodiments where the rotary bearing96is of a sufficiently large diameter, the serpentine energy chain102could be located entirely on the interior of the rotary bearing96. The serpentine energy chain102comprises a plurality of jointed conduit segments fixed at an outer end thereof to the outer housing32and at an inner end to the intermediate suspension module34. By viewingFIGS.12,14and16in rapid sequence, it can be observed that the outer end of the serpentine energy chain102remains stationary, which coincides with its connection to the outer housing32. By comparison, it can also be observed that the inner end of the serpentine energy chain102rotates together with the intermediate suspension module34. The linked body in-between these two ends of the serpentine energy chain102wraps and unwraps around the rotary bearing96like a snake. This articulating conduit safely manages the electrical wires so that electricity and signals can be continuously provided to/from the several critical elements92A/B,98A/B and100. Moreover, by positioning the serpentine energy chain102generally co-planar with the rotary bearing96, maximum operating efficiency and efficacy can be achieved in a small package. Before leavingFIGS.1-17, it bears mentioning that an optional sweeper or bumper bar structure104may be extended like a low-hanging skirt from the intermediate suspension module34. The bumper bar structure104shown in the accompanying illustrations has a generally octagonal configuration when viewed from above. 
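As another illustrative aside (not from the disclosure), the drift-correction role of the angular position sensor 100 relative to pure wheel odometry can be pictured with the short Python sketch below. The blend factor and all parameter names are assumptions made only for the example; the disclosure does not prescribe a particular fusion method.

# Minimal sketch, assuming a simple complementary-filter style correction.
# "wheel_radius", "half_track" and the blend factor "k" are hypothetical values.
def update_heading(theta_est, w_a, w_b, theta_abs, dt, wheel_radius, half_track, k=0.1):
    # rotation rate of the drive module implied by the difference of the two wheel speeds
    omega_odo = wheel_radius * (w_b - w_a) / (2.0 * half_track)
    theta_pred = theta_est + omega_odo * dt  # dead-reckoned angular position
    # pull the estimate toward the absolute angular position sensor 100 to remove drift
    # caused by wheel slip, uneven floors and shrinking tread diameters
    return theta_pred + k * (theta_abs - theta_pred)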
The bumper bar structure104provides a measure of additional structural rigidity to the intermediate suspension module34, and also surrounds the wheels88A/B so as to push obstructions encountered during travel out of the way. Turning now toFIGS.18-19, the steerable drive wheel assembly30′ is shown in the context of a second embodiment. In this example, the outer housing32includes a riser block106fixedly attached to the top40. In some applications, it may be advisable to provide alternative mounting options as needed to suit the circumstances. This embodiment, together with the embodiment ofFIGS.3-6, will enable those of skill in the art to appreciate that add-on mounting features, like outriggers62and riser blocks106, are easily adopted for use with the assembly30′. And also, the embodiment ofFIGS.18-19utilized coil compression springs for the biasing members80′. Because of the added space afforded by the riser block106, relatively long coil compression springs can be used if they are permitted to pass through the top40. InFIGS.20-22, the steerable drive wheel assembly30″ is shown in the context of a third embodiment. This third embodiment is similar in most respects to the first embodiment ofFIGS.1-17, however coil compression springs are used for the biasing members80″ instead of pneumatic cylinders. The drive wheel assembly30,30′,30″ of the present invention is ideally suited for use in all types of motorized objects and carts, particularly in industrial and/or educational robotics applications. Of course, these are merely examples of the many possible applications of the principles of this invention. The drive wheel assembly30,30′,30″ provides both motive force and directional control in a compact package. Due to the unique design, the drive wheel assembly30,30′,30″ can be manufactured at low-cost and with low weight, because a dedicated steering motor is not needed (as in swerve drive systems). However, the drive wheel assembly30,30′,30″ is exceptionally powerful for its small size owing to the use of two tractive motors92A,92B simultaneously driven through respective transmissions94A,94B. That is, the assembly30,30′,30″ uses, in total, two drive motors92A,92B which together provide both steering and tractive functionality. Thus, the utilization rate of all motors92A,92B in the assembly30,30′,30″ is effectively 100% at all times. The drive wheel assembly30,30′,30″ is highly maneuverable, given the independent drive control of each wheel88A,88B, which inherently enables straight tracking with ease. The drive wheel assembly30,30′,30″ is agile, robust and adaptable to nearly any conceivable application. The open frame construction with optional overlapping service windows52,74makes the drive wheel assembly30,30′,30″ easily serviceable. And the drive wheel assembly30,30′,30″ can be easily scaled up or down to suit the application. Overall, the drive wheel assembly30,30′,30″ overcomes most or all disadvantages inherent in prior art steerable drive wheel designs. The drive wheel assembly30,30′,30″ may be designed using different speed and position control strategies. The strategic sensor array98A/B,100. . . The large number of electric components (motors and sensors) require a larger number of electrical wires capable of moving with the intermediate suspension module34and drive module36. Therefore, the management of electric wires requires careful handling due to the rotational characteristics of the drive module36supported in the intermediate suspension module34. 
The system assembly30,30′,30″ includes an articulated wire harness for this purpose, in the form of a serpentine energy chain102that wraps and unwraps around the periphery of (alternatively inside) the rotary bearing96. The drive wheel assembly30,30′,30″ can be used in many different and various kinds of industrial applications. Motorized carts can take many different forms. One exemplary application for this alternative drive wheel assembly30,30′,30″ is the lift cart38ofFIGS.3-6. Depending on the configuration of the lift cart38, one or more drive wheel assemblies30,30′,30″ can be attached to provide steerable motive power thereby negating or augmenting the need for a forklift. Similarly, any wheeled object can be enhanced by the addition of one or more drive wheel assemblies30,30′,30″. In operation, an operator interacts remotely via a joystick or other type of steering control device (not shown) to send directional and speed commands to the drive wheel assembly30,30′,30″. Such commands may also include raise and lower directives if the drive wheel assembly30is fitted with lifting capability such as by double-acting pneumatic cylinders. With such commands issued, the one or more drive wheel assemblies30,30′,30″ will cause the lift cart38or other wheeled object to move in the intended direction and at the desired speed. The foregoing invention has been described in accordance with the relevant legal standards, thus the description is exemplary rather than limiting in nature. Variations and modifications to the disclosed embodiment may become apparent to those skilled in the art and fall within the scope of the invention. | 28,891 |
11858574 | DESCRIPTION OF EMBODIMENTS In the present specification, a “rear wheel steering amount” includes both the rear wheel steering angle itself and the amount of change with respect to the rear wheel steering angle. Hereinafter, embodiments of the invention will be described using the drawings. First Embodiment FIG.1is an overall schematic configuration diagram of a vehicle to which a steering control device of a first embodiment according to an embodiment of the invention is applied. As illustrated inFIG.1, a vehicle100is a four-wheel steering (4WS) type vehicle capable of steering both front wheels6and rear wheels7. The vehicle100includes a steering control device1that transmits commands to each control unit such as a front wheel steering angle control unit12that drives and controls an actuator26and a rear wheel steering angle control unit15that drives and controls an actuator28via a communication line, a vehicle state sensor2that acquires motion state information of the vehicle100, and a communication line that transmits a signal from the vehicle state sensor2to the steering control device1or each control unit. The actuator26includes a front wheel power steering device13. The actuator28includes a rear wheel power steering device16. A braking device (not illustrated), a vehicle drive system, and the like are included in the actuator. As the actuator, a hydraulic type or an electric type can be used. The control unit includes a brake control unit and a drive torque control unit (not illustrated) in addition to the front wheel steering angle control unit12and the rear wheel steering angle control unit15described above. The front wheel power steering device13includes the steering wheel4, the steering sensor5such as a torque sensor for detecting the steering direction and the torque from the steering wheel4and a steering angle sensor for detecting a steering angle, a rack shaft25that is connected to the front wheel6by the link, the actuator26that applies thrust to the rack shaft25, and the front wheel steering angle control unit12that gives a command to the actuator26based on the detection value of the steering sensor5. The rear wheel power steering device16includes a rack shaft27connected to the rear wheel7via a link, an actuator28for applying thrust to the rack shaft27, and the rear wheel steering angle control unit15that gives a command to the actuator28based on the command from the steering control device1. The front wheel power steering device13is configured to generate thrust by the actuator26based on the torque and/or steering angle that is generated when the driver steers the steering wheel4and is detected by the steering sensor5, and to assist the driver's input to steer the front wheel6. The front wheel power steering device13can also use a steer-by-wire system in which the actuator26is independent of the driver's operation. The steering control device1gives the steering angle command to the front wheel steering angle control unit12based on the information of the steering wheel4, the steering direction from the steering wheel4, and the steering sensor5such as the torque sensor that detects the torque and the steering angle sensor that detects the steering angle. Since it is a steer-by-wire system, the command is given independently of the driver's operation. 
On the other hand, the rear wheel power steering device16is configured to generate thrust by the actuator28and steer the rear wheel7based on the command of the steering control device1independently of the steering of the driver's steering wheel4. In this embodiment, it is assumed that the left and right wheels of both the front wheels6and the rear wheels are steered by the same angle, but the left and right wheels (four wheels) of the front wheels6and the rear wheels7may be controlled in steering independently. Next, the processing procedure of the steering control device1will be described with reference to the flowchart and the operation example. FIG.2is an operation explanatory view of the steering control device1illustrated inFIG.1. As illustrated inFIG.2, when the vehicle100is traveling, the steering control device1receives the front wheel steering angle detected by the steering sensor5and the vehicle speed included in the motion state information of the vehicle100detected by the vehicle state sensor2. Then, the steering control device1outputs a predetermined steering angle command to the rear wheel steering angle control unit15based on the received front wheel steering angle and vehicle speed. The rear wheel steering angle control unit15outputs a torque command to the actuator28based on a predetermined steering angle command input from the steering control device1. The thrust generated by the actuator28changes the motion state of the vehicle100. Here, the actuator28is, for example, the above-mentioned rear wheel power steering device16or a rear wheel power steering motor. FIG.3is a block diagram of the steering control device1according to this embodiment. As illustrated inFIG.3, the steering control device1is configured by a reference rear wheel angle calculation unit17, a steering angular acceleration calculation unit18, and a rear wheel steering angle calculation unit19. The reference rear wheel angle calculation unit17, the steering angular acceleration calculation unit18, and the rear wheel steering angle calculation unit19are, for example, realized by a processor such as a CPU (Central Processing Unit) (not illustrated), a ROM for storing various programs, a RAM for temporarily storing data generated in the process of calculation, and a storage device such as an external storage device. The processor such as the CPU reads out and executes various programs stored in the ROM, and stores the calculation result that is an execution result in the RAM or an external storage device. Although the explanation is divided into functional blocks for the sake of clarity, the reference rear wheel angle calculation unit17, the steering angular acceleration calculation unit18, and the rear wheel steering angle calculation unit19may be combined into one calculation unit. Alternatively, the configuration may be such that two desired functional blocks of the reference rear wheel angle calculation unit17, the steering angular acceleration calculation unit18, and the rear wheel steering angle calculation unit19are integrated. The reference rear wheel angle calculation unit17forming the steering control device1receives the front wheel steering angle detected by the steering sensor5and the vehicle speed included in the motion state information of the vehicle100detected by the vehicle state sensor2. 
Then, the reference rear wheel angle calculation unit17calculates the reference rear wheel angle based on the received front wheel steering angle and vehicle speed, and outputs the calculated reference rear wheel angle to the rear wheel steering angle calculation unit19described later. The steering angular acceleration calculation unit18forming the steering control device1receives the front wheel steering angle detected by the steering sensor5. Then, the steering angular acceleration calculation unit18calculates the steering angular acceleration based on the received front wheel steering angle, and outputs the calculated steering angular acceleration to the rear wheel steering angle calculation unit19described later. The rear wheel steering angle calculation unit19forming the steering control device1determines the rear wheel steering amount based on the reference rear wheel angle input from the reference rear wheel angle calculation unit and the steering angular acceleration input from the steering angular acceleration calculation unit18. In other words, the rear wheel steering angle calculation unit19calculates the rear wheel steering angle based on the reference rear wheel angle and the steering angular acceleration. The steering angle of the rear wheel7is set smaller than the steering angle of the front wheel6. Here, instead of the vehicle speed (the speed of a vehicle) input to the reference rear wheel angle calculation unit17, the vehicle wheel speed of each wheel may be detected and input to the reference rear wheel angle calculation unit17. Next, a detailed processing procedure of the steering control device1according to this embodiment will be described with reference toFIGS.4to9. FIG.4is a flowchart illustrating an operation flow of the steering control device1according to the first embodiment. As illustrated inFIG.4, in Step S11, the reference rear wheel angle calculation unit17forming the steering control device1calculates the reference rear wheel angle based on the front wheel steering angle detected by the steering sensor5and the vehicle speed included in the motion state information of the vehicle100detected by the vehicle state sensor2. When the vehicle speed range is equal to or less than a certain threshold, the rear wheels7are steered in the opposite phase of the front wheels6, and when the vehicle speed range is equal to or more than a certain threshold, the rear wheels7are controlled to be in the same phase as the front wheels6. If the vehicle speed is constant and equal to or more than a certain threshold, the waveform of the rear wheel reference angle often has a relationship of similar waveform to the front wheel steering angle which is input information. In Step S12, the reference rear wheel angle calculation unit17determines whether the rear wheel reference angle is in phase with the front wheel steering angle (in-phase control). As a result of the determination, if the rear wheel reference angle is in phase with the front wheel steering angle, the process proceeds to Step S13. On the other hand, as a result of the determination, when the rear wheel reference angle is in opposite phase of the front wheel steering angle, the process ends with the calculated reference rear wheel angle as a command value given to the rear wheel steering angle control unit15. 
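By way of illustration only, the Step S11/S12 logic described above can be sketched as follows in Python. The speed threshold and the gain magnitudes are hypothetical placeholder values; the disclosure specifies only the phase relationship and that the rear wheel steering angle is smaller than the front wheel steering angle.

# Minimal sketch of Steps S11/S12, assuming placeholder threshold and gains.
def reference_rear_angle(front_angle, vehicle_speed, speed_threshold=11.0,
                         k_opposite=0.2, k_same=0.15):
    # Step S11: opposite phase at low speed, same phase at high speed
    if vehicle_speed <= speed_threshold:
        return -k_opposite * front_angle  # opposite phase of the front wheels
    return k_same * front_angle           # same phase, smaller than the front angle

def is_in_phase(rear_reference, front_angle):
    # Step S12: in-phase control when the reference angle has the same sign as the front angle
    return rear_reference * front_angle >= 0.0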
In Step S13, the steering angular acceleration calculation unit18forming the steering control device1calculates the front wheel steering angular speed ω (=dδf/dt) and the front wheel steering angular acceleration ω′ (=d²δf/dt²) using the absolute value δf of the front wheel steering angle (in this specification, the front wheel steering angular acceleration, which is the second derivative of the absolute value δf of the front wheel steering angle, is denoted as ω′ for convenience). By using the absolute value δf of the front wheel steering angle, both the steering angular speed and the steering angular acceleration at the start of turning take positive values for both left and right turns. In Step S14, the rear wheel steering angle calculation unit19forming the steering control device1adjusts the amount of change in the rear wheel steering angle or the gain to be applied to the reference rear wheel angle based on the amount of change in the front wheel steering angle and the positive/negative values of the front wheel steering angular acceleration ω′ calculated by the steering angular acceleration calculation unit18in Step S13. Since the purpose is to control the rear wheel7in phase with the front wheel6, when multiplying the reference rear wheel angle by the gain, a positive value including zero (0) is used. Here, the adjustment of the amount of change in the rear wheel steering angle based on the positive/negative values of the front wheel steering angular acceleration ω′ will be described with reference toFIG.5.FIG.5is a diagram illustrating a temporal change of an absolute value δf of a front wheel steering angle and a diagram illustrating a temporal change of a rear wheel steering angle δr, in which a temporal change of a reference rear wheel angle and a temporal change of an actual rear wheel angle δre are illustrated. The temporal change of the absolute value δf of the front wheel steering angle illustrated in the upper part ofFIG.5is the waveform of the absolute value δf of the front wheel steering angle used by the steering angular acceleration calculation unit18to calculate the front wheel steering angular acceleration ω′ in Step S13. In the lower part ofFIG.5, the dotted line represents the temporal change of the reference rear wheel angle, and the solid line represents the actual rear wheel angle. As illustrated in the lower part ofFIG.5, the rear wheel steering angle calculation unit19adjusts the rear wheel steering angle δr to be the actual rear wheel angle illustrated by the solid line based on the amount of change in the front wheel steering angle and the positive/negative values of the front wheel steering angular acceleration ω′ calculated by the steering angular acceleration calculation unit18in Step S13. Next, the details of Step S14will be described with reference toFIGS.6and7.FIG.6is a diagram illustrating the temporal change of the absolute value of the front wheel steering angle, in which a first steering section where the absolute value of the front wheel steering angle increases and a second steering section where the absolute value of the front wheel steering angle decreases and/or becomes constant are illustrated.FIG.7is a diagram illustrating changes over time in rear wheel steering angle, front wheel steering angular acceleration, and gain. 
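The Step S13 computation can be pictured, purely as an illustration, with simple finite differences over sampled values of the absolute front wheel steering angle. The sample interval dt and the variable names are assumptions for this sketch only.

# Minimal sketch of Step S13, assuming three successive samples of the front wheel steering angle.
def steering_derivatives(delta_f_now, delta_f_prev, delta_f_prev2, dt):
    a0, a1, a2 = abs(delta_f_now), abs(delta_f_prev), abs(delta_f_prev2)
    omega = (a0 - a1) / dt                 # d|delta_f|/dt, front wheel steering angular speed
    omega_prev = (a1 - a2) / dt
    omega_dot = (omega - omega_prev) / dt  # d^2|delta_f|/dt^2, front wheel steering angular acceleration
    # Because the absolute value is used, omega and omega_dot start out positive at the
    # beginning of both left and right turns.
    return omega, omega_dot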
As illustrated inFIG.6, the section where the absolute value δf of the front wheel steering angle increases is referred to as the first steering section, and the section where the absolute value δf of the front wheel steering angle decreases and/or becomes constant is referred to as the second steering section. The temporal change of the absolute value δf of the front wheel steering angle illustrated in the upper part ofFIG.6is represented as a waveform in which the absolute value δf of the front wheel steering angle increases in the first steering section, and the absolute value δf of the front wheel steering angle decreases in the second steering section. Such a waveform (profile) corresponds to, for example, a steering operation in the case of performing emergency avoidance, and corresponds to a state in which the steering is immediately returned after turning the steering to a different lane. Further, the temporal change of the absolute value δf of the front wheel steering angle illustrated in the lower part ofFIG.6is represented as a waveform in which the absolute value δf of the front wheel steering angle increases in the first steering section, the absolute value δf of the front wheel steering angle is constant for a predetermined period in the second steering section, and then the absolute value δf of the front wheel steering angle decreases. Such a waveform (profile) corresponds to, for example, a steering operation while traveling on a ramp on a highway. Since the highway ramp is a curve with a constant radius, the steering is maintained for a certain period of time, after which the steering is returned. It corresponds to the waveform at this time. That is, the section where the absolute value δf of the front wheel steering angle is constant in the lower part ofFIG.6corresponds to the state in which the above-mentioned steering state is maintained for a certain period of time. As illustrated inFIG.7, the rear wheel steering angle calculation unit19forming the steering control device1adjusts the amount of change in the rear wheel steering angle δr based on the positive/negative values of the front wheel steering angular acceleration ω′ in the first steering section (the temporal change of the rear wheel steering angle δr in the upper part ofFIG.7), and increases the amount of change in the rear wheel steering angle δr when the front wheel steering angular acceleration ω′ is negative compared to the case when the front wheel steering angular acceleration ω′ is positive. When the gain is used, it is multiplied by the reference rear wheel angle (the dotted line illustrated in the upper part ofFIG.7) and the gain value is adjusted as illustrated in the lower part ofFIG.7. As a result, the rear wheel steering angle δr (the solid line illustrated in the upper part ofFIG.7) at the start of turning (steering) becomes smaller than the reference rear wheel angle (the dotted line illustrated in the upper part ofFIG.7), so the force generated by the rear wheel7becomes smaller and the amount of restoring yaw moment that suppresses the rotating motion of the vehicle100is also reduced. As a result, even in the vehicle100in which the four wheels are controlled, the yaw rate γ rises faster and the deterioration of the turning responsiveness can be suppressed. Further, by using the front wheel steering angular acceleration ω′, it is possible to cope with a sudden increase in steering during steering. 
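One way to picture the Step S14 adjustment, again only as an illustrative sketch, is the gain schedule below. The numeric gains are hypothetical; the disclosure requires only that the gain applied to the reference rear wheel angle stay non-negative, be kept smaller while ω′ is positive in the first steering section, and follow the front wheel steering angle with a constant gain in the second steering section.

# Minimal sketch of Step S14, assuming placeholder gain values.
def rear_wheel_gain(in_first_section, omega_dot, g_turn_in=0.3, g_build=0.8, g_hold=1.0):
    if not in_first_section:
        return g_hold     # second steering section: constant gain, rear angle follows the front wheel
    if omega_dot > 0.0:
        return g_turn_in  # start of steering: keep the change in rear wheel steering angle small
    return g_build        # omega' negative: allow the rear wheel steering angle to build up faster

# rear wheel steering angle command = rear_wheel_gain(...) * reference rear wheel angle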
Here,FIG.8is a diagram illustrating the temporal change of the rear wheel steering angle δr and the temporal change of the yaw rate γ, in which the temporal change of the actual rear wheel angle together with the temporal change of the reference rear wheel angle δre illustrated respectively. As illustrated in the solid line waveform in the lower part ofFIG.8, it can be seen that even in the vehicle100in which the four wheels are controlled, the yaw rate γ rises faster and the deterioration of the turning responsiveness can be suppressed. In the first steering section, the amount of change (gain) of the rear wheel steering angle δr may be adjusted based on the front wheel steering angle, the front wheel steering angular speed ω, the front wheel steering angular acceleration ω′, and the like within a range, where the positive/negative values of the steering angular acceleration are matched. At that time, in order to ensure that the turning performance is not deteriorated at the initial stage of turning (steering), the amount of change (gain) of the rear wheel steering angle δr also changes to be larger as the region changes from a region where the absolute value δf of the front wheel steering angle is small to a region where the absolute value is large. Further,FIG.9illustrates the temporal change of the rear wheel steering angle δr and the temporal change of the front wheel steering angular acceleration ω′. As illustrated inFIG.9, the rear wheel steering angle δr may be controlled from a state in which the front wheel steering angular acceleration ω′ in the first steering section is negative, or from the second steering section. By performing such control, the first half of the first steering section (the section in which the front wheel steering angular acceleration ω′ is positive) becomes 2WS, and the characteristics of the vehicle100are directly reflected in the behavior. When dividing the first steering section into two regions, the yaw rate γ and a lateral acceleration Gy acquired from the vehicle state sensor2in combination with or in place of the front wheel steering angular acceleration ω′, or the temporal change rate (time derivative) of their physical quantities may be used. Considering the delay of the actuator, the delay and accuracy of the vehicle state sensor2, and the delay of the yaw rate γ and the lateral acceleration Gy with respect to the steering angle, it is considered that the most suitable physical quantity for grasping the vehicle condition in the future is the steering angle. In the second steering section, the section in which the absolute value δf of the front wheel steering angle illustrated inFIG.6is constant and/or decreases is made to follow the rear wheel steering angle δr. That is, it is equivalent to determining the rear wheel steering angle δr by multiplying the front wheel steering angle by a constant gain. An operation example of the waveform of the rear wheel steering angle δr and the gain applied to the rear wheel reference angle in the second steering section is as illustrated inFIG.7. Since the waveforms of the front wheel steering angle and the rear wheel steering angle δr are similar and no phase difference occurs, the stability of the vehicle100at the end of turning (steering) is improved. 
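A compact way to summarize the two steering sections is sketched below. The threshold used to decide whether the absolute value of the front wheel steering angle is still increasing and the constant gain applied in the second steering section are illustrative assumptions only, not values disclosed in the specification.

# Sketch: classify the steering section from the change of |delta_f| and apply a
# constant gain to the front wheel steering angle in the second steering section,
# so the rear wheel steering angle follows the front wheel steering angle with no
# phase difference. Threshold and gain are illustrative assumptions.
INCREASE_THRESHOLD = 1e-3   # [rad] assumed dead band for "increasing"
CONSTANT_GAIN = 0.3         # assumed in-phase gain for the second steering section

def steering_section(delta_f_abs, prev_delta_f_abs):
    """Return 'first' while |delta_f| increases, otherwise 'second'."""
    if delta_f_abs - prev_delta_f_abs > INCREASE_THRESHOLD:
        return 'first'
    return 'second'          # constant or decreasing |delta_f|

def rear_wheel_angle_second_section(front_wheel_angle):
    """Rear wheel steering angle in the second steering section (similar waveform)."""
    return CONSTANT_GAIN * front_wheel_angle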
FIG.10is a diagram illustrating the temporal change of the absolute value δf of the front wheel steering angle, in which the temporal average value of the amount of change in the rear wheel steering angle in the first steering section and the second steering section are illustrated. As illustrated inFIG.10, in order to ensure non-deterioration of responsiveness at the initial stage of turning (steering) and steering stability of the vehicle100, the temporal average value of the amount of change in the rear wheel steering angle δr in the first steering section for any front wheel steering pattern is needed to be smaller than the temporal average value of the amount of change in the rear wheel steering angle δr in the second steering section. Here, the temporal average value of the steering angle amount of change is the amount obtained by dividing the value obtained by time integration by the time. The rear wheel steering angle calculation unit19transmits the rear wheel steering angle calculated in Step S14inFIG.4to the rear wheel steering angle control unit15as a command value via the communication line, whereby the process of the steering control device1ends. As described above, according to this embodiment, it is possible to provide a steering control device and a steering control method that can suppress the deterioration of the turning responsiveness at the initial stage of steering that may occur in the four-wheel steering vehicle, and improve the steering stability when the front and rear wheels of the four-wheel steering vehicle are controlled in the same phase. Second Embodiment FIG.11is a block diagram of a steering control device1aof a second embodiment according to another embodiment of the invention. In the above-described first embodiment, the steering control device1is configured by the reference rear wheel angle calculation unit17, the steering angular acceleration calculation unit18, and the rear wheel steering angle calculation unit19, but is different from this embodiment in that the steering control device1ais configured by a steering angular speed calculation unit18aand a rear wheel steering angle calculation unit19a. The other configurations of the vehicle100are the same as those of the first embodiment. As illustrated inFIG.11, the steering control device1aaccording to this embodiment is configured by the steering angular speed calculation unit18aand the rear wheel steering angle calculation unit19a. The steering angular speed calculation unit18aand the rear wheel steering angle calculation unit19aare, for example, realized by a processor such as a CPU (not illustrated), a ROM for storing various programs, a RAM for temporarily storing data generated in the process of calculation, and a storage device such as an external storage device. The processor such as a CPU reads out and executes various programs stored in the ROM, and stores the calculation result that is an execution result in the RAM or an external storage device. Although the explanation is divided into functional blocks for the sake of clarity, the steering angular speed calculation unit18aand the rear wheel steering angle calculation unit19amay be used as one calculation unit. The steering angular speed calculation unit18aforming the steering control device1areceives the front wheel steering angle detected by the steering sensor5and the vehicle speed included in the motion state information of the vehicle100detected by the vehicle state sensor2. 
Then, the steering angular speed calculation unit 18a calculates the front wheel steering angular speed based on the received front wheel steering angle, and outputs the calculated front wheel steering angular speed to the rear wheel steering angle calculation unit 19a. The steering angular speed calculation unit 18a forming the steering control device 1a receives the front wheel steering angle detected by the steering sensor 5 and the vehicle speed included in the motion state information of the vehicle 100 detected by the vehicle state sensor 2. Then, the steering angular speed calculation unit 18a calculates the front wheel steering angular speed ω based on the received front wheel steering angle and the vehicle speed, and outputs the calculated front wheel steering angular speed ω to the rear wheel steering angle calculation unit 19a described later. The rear wheel steering angle calculation unit 19a forming the steering control device 1a determines the rear wheel steering amount based on the front wheel steering angular speed ω input from the steering angular speed calculation unit 18a. In other words, the rear wheel steering angle calculation unit 19a calculates the rear wheel steering angle based on the front wheel steering angular speed ω. Next, a detailed processing procedure of the steering control device 1a according to this embodiment will be described with reference to FIGS. 12 and 13. FIG. 12 is a flowchart illustrating an operation flow of the steering control device according to this embodiment. As illustrated in FIG. 12, in Step S21, the steering angular speed calculation unit 18a forming the steering control device 1a calculates the front wheel steering angular speed ω based on the front wheel steering angle detected by the steering sensor 5 and the vehicle speed included in the motion state information of the vehicle 100 detected by the vehicle state sensor 2. In Step S22, the steering angular speed calculation unit 18a compares the positive/negative value of the front wheel steering angle detected by the steering sensor 5 with the positive/negative value of the calculated front wheel steering angular speed ω, and if the positive/negative values are different, the process proceeds to Step S23. On the other hand, as a result of the comparison, if the positive/negative values of the front wheel steering angle and the calculated front wheel steering angular speed ω match, the process ends. FIG. 13 is a diagram illustrating temporal changes of the front wheel steering angle, the front wheel steering angular speed, and the moment. As illustrated in FIG. 13, the rear wheels are controlled only in the region within the dotted line. In Step S23, the rear wheel steering angle calculation unit 19a forming the steering control device 1a multiplies the front wheel steering angular speed ω by a proportional gain, calculates a value with a primary delay, and uses that value as an additional moment control amount. In Step S24, the rear wheel steering angle calculation unit 19a calculates a required rear wheel steering amount (rear wheel steering angle) based on the additional moment control amount obtained in Step S23, and ends the process. Similar to the first embodiment described above, this embodiment is premised on controlling the rear wheels 7 in phase with the front wheels 6. By performing such rear wheel steering, the restoring yaw moment acts on the vehicle 100 when the steering is returned, and the vehicle responsiveness and stability during turning (steering) are improved.
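One possible reading of Steps S21 to S24 of FIG. 12 is sketched below in Python. The proportional gain, the time constant of the primary (first-order) delay, and the factor converting the additional moment control amount into a rear wheel steering amount are assumed values introduced solely for illustration and are not disclosed in the specification.

# Sketch of Steps S21-S24 of the second embodiment: an additional restoring-moment
# control amount is produced only while the signs of the front wheel steering angle
# and the front wheel steering angular speed differ (i.e., the steering is being returned).
# Gain K_P, time constant TAU, and the moment-to-angle factor are assumed values.
K_P = 0.8                 # proportional gain (assumed)
TAU = 0.15                # first-order lag time constant [s] (assumed)
MOMENT_TO_ANGLE = 0.02    # conversion from additional moment to rear wheel angle (assumed)

class SecondEmbodimentController:
    def __init__(self, dt):
        self.dt = dt
        self.lagged = 0.0   # state of the first-order (primary) delay

    def step(self, front_angle, front_angular_speed):
        # Step S22: act only when the signs of angle and angular speed differ.
        if front_angle * front_angular_speed >= 0.0:
            return 0.0
        # Step S23: proportional gain followed by a first-order (primary) delay.
        target = K_P * front_angular_speed
        self.lagged += (target - self.lagged) * self.dt / TAU
        additional_moment = self.lagged
        # Step S24: required rear wheel steering amount from the additional moment.
        return MOMENT_TO_ANGLE * additional_moment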
Instead of the front wheel steering angle and the front wheel steering angular speed ω, the motion state information of the vehicle 100 acquired from the vehicle state sensor 2 may be used. For example, when the lateral acceleration and the lateral acceleration increasing rate of the vehicle 100 are used and the positive/negative values of the two are different, the restoring yaw moment becomes a value obtained by applying a proportional gain and a primary delay to the lateral acceleration increasing rate. As described above, according to this embodiment, in addition to the effect of the first embodiment, the restoring yaw moment acts on the vehicle when the steering is returned, and it is possible to improve the vehicle responsiveness and stability during turning (steering). Further, the invention is not limited to the embodiments described above, but includes various modifications. For example, the above embodiments have been described in detail for easy understanding of the invention, and the invention is not necessarily limited to having all the configurations described. In addition, some of the configurations of a certain embodiment may be replaced with the configurations of the other embodiments, and the configurations of the other embodiments may be added to the configurations of the subject embodiment.

REFERENCE SIGNS LIST

1, 1a steering control device
2 vehicle state sensor
4 steering wheel
5 steering sensor
6 front wheel
7 rear wheel
12 front wheel steering angle control unit
13 front wheel power steering control device
14 monitoring sensor
15 rear wheel steering angle control unit
16 rear wheel power steering device
17 reference rear wheel angle calculation unit
18 steering angular acceleration calculation unit
18a steering angular speed calculation unit
19, 19a rear wheel steering angle calculation unit
25 rack shaft
26 actuator
27 rack shaft
28 actuator
100 vehicle | 28,973
11858575 | DETAILED DESCRIPTION In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. Other embodiments may be used and/or other changes may be made without departing from the spirit or scope of the disclosure. FIG. 1A is a front elevation view of a bicycle rack 100, according to an embodiment. FIG. 1B is a side elevation view of the bicycle rack 100 of FIG. 1A, according to an embodiment. FIG. 1C is a top (plan) view of the bicycle rack 100 of FIGS. 1A-1B, according to an embodiment. According to embodiments, and referring to FIGS. 1A-1C, the bicycle rack 100 includes an upper pivot bar 104 configured to be fastened to a vertical structure 106 (such as a wall) in a horizontal orientation, the upper pivot bar 104 defining one or more upper mounting points 108a, 108b, 108c, etc. A lower pivot bar 110 may be configured to be fastened to the vertical structure 106 parallel to and below the upper pivot bar 104, the lower pivot bar 110 defining one or more lower mounting points 112a, 112b, 112c, etc. The upper and lower mounting points and relationships therebetween may be visualized with reference to FIG. 4. FIG. 4 is an oblique view 400 of the bicycle rack shown in earlier FIGS. 1A-1C, 2A-2C, and 3A-3B, according to an embodiment. The one or more lower mounting points 112a, 112b, 112c may be configured for vertical alignment with respective ones of the one or more upper mounting points 108a, 108b, 108c. FIG. 2A is a front elevation view 200 of a bicycle rack holding a bicycle 202, according to an embodiment. The bicycle 202 may be loaded into the bicycle support structure 114 with the front wheel 218 of the bicycle vertical. This allows the user to easily mount the bicycle 202 because the bicycle may be lifted by the handlebars and placed directly in the wheel hoop 116 of the bicycle support structure 114. FIG. 2B is a side elevation view 200 of the bicycle rack holding the bicycle 202 of FIG. 2A, according to an embodiment. FIG. 2C is a front elevation view 201 of the bicycle rack of FIGS. 1A-1C and 2A-2B holding a plurality of bicycles 202, according to an embodiment. By inspection of FIGS. 2A-2C, one may see that while FIG. 2A shows a bicycle 202 with its front wheel 218 straight, FIGS. 2B and 2C show the bicycle(s) 202 hanging from the bicycle support structure 114 with the front wheels 218 leaned to the right, as seen from the perspective of FIG. 2B. The bicycle 202 may be easily lifted into the bicycle support structure 114 with its front wheel 218 aligned vertically, followed by turning the handlebar to lean the front wheel 218 to the right after the weight of the bicycle 202 is supported by the bicycle support structure 114, as shown in FIGS. 2B and 2C. As may be appreciated by inspection of FIG. 2C, leaning the front wheels 218 of neighboring bicycles 202 to one side allows neighboring bicycles 202 to be spaced more closely on the bicycle rack than if the front wheels 218 and handlebars remained in a "straight ahead" orientation. This may be used to increase the capacity of the bicycle rack. In other words, in an embodiment, a user may load a bicycle 202 into the bicycle support structure 114, and specifically the wheel hoop 116, with the front wheel 218 of the bicycle 202 in a vertical position, as shown in FIG. 2A. The user may subsequently allow the front wheel 218 of the bicycle 202 to rotate to a bicycle storage position as illustrated in FIG. 2C.
As described and shown elsewhere herein, the wheel hoop116may include first and second hoop segments139,141that define between them an open segment140sized to allow the bicycle front wheel218to lean in a way that causes the handlebars of the bicycle202to rotate away from the handlebars of a neighboring bicycle202. In another embodiment, the wheel hoop116is formed as a closed shape or substantially closed shape including an outward bulge shaped to allow a bicycle front wheel and handlebars to rotate away from vertical without interfering with bicycle spokes, axle, or front fork (not shown). In an embodiment, the bicycle support structure114is configured to be coupled to a vehicle-mounted sports rack for carrying bicycles with the vehicle. FIG.5is a side elevation view500of a bicycle support structure coupled to upper and lower pivot bars, according to an embodiment. The bicycle rack includes a bicycle support structure114configured to be mounted to the upper pivot bar104and the lower pivot bar110and to support a bicycle202to hang from the bicycle support structure (e.g., seeFIGS.2A-2C). The bicycle support structure114may include a wheel hoop116for receiving a bicycle front wheel218. An upper support arm120may be coupled to the wheel hoop116at a first location122, the upper support arm120terminating in an upper coupler124configured to couple to an upper mounting point108on the upper pivot bar104. A lower support arm126may be coupled to the wheel hoop116at a second location128different from the first location122, the lower support arm126terminating in a lower coupler130configured to couple to a lower mounting point112on the lower pivot bar110. An L-bracket132may be coupled to the lower support arm126and to the wheel hoop116at a third location134different from the first and second locations122,128. The upper support arm120, L-bracket132, and lower support arm126and their respective points of attachment122,134,128may define a plane in which the wheel hoop116is supported relative to the upper and lower pivot bars104,110and provide for stable support of a bicycle202hung from the wheel hoop116. In an embodiment, the upper support arm120includes the upper coupler124coupled directly to the wheel hoop116at the first location122. In other words, the upper support arm120may be vestigial. The upper support arm120may consist essentially of the upper coupler124directly welded or otherwise coupled to the first location122on the wheel hoop116. In an embodiment, the upper support arm120may be continuous with the wheel hoop116such that the upper coupler124(and the upper support arm120, if more than vestigial) may be an end of a rod formed as at least a portion of the wheel hoop116. According to an embodiment, the lower support arm includes a first portion502continuous with the wheel hoop116and bent at an angle relative to a plane of a wheel-receiving portion of the wheel hoop116. According to an embodiment, the lower support arm126includes a first portion502coupled to the wheel hoop116and a second portion504coupled to a lower end of the first portion502, such that the lower coupler130is formed on or coupled to a lower end of the second portion506of the lower support arm126. The lower support arm may include an intermediate coupler506, such as a sleeve with set screws, configured to couple the first portion502of the lower support arm126to the second portion504of the lower support arm126. A lower end of the L-bracket132may be coupled to the intermediate coupler506(arrangement not shown). 
In another embodiment, the lower end of the L-bracket132is welded or otherwise coupled to the first portion502of the lower support arm126. In an embodiment, the wheel hoop116includes first and second hoop segments139,141that define an open segment140(seeFIG.1C) selected to allow a bicycle front wheel and handlebars to lean or rotate away from vertical without interfering with bicycle spokes, axle, or front fork. It will be recognized that a bicycle wheel placed within the wheel hoop116and resting against the L-bracket132may be tilted or rotated to the right, as viewed inFIG.1C(see alsoFIG.2C). In this position, the first and second hoop segments139,141contact and support the rim of the wheel, while no part of the wheel hoop116makes contact with the wheel spokes or axle, etc. In another embodiment, the wheel hoop116forms a closed or substantially closed shape including an outward bulge shaped to allow a bicycle front wheel and handlebars to lean or rotate away from vertical without interfering with bicycle spokes, axle, or front fork (arrangement not shown). In an embodiment, a user may load a bicycle202into the bicycle support structure114, and specifically the wheel hoop116, with the front wheel218of the bicycle202in a vertical position, as shown inFIG.2A. The user may subsequently allow the front wheel218of the bicycle202to rotate to a bicycle storage position as illustrated inFIG.2C. In an embodiment, the bicycle support structure114is configured to be at least occasionally coupled to a vehicle-mounted sports rack for carrying bicycles with the vehicle. The bicycle rack may include one or more rear tire stops138configured for fastening to the vertical structure106, each rear tire stop138being positioned to maintain a stable vertical orientation of a supported bicycle202. FIG.3Ais a top (plan) view of a bicycle rack in a position300with bicycle support structures pivoted away from perpendicular to upper and lower pivot bars, according to an embodiment.FIG.3Bis a front elevation view of the bicycle rack in the position300ofFIG.3A, according to an embodiment.FIG.4is an oblique view of the bicycle rack shown inFIGS.1A-1C,2A-2C, and3A-3B, according to an embodiment. The upper and lower couplers124,130cooperate with respective upper and lower mounting points108,112to enable pivoting of the bicycle support structure114relative to the upper and lower pivot bars104,110. The enabled pivoting of the bicycle support structure114relative to the upper and lower pivot bars104,110allows pivoting of a bicycle to provide access to a side of the bicycle even when other closely spaced bicycles are supported by the bicycle rack100. The pivoting motion of the bicycle support structure may allow bicycle support structure114to be stored closer to the vertical structure114than when the bicycle support structure114is disposed perpendicular to the upper and lower pivot bars104,110. The upper mounting points108a,108b,108cmay include apertures defined by the upper pivot bar104. The upper coupler124may include an upper flange508, and an upper rod510configured to slide through a selected one of the upper mounting points108a,108b,108csuch that the upper flange rests against the upper pivot bar104. In an embodiment the upper rod510is threaded. The bicycle rack may further include a friction nut136configured to be turned onto the threaded upper rod510and to exert a compression force on the upper pivot bar104, between the friction nut and the upper flange508. 
The friction nut136-exerted compression force may apply a damping of pivoting of the bicycle support structure114relative to the upper pivot bar104. A degree to which the pivoting is damped may be controlled by adjustment of a position of the friction nut136on the upper rod510; i.e., damping may be increased or decreased by tightening or loosening the friction nut. Similarly, the lower mounting points112a,112b,112cmay include apertures defined by the lower pivot bar110. The lower coupler130may include a lower flange512, and a lower rod514configured to slide through a selected one of the lower mounting points112a,112b,112c. In an embodiment, the lower rod514is threaded, and the bicycle rack may further include a friction nut136configured to be turned onto the threaded lower rod514and to exert a compression force on the lower pivot bar110, between the friction nut and the lower flange512. The friction nut136-exerted compression force may similarly control a damping of pivoting of the bicycle support structure114relative to the lower pivot bar110. According to embodiments, the upper and lower couplers124,130include respective upper and lower flanges508,512, threaded upper and lower rods510,514, and friction nuts136, provided for tightening against the upper pivot bar104and the lower pivot bar110, respectively. According to an embodiment, the upper and lower flanges508,512may be, for example, welded to the upper and lower couplers124,130, respectively. According to another embodiment, the upper and lower rods510,514have reduced diameters, relative to diameters of the upper and lower support arms120,126, and the upper and lower flanges508,512include washers positioned over the upper and lower rods against shoulders formed where the upper and lower support arms transition to the smaller diameters of the upper and lower rods. According to an embodiment, the wheel hoop116defines an open segment140(seeFIG.1C) selected to allow a bicycle front wheel and handlebars to lean or rotate away from vertical without interfering with bicycle spokes, axle, or front fork.FIGS.2B and2Cillustrate the bicycle front wheel218and handlebars in the leaned or rotated position. The open segment140of the wheel hoop116and corresponding enabled rotation of bicycle front wheel and handlebars may allow bicycles to be suspended more closely together than if the bicycle front wheels were required to be maintained in a vertical orientation, especially as seen inFIG.2C. As indicated above, a bulge (not shown) in a wheel hoop116may provide a function similar to the open segment140. The bicycle rack may further include one or more rear tire stops138configured for fastening to the vertical structure, each rear tire stop being positioned to maintain a stable vertical orientation of a supported bicycle202. According to an embodiment, the plurality of upper mounting points108a,108b,108cand lower mounting points112a,112b,112care spaced more closely than a horizontal extent of the wheel hoop116, such that not all mounting points108a,108b,108c,112a,112b,112cmay be simultaneously populated with respective upper and lower couplers124,130of different bicycle support structures114. This over-provisioning of upper and lower mounting points108a,108b,108c,112a,112b,112cmay be useful for adapting the bicycle rack to bicycles having differing dimensions and also for maximizing storage capacity of a given bicycle rack. 
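One way to picture how this over-provisioning can be exploited, discussed further in the next paragraph, is the small placement sketch below; the mounting-point pitch, the handlebar widths, and the helper function are illustrative assumptions rather than dimensions from the disclosure.

# Sketch: choose mounting points for bicycles of different handlebar widths so that
# neighbouring bicycles do not interfere, skipping as few mounting points as possible.
# Pitch and width values are illustrative assumptions, not values from the text.
MOUNT_PITCH_MM = 100   # assumed horizontal spacing between adjacent mounting points

def assign_mounting_points(handlebar_widths_mm, pitch_mm=MOUNT_PITCH_MM):
    """Return the mounting-point index used for each bicycle, placed left to right."""
    indices = []
    for i, width in enumerate(handlebar_widths_mm):
        if not indices:
            indices.append(0)              # first bicycle takes the first mounting point
            continue
        # Neighbouring bicycles need at least half of each handlebar width between them.
        needed_mm = handlebar_widths_mm[i - 1] / 2.0 + width / 2.0
        skip = int(-(-needed_mm // pitch_mm))   # ceiling division: points to move right
        indices.append(indices[-1] + skip)
    return indices

print(assign_mounting_points([600, 450, 600]))   # e.g. adult, child, adult -> [0, 6, 12]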
For example, the plurality of upper and lower mounting points108a,108b,108c,112a,112b,112cmay be spaced relatively close together to allow the selection of a spacing between pairs of bicycle support structures114, depending upon which of the upper and lower mounting points are occupied by bicycle support structures, and the number of unoccupied mounting points between each pair of adjacent support structures. Thus, a user may accommodate a plurality of different suspended bicycle horizontal extents at closest spacing by positioning the bicycle support structures with different numbers of unoccupied upper and lower mounting points108a,108b,108c,112a,112b,112cbetween mounting points occupied by adjacent bicycle support structures. This enables the simultaneous storage of, e.g., children's bicycles and adults' bicycles with sufficient space between each for access, but with a minimum of wasted or unnecessary space. In other words, a plurality of bicycle support structures114may be coupled to particular ones of the plurality of upper and lower apertures108a,108b,108c,112a,112b,112cto provide a selected spacing between each pair of neighboring bicycles202. While various aspects and embodiments have been disclosed herein, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims. | 15,373 |
11858576 | The drawings referred to in this description should be understood as not being drawn to scale except if specifically noted. DESCRIPTION OF EMBODIMENTS The detailed description set forth below in connection with the appended drawings is intended as a description of various embodiments of the present invention and is not intended to represent the only embodiments in which the present invention is to be practiced. Each embodiment described in this disclosure is provided merely as an example or illustration of the present invention, and should not necessarily be construed as preferred or advantageous over other embodiments. In some instances, well known methods, procedures, and objects have not been described in detail as not to unnecessarily obscure aspects of the present disclosure. Terminology In the following discussion, a number of terms and directional language is utilized. Although the technology described herein is useful on a number of vehicles that have an adjustable saddle, a bicycle will be used to provide guidance for the terms and directional language. The term “seat tube” refers to a portion of a frame to which a dropper seatpost is attached. In general, a bicycle has a front (e.g., the general location of the handlebars and the front wheel) and a rear (e.g., the general location of the rear wheel). For purposes of the discussion the front and rear of the bicycle can be considered to be in a first plane. A second plane that is perpendicular to the first plane would be similar to an exemplary flat plane of the ground upon which the bicycle is ridden. In the following discussion, the pitch of the saddle refers to the exemplary horizontal plane drawn from the front of the saddle to the back of the saddle. For example, if the saddle is mounted to the dropper seatpost head with a zero-degree pitch, the front of the saddle and the back of the saddle would rudimentarily be in a horizontal plane having a parallel orientation with the exemplary flat plane of the ground as described above. An upward pitch of the saddle would occur when the saddle rotates about the dropper seatpost head such that the front of the saddle is higher (e.g., further from the ground plane) while the rear of the saddle is lower (e.g., closer to the ground plane). In an upward pitch scenario, the saddle plane would no longer be parallel with the flat plane of the ground but would instead intersect the ground plane at some location aft of the dropper seatpost head. In contrast, a downward pitch of the saddle would occur when the saddle rotates about the dropper seatpost head such that the front of the saddle is lower (e.g., closer to the ground plane) while the rear of the saddle is higher (e.g., further from the ground plane). In a downward pitch scenario, the saddle plane would no longer be parallel with the flat plane of the ground but would instead intersect the ground plane at some location forward of the dropper seatpost head. Overview The following discussion provides a novel solution for a dropper seatpost head that includes the ability to allow “infinite” (un-indexed) adjustment of the saddle's pitch. Further, embodiments reduce the dead length of the dropper seatpost while maintaining a consistent separation from a base of a saddle and the upper saddle rail clamping portion430of saddle clamp assembly400. The following discussion will describe conventional seatposts and limitations thereof. 
The discussion then turns to embodiments: the structure and function of the vehicle assembly along with a dropper seatpost having a user interface attached thereto, and a number of fastener types and orientations that are configurable for reducing the dead length of the dropper seatpost while also allowing adjustment to the pitch of the saddle. Embodiments described herein minimize or remove any fastener incursion that would reduce the standoff distance, between the upper saddle rail clamping portion and the bottom of the saddle, to less than the flex range of the saddle. Referring now toFIG.1, a bicycle is shown in accordance with an embodiment. In general, the bicycle includes pedals, wheels, a chain or other drive mechanism, brakes, an optional suspension, a saddle10(or bicycle seat), a handlebars200, a dropper seatpost300, and a bicycle frame119. In one embodiment, dropper seatpost300is a tube that extends upwards from the bicycle frame119to the saddle10. The amount that dropper seatpost300extends out of the frame can usually be adjusted. Dropper seatpost300may be made of various materials, such as, but not limited to being, the following: steel, aluminum, titanium, carbon fiber, and aluminum wrapped in carbon fiber. FIG.2depicts a handlebar200with a set of control levers205coupled therewith, according to an embodiment. The set of control levers205is a type of user interface with which the user employs for communicating dropper seatpost height instructions to the dropper seatpost. Of note, the set of control levers205is used herein to describe various embodiments. However, it should be understood that the term, “user interface” may be substituted for the set of control levers205, in various embodiments. It should also be appreciated that the user interface may be at least, but not limited to, any of the following components capable of communicating with the dropper seatpost: wireless device, power meter, heart rate monitor, voice activation device, GPS device having stored map, graphical user interface, button, dial, smart phone (e.g., iPhone™) and lever). The set of control levers205includes at least one control lever, such as the first control lever205A and may include a second control lever205B, it should be understood that in an embodiment, there may be only a single control lever, or in an embodiment there may be a set of control levers. For simplicity,205will be referred to as a set of control levers. The set of control levers205are mechanically and/or electronically connected (via wire/cable and/or wirelessly) to various components within the dropper seatpost. When the cyclist moves the set of control levers205, via the connections between the set of control levers205and the dropper seatpost, he is causing a cam within the dropper seatpost to shift positions. The shifting cam, in turn, moves against valves, causing the valves within a valve system to open and/or close. This opening and/or closing of the valves control the fluid movement through and surrounding the valve system. FIG.3is a perspective view of a dropper seatpost300coupled with a saddle clamp assembly400. In one embodiment, the dropper seatpost300includes an upper post310and a lower post315within which the upper post310telescopically slides upon actuation of a handlebar lever, such as the set of control levers205shown inFIG.2. In one embodiment, the dropper seatpost300includes an air valve333which is used to adjust the air pressure within dropper seatpost300. 
In one embodiment, saddle clamp assembly400is a two clamp dropper seatpost having two fasteners to maintain a clamping force between the upper clamp and lower clamp to hold onto saddle rails110(shown inFIG.6). In addition, the two fasteners are used to adjust the pitch of the saddle10, e.g., nose-up or nose-down. Further, saddle clamp assembly400is able to accommodate different seat-tube angles, different saddles, and different saddle pitch angles. As stated herein, the saddle pitch adjustment is important for personal rider preferences, different seat-tube angles, different saddle designs, and the like. In one embodiment, dropper seatpost300and at least part of saddle clamp assembly400are formed as a single component. In another embodiment, dropper seatpost300and saddle clamp assembly400consist of two or more distinct and/or different components. Further, dropper seatpost300and saddle clamp assembly400are formed of the same materials, formed of different materials, etc. The materials include a group of materials such as, but not limited to, a metal, a composite, a combination of both metal and composite parts within each part, and the like. The metal options include, but are not limited to, steel, aluminum, titanium, and the like. The composite materials include carbon-based composites, plastics, and the like. For example, an aluminum saddle clamp assembly400and an aluminum dropper seatpost300, a titanium saddle clamp assembly400and a carbon dropper seatpost300, a carbon saddle clamp assembly400and a titanium dropper seatpost300, a carbon saddle clamp assembly400and a steel dropper seatpost300, etc. Similarly, there can be other materials utilized such as carbon/metal mix (amalgamation, etc.) For example, saddle clamp assembly400consist of a carbon body with metal inserts, etc. Additional details regarding the operation of a dropper seatpost assembly is found in U.S. Pat. No. 9,422,018 entitled “Seatpost” which is assigned to the assignee of the present application, and which is incorporated herein by reference in its entirety. FIG.4is a perspective view of a plurality of different positions for dropper seatpost300shown in accordance with one embodiment. InFIG.4, dropper seatpost342is shown in full extension, dropper seatpost343is shown in partial extension, and dropper seatpost344is shown in full compression. In one embodiment, the dropper seatpost can be remotely shortened (lowered) using a control lever positioned on the bicycle's handlebar (as shown and described inFIG.2). On technical sections of a trail, a rider may cause the dropper seatpost to lower by triggering the actuating lever on the handlebar while the rider also depresses the saddle. Typically, the actuating lever of a dropper seatpost will open a valve or latch in the dropper seatpost so that the dropper seatpost can move up or down. In one embodiment, dropper seatposts have an air spring (mechanical spring, or the like) and use the rider's weight to move them down, and will only raise themselves when the valve or latch internal to the dropper seatpost is opened (via handlebar remote). In one embodiment, dropper seatposts are “microadjustable”. There are two types of microadjustable dropper seatposts: (1) dropper seatposts that can be continuously adjusted to an infinite number of positions; and (2) dropper seatposts that can only be adjusted to a predetermined (preprogrammed) number of positions. 
For example, with regard to dropper seatpost that can only be adjusted to a preprogrammed number of positions, the dropper seatpost adjustment positions may be that of the following three positions: up; middle; and down. Generally, the rider prefers that the dropper seatpost be in the “up” position during a ride over flat terrain, a road surface, or pedaling up small hills on a road surface. The rider generally prefers that the dropper seatpost be in the “middle” position when the rider still wants a small amount of power through pedaling but yet would still like the saddle to be at least partially out of the way. This situation may occur while riding down a gentle hill or when the rider anticipates having to climb a hill immediately after a short decent. The rider generally prefers that the dropper seatpost be in the “down” position when the rider is descending a steep hillside. In this situation, the rider would be positioned rearward of the saddle and essentially be in a mostly standing position. By doing such, the rider changes his center of gravity to be rearward of the bicycle and lower, thereby accomplishing a more stable and safer riding position. Additionally, since the saddle is lowered, it is not positioned in the riders' chest area, contributing to a safer ride. Some mountain bikers prefer that the infinitely adjustable dropper seatpost be installed on their mountain bikes, enabling them to adjust their saddle at any given moment to any given terrain detail. FIG.5is a perspective comparison view showing the difference in dead length between a conventional dropper seatpost clamp setup27and the reduced dead length dropper seatpost with saddle clamp assembly400, in accordance with an embodiment. As discussed inFIGS.1-4, dropper seatpost300is a height adjustable dropper seatpost that can be raised or lowered based on a user selection at a handlebar (or other location). In general, the overall manufacturing goal is to build a dropper seatpost300having saddle clamp assembly400with the most stroke for the lowest effective length. As shown inFIG.5, the upper post310of the dropper seatpost goes into the lower post315of the dropper seatpost. The stroke555is the exposed amount of the upper post310of the dropper seatpost300. In general, the stroke555is the distance from the line506indicative of the top of lower post315to the line502indicative of the bottom of the saddle clamp. The effective length (L1and L2respectively) is the length between the center axis (501and501A respectively) of the saddle rail and the bottom of whatever the largest diameter portion of the lower portion that stops in the seat tube (referred to herein as “seat tube collar525”) and shown by line503. During installation, the seat tube collar525is the lowest portion of the dropper seatpost300that is visible after it is installed into the bike frame119seat tube. In one embodiment, the working length on a dropper seatpost is identified by the total travel distance or stroke555. The dead length is the effective length of the dropper seatpost300in its dropped (or fully compressed) position. In other words, the distance between the center axis of the saddle rails and the bottom of the seat tube collar525when stroke555is reduced to effectively 0 mm in length. In one embodiment, the goal is to minimize the dead length. 
In the conventional dropper seatpost clamp setup27, the dead length is the distance between the center axis501A of the saddle rails and the bottom of the seat tube collar525(identified by line503), which is the effective length L1minus the stroke555. In contrast, in the dropper seatpost300having saddle clamp assembly400, the dead length is the distance between the center axis501of the saddle rails and bottom of the seat tube collar525(identified by line503), which is the effective length L2minus the stroke555. As can be seen inFIG.5, the difference in the two dead lengths is distance444. That is, the dead length of dropper seatpost300having saddle clamp assembly400is distance444less than the dead length of conventional dropper seatpost clamp setup27. As shown in the comparison ofFIG.5, embodiments described herein, reduce the distance from the center axis (501and501A respectively) of the saddle rails to the seat tube collar525, e.g., reduce the dead length, by altering the shape of the lower saddle rail clamping portion420and the upper saddle rail clamping portion430. In one embodiment, the dead length is additionally reduced by the change in the shape and/or orientation of clamping fasteners for the saddle clamp assembly400. For example, a rider wants to use a dropper seatpost300having a 150 mm stroke555. However, when the dropper seatpost300is at its most dropped position, there is still an amount of dead length (e.g., 20 mm). Further, the dead length is added to the travel length which means that at its fully extended position the dropper seatpost will be 170 mm above the seat tube on the bike frame119. In addition, there is also the size of the saddle10from the saddle rails110to the top of the saddle padded portion12(e.g., 30 mm). Thus, a rider wanting to use a dropper seat having a 150 mm stroke555may have a top of the saddle at 200 mm (150 mm travel+20 mm dead length+30 mm saddle height) above the seat tube. In some cases, this total distance of 200 mm will cause the rider to no longer be able to reach the pedals or be in a non-desired riding configuration. As such, the rider would have to use a shorter dropper seatpost having only a 100 mm total stroke. As can be seen in the comparison provided inFIG.5, the difference in dead length distance between the conventional dropper seatpost clamp setup27and saddle clamp assembly400is reduced by measurement444. That is, in conventional dropper seatpost clamp setup27the center axis501A of the saddle rail clamp is at a first distance L1. In contrast, the center axis501of saddle clamp assembly400(with the same stroke555) is at a lesser distance L2. This occurs due to the modification of the shape of lower saddle rail clamping portion420and upper saddle rail clamping portion430of the saddle clamp assembly400. In one embodiment, the lower saddle rail clamping portion420has been modified to “droop” down so that the distance between the lower saddle rail clamping portion420(when clamped) to the seat tube collar525(e.g., the dead length) is reduced. In other words, the dead length is reduced due to the altering of the shape of the lower saddle rail clamping portion420and upper saddle rail clamping portion430, while the stroke555, dropper seatpost300shape, and dropper seatpost300internals remain unchanged. The unchanged aspects would include one or both of fasteners805aand805bstill facing from the bottom to the top (as shown inFIGS.8A and8B). 
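The relationship between stroke, dead length, and the resulting saddle height discussed above can be restated as a short calculation. The sketch below simply repeats the 150 mm, 20 mm, and 30 mm figures already used in the example; the function names are introduced here only for illustration.

# Sketch of the dead-length arithmetic discussed above: the effective length is the
# stroke plus the dead length (saddle rail axis down to the seat tube collar, line 503),
# and the top-of-saddle height adds the saddle height measured from the rails.
def effective_length(stroke_mm, dead_length_mm):
    return stroke_mm + dead_length_mm

def top_of_saddle_above_seat_tube(stroke_mm, dead_length_mm, saddle_height_mm):
    return effective_length(stroke_mm, dead_length_mm) + saddle_height_mm

print(top_of_saddle_above_seat_tube(150, 20, 30))   # 200 mm, the figure in the example
# Shaving distance 444 off the dead length lowers this total by the same amount
# without giving up any stroke.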
FIG.6illustrates an embodiment of a saddle10including a padded portion12for sitting, a first saddle rail110A, and a second saddle rail110b(collectively “saddle rails110”). The saddle10is for any vehicle that uses a saddle configuration such as, but not limited to, a bicycle, unicycle, tricycle, boat, or any type of vehicle that uses a saddle configuration. However, for purposes of clarity, the following discussion will utilize a bicycle for explanatory purposes. FIG.7is a front perspective view of a saddle clamp assembly400for coupling saddle10ofFIG.6to the bicycle frame119. Saddle clamp assembly400ofFIG.7is similar to the dropper seatpost and saddle clamp assembly400as discussion ofFIG.3. The saddle clamp assembly400is, for example, configured for coupling with saddle rails110shown inFIG.6. FIG.8Ais a side view of the saddle clamp assembly having two upward adjustable fasteners805aand805b, in accordance with an embodiment. For example, the upward adjustable fasteners could be, but is not limited to, two fasteners805aand805b, two wingnuts screws810A, a combination of a fastener805band a wingnut screw810A, and the like. FIG.8Bis a side view of the saddle clamp assembly having one upward adjustable fastener e.g., wingnut screw810A and one downward adjustable fastener810b, in accordance with an embodiment. In one embodiment, the fasteners can be of any sort that would work in the situation. For example, the upward adjustable fasteners could be, but is not limited to a fastener805a, a wingnut screws810A, a downward a combination of bolts805band wingnuts810A. FIG.8Cis a side view of the saddle clamp assembly having two downward adjustable fasteners815aand815b, in accordance with an embodiment. FIG.8Dis a side cutaway view of the saddle clamp assembly ofFIG.8Ahaving two downward adjustable fasteners815aand815b, in accordance with an embodiment. As shown inFIG.8B, the two fasteners817aand817bhave their heads in milled or otherwise manufactured small apertures466aand466bformed in upper saddle rail clamping portion430in order to keep any incursion of the fasteners817aand/or817bfrom moving above the plane established by upper saddle rail clamping portion430. In one embodiment,FIG.8Dalso includes a snap ring grove810which indicates the top of the hydraulic system that supports the dropper seatpost at the various travel locations. This allows the full length of the dropper post travel (e.g., the full 150 mm, etc.) FIG.9is a side view of the saddle clamp assembly400coupled with a saddle10, in accordance with an embodiment. Saddle clamp assembly400includes saddle rails110, lower saddle rail clamping portion420, upper saddle rail clamping portion430, and fasteners815aand815b. Saddle10includes saddle bottom912. As shown inFIG.9, saddle bottom912has a build in flex913with a max flex range911indicated by broken line. In one embodiment, although the dead length is reduced by the modification of the lower saddle rail clamping portion420and the upper saddle rail clamping portion430, e.g., the droop; the lowering of the lower saddle rail clamping portion420and the upper saddle rail clamping portion430will also reduce the distance between the bottom of the saddle and the top of the saddle rail clamp assembly (hereinafter “saddle bottom-to-dropper seatpost standoff distance920”). In one embodiment, the saddle bottom-to-dropper seatpost standoff distance920changes with the pitch setting when the bolt configuration remains in an upward direction (such as discussed inFIGS.8A and8B. 
In such an embodiment, the range of saddle flex913(e.g., an amount of flex built into the saddle10for performance, comfort, etc.) is mechanically reduced when a portion of a fastener, such as fastener805a(due to pitch adjustment of saddle10) sticks up far enough to reduce the saddle bottom-to-dropper seatpost standoff distance920to a distance that is less than the range of saddle flex913standoff distance930. In general, if the saddle bottom-to-dropper seatpost standoff distance920is less than the range of saddle flex913, it is possible that a flex of the saddle10will result in contact with the highest point in the saddle clamp assembly400causing a hard stop of the saddle10. This hard stop would be jarring, would reduce the advantages provided by the saddle flex913, and if the contact is made with a fastener, it would also provide a lot of force in a very small area which could cause saddle damage, fastener damage, unintentional pitch adjustment, location focused jarring to the rider, and the like. For example, if the saddle flex913is 15 mm, then the saddle bottom-to-dropper seatpost standoff distance920would have to be greater than 15 mm. If the fasteners that change the pitch can encroach on the saddle bottom-to-dropper seatpost standoff distance920, then the greatest possible fastener encroachments (e.g., at a minimum or maximum pitch) would have to be added to the saddle bottom-to-dropper seatpost standoff distance920to account for the fastener encroachment. In one embodiment, this minimum distance could require a reduction in the overall droop of the rail clamps, or the like, which would limit the attainable amount of dead length reduction. In one embodiment, to overcome any incursion into the saddle bottom-to-dropper seatpost standoff distance920thereby allowing the maximum “droop” in the lower saddle rail clamping portion420and the upper saddle rail clamping portion430, the fasteners815aand815bin the saddle clamp assembly400are inverted. In other words, the head of the fasteners817aand817bare in the upper saddle rail clamping portion430and additional mechanical components, e.g., fasteners815aand815b, are provided on the underside of the seatpost fasteners817aand817bto accommodate manipulation of the working length of the fastener. That is, the additional mechanical component provides the capability to adjust the pitch by adjusting the location of the additional mechanical components, e.g., fasteners815aand815bwith respect to the fasteners817aand817b. In one embodiment, by inverting fasteners817aand817b, the upper saddle rail clamping portion430of the saddle clamp assembly400will become the point on the dropper seatpost body closest to the saddle bottom912(as indicated by line A-A). This change in configuration allows the saddle bottom-to-dropper seatpost standoff distance920to be standardized as a measurement defined by the amount of droop in lower saddle rail clamping portion420and the upper saddle rail clamping portion430subtracted from the distance from the upper saddle rail clamping portion430to the saddle bottom912. Further, since the inverted fasteners817aand817bwill no longer (or minimally) extend from upper saddle rail clamping portion430of the saddle clamp assembly400, fasteners817aand817bwill not be able to make incursions into the saddle bottom-to-dropper seatpost standoff distance920. 
Moreover, since the pitch of the saddle will be made by adjusting the mechanical component fasteners815aand815bon the lower saddle rail clamping portion420of the saddle clamp assembly400, any adjustment to change in the dropper seatpost pitch (resulting in a saddle pitch adjustment) will not change the saddle bottom-to-dropper seatpost standoff distance920. Thus, the drooped lower saddle rail clamping portion420and the upper saddle rail clamping portion430with inverted fasteners817aand817bwill provide a maximum rail clamp droop thereby providing a maximum reduction in the dead length, while also providing a lower minimum saddle height; without detrimentally affecting the saddle flex913, the dropper total travel distance T, and/or any other performance characteristics. Thus, the embodiment will increase the shortest and tallest rider height range for any length dropper seatpost. In one embodiment, the drooped lower saddle rail clamping portion420and the upper saddle rail clamping portion430with inverted fasteners817aand817bwill maintain a uniform (non-variable) max saddle flex911standoff distance930between the saddle bottom912and the upper saddle rail clamping portion430regardless of any pitch angle of the dropper seatpost and/or saddle or any adjustment to any components that would cause the adjustment of the pitch angle. In one embodiment, the drooped lower saddle rail clamping portion420and upper saddle rail clamping portion430with inverted fasteners827aand827bwill maintain a small dropper seatpost standoff distance920between the saddle bottom912and the upper saddle rail clamping portion430during an extreme pitch angle of the dropper seatpost and/or saddle or any adjustment to any components that would cause the extreme pitch adjustment of the pitch angle. FIG.10Ais a side view of the saddle clamp assembly400having two downward adjustable fasteners with barrel nuts825aand825b, showing a head of fastener827bincursion a distance1011at a first pitch setting, in accordance with an embodiment. FIG.10Bis a side view of the saddle clamp assembly400having two downward adjustable fasteners827aand827bwith barrel nuts825aand825bin a second pitch setting. InFIG.10B, the head of fastener827bcauses an incursion of a distance1121while the head of fastener827acauses an incursion of a distance1120in accordance with an embodiment. In one embodiment, the barrel nuts825aand825bare used to make sure nothing sharp is sticking out from the bottom of the saddle clamp assembly400. The fasteners827aand827binclude hex broaches for Allen wrenches. By using the barrel nuts825aand825band hex broaches in the fasteners827aand827bthe adjustment to the saddle pitch will be performed by an Allen wrench in a similar manner that is done in many present pitch adjustments. In one embodiment, such as when the pitch is at a high angle, there is an opportunity for a slight fastener rise above the flat body of the upper saddle rail clamping portion430. However, this would be a minimal amount. In another embodiment, a hole (similar to small apertures466aand466bofFIG.8D) which one or both of the fasteners827aand827bare in will be deeper to ensure there is no portion of the fasteners827aand827bbreaking the upper saddle rail clamping portion430plane. In one embodiment, the hole which the fasteners827aand827bare in will include a slight raised lip to ensure there is no portion of fasteners827aand827bbreaking the upper saddle rail clamping portion430plane regardless of saddle pitch angle. 
The foregoing Description of Embodiments is not intended to be exhaustive or to limit the embodiments to the precise form described. Instead, example embodiments in this Description of Embodiments have been presented in order to enable persons of skill in the art to make and use embodiments of the described subject matter. Moreover, various embodiments have been described in various combinations. However, any two or more embodiments can be combined. Although some embodiments have been described in a language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed by way of illustration and as example forms of implementing the claims and their equivalents. | 28,487 |
11858577 | DETAILED DESCRIPTION FIG.1shows a motorcycle trim component10according to the invention, in particular a rear wheel cover. The motorcycle trim component is preferably a plastics injection molded component. A line duct12which is formed integrally with the motorcycle trim component10is arranged on the motorcycle trim component10. The line duct12is designed for receiving and holding a hose14, which is illustrated inFIG.2, in particular a brake line. As can be seen inFIG.1, the line duct12comprises a cross-sectionally U-shaped channel16, into which a hose14can be placed, and two holding clamps18. The holding clamps18are fastened to the rest of the motorcycle trim component10by means of a film hinge20and can be pivoted from an installation position, which is illustrated inFIG.1, into a holding position. In the holding position, the holding clamps18at least partially close the channel16in order to secure a hose14that is located in the channel16. In the embodiment illustrated, the holding clamps are arranged at a free longitudinal edge22of the channel16. This is advantageous in respect of being able to remove the motorcycle trim component10from the mold. In order to hold a hose14in the channel16along the entire length of the channel16, two holding clamps18which are adjacent to opposite longitudinal ends of the channel16are provided. In the case of a channel16having a particularly large longitudinal extent, at least one holding clamp18can be additionally provided in the center of the channel16. The holding clamp18has a substantially L-shaped profile. This shape of the holding clamp18serves for the holding clamp18to be able to be supported on an adjacent component by means of a transverse web24of the L-profile in a mounted position of the trim component10. This will be explained more precisely below in conjunction withFIG.2. In a mounted state, the transverse web24extends from the holding clamp18in a direction away from the channel16. FIG.2shows a motorcycle assembly26according to the invention having a motorcycle trim component10, in particular having the motorcycle trim component10according to the invention that is described in conjunction withFIG.1. In addition, the motorcycle assembly26comprises a hose14, in particular a brake line28, wherein the brake line28is accommodated in the line duct12. In more precise terms, the brake line28is arranged in the channel16of the motorcycle trim component10. Furthermore, at least one further vehicle component30is provided, in particular a body component of the motorcycle. The two holding clamps18on the motorcycle trim component10are arranged in a holding position and are held in the holding position by the at least one further vehicle component30. The transverse web24of the L-shaped holding clamp18is supported here on the edge of the directly adjacent vehicle component30. | 2,865 |
11858578 | DETAILED DESCRIPTION The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. Apparently, the embodiments described are merely a part of rather than all of the embodiments of the present disclosure. The following description of at least one exemplary embodiment is in fact merely illustrative and is in no way intended as a limitation to the present disclosure and its application or use therewith. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts fall within the protection scope of the present disclosure. In order to solve the problem in the related art that open and close operations of the foot-deck device are complicated, the present disclosure provides a foot-deck device100and a vehicle1000. As illustrated inFIGS.1to8, the foot-deck device100includes a bracket10, two deck assemblies20, and a linkage mechanism30. The two deck assemblies20are oppositely disposed on the bracket10. The two deck assemblies20each include a deck21and a rotating shaft22. The rotating shafts22are pivotally disposed on the bracket10, and the decks21are disposed on the respective rotating shafts22. The rotating shaft22of one deck assembly20is coupled to the rotating shaft22of the other deck assembly20through the linkage mechanism30. The linkage mechanism30is used to cause the rotating shafts22of the two deck assemblies20to rotate synchronously and cause the decks21of the two deck assemblies20to rotate synchronously to respective folded positions or unfolded positions. In the present disclosure, the structure of the foot-deck device100is optimized, so that the two rotating shafts22of the two deck assemblies20are coupled through the linkage mechanism30, and the two decks21rotate synchronously to folded positions or unfolded positions. Each time the vehicle1000is used or stored, only one open operation or close operation is required, and there are advantages of simple, time-saving, and labor-saving operations. In some embodiments, under the transmission of the linkage mechanism30, the rotating shaft22of one deck assembly20and the rotating shaft22of the other deck assembly20have opposite rotation directions, so that the decks21of the two deck assemblies20rotate towards each other to the folded positions or rotate away from each other to the unfolded positions. In some embodiments, the foot-deck device100further includes an electric drive mechanism110. An output end of the electric drive mechanism110is coupled to any one of the rotating shafts22of the two deck assemblies20, so that the two decks21are driven to rotate synchronously through the electric drive mechanism110. In this way, the electric drive mechanism110is used to control any one of the rotating shafts22of the two deck assemblies20to rotate, and the rotating shaft22may drive the other rotating shaft22to rotate under the transmission of the linkage mechanism30, so that the two rotating shafts22drive the two decks21to rotate synchronously to the folded positions or unfolded positions, which improves the degree of automation of the foot-deck device100, reduces the labor intensity of a user, makes the operation simpler, and only needs to control the electric drive mechanism110to start. In some embodiments, the foot-deck device100further includes an electric drive mechanism110. 
An output end of the electric drive mechanism110is coupled to the linkage mechanism30, so that the decks21of the two deck assemblies20are driven to rotate synchronously through the electric drive mechanism110. In this way, the electric drive mechanism110is used to drive the rotating shafts22of the two deck assemblies20to rotate synchronously through the linkage mechanism30, so that the two decks21of the two deck assemblies20rotate synchronously to the folded positions or unfolded positions, which improves the degree of automation of the foot-deck device100, reduces the labor intensity of the user, makes the operation simpler, and only needs to control the electric drive mechanism110to start. In some embodiments, the foot-deck device100further includes a manual trigger mechanism120. The manual trigger mechanism120is coupled to any one of the rotating shafts22of the two deck assemblies20, and the manual trigger mechanism120has a manual trigger end122, so that the two decks21are driven to rotate synchronously through the manual trigger end122under an external force. In this way, the user may drive any one of the rotating shafts22of the two deck assemblies20to rotate through the manual trigger mechanism120, and the rotating shaft22may drive the other rotating shaft22to rotate under the transmission of the linkage mechanism30, so that the two rotating shafts22drive the two decks21to rotate synchronously to the folded positions or unfolded positions, which makes the operation simpler and only requires the user to apply an external force to a manual trigger end122. In some embodiments, the manual trigger mechanism120is a key or a paddle or a push rod. In some embodiments, the foot-deck device100further includes a manual trigger mechanism120. The manual trigger mechanism120is coupled to the linkage mechanism30, and the manual trigger mechanism120has a manual trigger end122, so that the decks21of the two deck assemblies20are driven to rotate synchronously through the manual trigger end122under an external force. In this way, the user may apply an external force to the manual trigger end122of the manual trigger mechanism120, so that the manual trigger mechanism120drives the rotating shafts22of the two deck assemblies20to rotate synchronously through the linkage mechanism30, and the two decks21of the two deck assemblies20rotate synchronously to the folded positions or unfolded positions, which makes the operation simpler and only requires the user to apply an external force to a manual trigger end122. In some embodiments, one of the decks21of the two deck assemblies20serves as the manual trigger end122, so that the decks21of the two deck assemblies20are driven to rotate synchronously through the manual trigger end122under an external force. In this way, the structure of the foot-deck device100is further simplified. One of the decks21of the two deck assemblies20serves as the manual trigger end122, the user applies an external force to the deck21to make it rotate relative to the bracket10, the deck21drives its corresponding rotating shaft22to rotate, the rotating shaft22drives the rotating shaft22of the other deck assembly20to rotate under the transmission of the linkage mechanism30, and the rotating shaft22of the other deck assembly20drives its corresponding deck21to rotate, so that the two decks21of the two deck assemblies20both rotate to the folded positions or unfolded positions, which makes the operation simpler and only requires the user to apply an external force to a manual trigger end122. 
In order to achieve a linkage effect, the linkage mechanism30includes a driving component170and a driven component180. One driving component170and one driven component180are provided, the driven component180is coupled to both of the two deck assemblies20, and the driving component170is drivingly coupled to the driven component180; or one driving component170and two driven components180are provided, one driven component180is coupled to one deck assembly20, the other driven component180is coupled to the other deck assembly20, and the driving component170is drivingly coupled to the two driven components180; or two driving components170and two driven components180are provided, the two driving components170and the two driven components180are drivingly coupled in a one-to-one correspondence, one driven component180is coupled to one deck assembly20, the other driven component180is coupled to the other deck assembly20, and the two driving components170are drivingly coupled to move synchronously. In order to achieve synchronous movement of the two deck assemblies20, in one implementation, the linkage mechanism30includes: a lifting assembly190having a lifting portion192movable in a predetermined direction; and two coupling rods32. The two coupling rods32are both coupled to the lifting portion192, one coupling rod32is drivingly coupled to one deck assembly20, and the other coupling rod32is drivingly coupled to the other deck assembly20. In order to achieve synchronous movement of the two deck assemblies20, in another implementation, the linkage mechanism30includes: two driving wheels200drivingly coupled to move synchronously; and two driven wheel sets210. One driving wheel200and the rotating shaft22of one deck assembly20are fitted with a first synchronous belt220, and the other driving wheel200and the rotating shaft22of the other deck assembly20are fitted with a second synchronous belt230. The two driven wheel sets210are pressed on the first synchronous belt220and the second synchronous belt230in a one-to-one correspondence. The present disclosure provides a number of specific embodiments according to different linkage mechanisms30, which are described in detail below. Embodiment 1 As illustrated inFIGS.1to4, the linkage mechanism30includes a first linkage assembly130used to be coupled to one deck assembly20, a second linkage assembly140used to be coupled to the other deck assembly20, and a coupling member33. The first linkage assembly130and the second linkage assembly140each include a crank31and a coupling rod32. First ends of the cranks31are coupled to the respective decks21, and first ends of the coupling rods32are hinged to second ends of the respective cranks31. The coupling member33is hinged to both a second end of the coupling rod32of the first linkage assembly130and a second end of the coupling rod32of the second linkage assembly140, and the coupling member33is movably disposed in a vertical direction, so that the rotating shafts22of the two deck assemblies20are driven to rotate synchronously through the coupling member33. In this way, when the coupling member33moves in the vertical direction under an external force, the coupling member33drives the two rotating shafts22to rotate synchronously through the two coupling rods32and the two cranks31, and the two rotating shafts22drive the two decks21to rotate synchronously to the folded positions or unfolded positions. 
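For readers who want to see the kinematics just described in numbers, the relationship between the vertical travel of the coupling member33and the rotation of each crank31is essentially that of a slider-crank linkage. The short Python sketch below is only an illustration of that geometry under simplifying assumptions (the slide axis is taken to pass through the rotating shaft, and the crank and coupling-rod lengths are hypothetical values, not dimensions from this disclosure):

import math

def coupling_member_height(crank_angle_deg, crank_len=40.0, rod_len=120.0):
    # Height of the hinge on the coupling member above the rotating shaft for a
    # given crank angle, using the standard slider-crank relation; crank_len and
    # rod_len are hypothetical lengths in millimetres.
    theta = math.radians(crank_angle_deg)
    return crank_len * math.cos(theta) + math.sqrt(
        rod_len ** 2 - (crank_len * math.sin(theta)) ** 2)

# Sweeping the crank between a hypothetical unfolded position (0 degrees) and a
# hypothetical folded position (90 degrees) shows the vertical travel the
# coupling member must cover.
for angle_deg in (0, 30, 60, 90):
    print(angle_deg, round(coupling_member_height(angle_deg), 1))

In this simplified model, moving the coupling member through the printed range of heights rotates both cranks, and hence both decks, through the corresponding range of angles.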
As illustrated inFIGS.1to4, the foot-deck device100includes a rotary motor40and a lead screw50, the coupling member33includes a nut150fitting with the lead screw, the rotary motor40is disposed on the bracket10, the lead screw50is coupled to an output shaft of the rotary motor40, and the nut150is fitted over the lead screw50. The coupling member33is hinged to the second end of the coupling rod32of the first linkage assembly130and the second end of the coupling rod32of the second linkage assembly140through the nut150, or the nut150is provided with a coupling slider160, and the coupling member33is hinged to the second end of the coupling rod32of the first linkage assembly130and the second end of the coupling rod32of the second linkage assembly140through the coupling slider160. In this way, the rotary motor40is controlled to start, the output shaft of the rotary motor40drives the lead screw50to rotate, the lead screw50drives the nut150to move in the vertical direction, the nut150drives the two rotating shafts22to rotate synchronously through the two coupling rods32and the two cranks31, and the two rotating shafts22drive the two decks21to rotate synchronously to the folded positions or unfolded positions, which improves the degree of automation of the foot-deck device100, reduces the labor intensity of the user, makes the operation simpler, and only needs to control the rotary motor40to start. As illustrated inFIG.1, when the nut150moves downwards, left and right decks are closed until the two decks21move to the folded positions. As illustrated inFIG.2, when the nut150moves upwards, the left and right decks are unfolded until the two decks21move to the unfolded positions. The upward and downward movement of the nut150is controlled by forward and reverse rotation of the rotary motor40. In some embodiments, the bracket10may be a single part or an assembly formed by fixed couplings and combinations of a plurality of parts. In an optional embodiment illustrated inFIG.4, the bracket10is an assembly. The bracket10includes a column11, a support base12, and a fixed plate13. The support base12is mounted on the column11, for fixing the lead screw50. The fixed plate13is mounted on the column11, for fixing the rotary motor40. In the optional embodiment illustrated inFIG.4, the foot-deck device100includes a mounting member60and a hinge pin70. The mounting member60is coupled to the nut150. The coupling rod32is coupled to the mounting member60. Second ends of the two coupling rods32are coupled to the nut150through the mounting member60. Embodiment 2 As illustrated inFIGS.5to8, the linkage mechanism30includes a first linkage assembly130used to be coupled to one deck assembly20and a second linkage assembly140used to be coupled to the other deck assembly20. The first linkage assembly130and the second linkage assembly140each include a first pulley34, a second pulley35, a synchronous belt36, and a first gear37. The first pulleys34are disposed on the respective rotating shafts22, the second pulleys35are spaced apart from the respective first pulleys34, the synchronous belts36are fitted over the respective first and second pulleys34,35, and the first gears37are coupled to the respective second pulleys35. The first gear37of the first linkage assembly130engages with the first gear37of the second linkage assembly140. 
In this way, when the first gear37of the first linkage assembly130rotates under an external force, the first gear37drives the first gear37of the second linkage assembly140to rotate, the first gears37drive the respective second pulleys35to rotate, the second pulleys35drive the first pulleys34to rotate under the transmission of the respective synchronous belts36, the first pulleys34drive the respective rotating shafts22to rotate, and the rotating shafts22drive the respective decks21to rotate, so that the two decks21rotate synchronously to the folded positions or unfolded positions. In some embodiments, when the deck21or the rotating shaft22of one deck assembly20rotates under an external force, the rotating shaft22of the other deck assembly20may also be driven to rotate under the transmission of the linkage mechanism30, so that the two decks21rotate synchronously to the folded positions or unfolded positions. As illustrated inFIGS.5to8, the foot-deck device100includes a rotary motor40. An output end of the rotary motor40is coupled to the first gear37of the first linkage assembly130or the first gear37of the second linkage assembly140. In this way, the rotary motor40is controlled to start, and the rotary motor40drives the first gear37of the first linkage assembly130or the first gear37of the second linkage assembly140to rotate, so that the decks21of the two deck assemblies20rotate synchronously to the folded positions or unfolded positions, which improves the degree of automation of the foot-deck device100, reduces the labor intensity of the user, makes the operation simpler, and only needs to control the rotary motor40to start. As illustrated inFIGS.5to8, the first linkage assembly130and the second linkage assembly140each include a reversing pulley38and a tension pulley39. The reversing pulleys38are disposed between the respective first and second pulleys34,35, the first pulleys34and the reversing pulleys38are spaced apart in a first direction, the second pulleys35and the reversing pulleys38are spaced apart in a second direction, the synchronous belts36are fitted over the first pulleys34, the reversing pulleys38, and the second pulleys35, and the tension pulleys39are spaced apart from the reversing pulleys38and are located on outer sides of the synchronous belts36, so that the synchronous belts36each include a first transmission portion extending in the first direction and a second transmission portion extending in the second direction. In this way, positions of the first gears37can be flexibly set by providing the tension pulleys39and the reversing pulleys38. At the same time, the mounting of the rotary motor40can be facilitated. Embodiment 3 The present disclosure further provides unillustrated Embodiment 3. Embodiment 3 is different from Embodiment 1 in that the foot-deck device100includes a linear motor and a guide rail, the coupling member33includes a guide slider fitting with the guide rail, the linear motor is disposed on the bracket10, the guide rail is disposed on the bracket10, the guide slider is slidably coupled to the guide rail, and the guide slider is coupled to an output end of the linear motor. 
In this way, the linear motor is controlled to start, the linear motor drives the slider to slide relative to the guide rail, the guide slider drives the two rotating shafts22to rotate synchronously through the two coupling rods32and the two cranks31, and the two rotating shafts22drive the two decks21to rotate synchronously to the folded positions or unfolded positions, which improves the degree of automation of the foot-deck device100, reduces the labor intensity of the user, makes the operation simpler, and only needs to control the linear motor to start. Embodiment 4 The present disclosure further provides unillustrated Embodiment 4. Embodiment 4 is different from Embodiment 2 in that the linkage mechanism30includes a first linkage assembly130used to be coupled to one deck assembly20and a second linkage assembly140used to be coupled to the other deck assembly20, the first linkage assembly130and the second linkage assembly140each including a first sprocket, a second sprocket, a synchronous chain, and a first gear37. The first sprockets are disposed on the respective rotating shafts22, the second sprockets are spaced apart from the respective first sprockets, the synchronous chains are fitted over the respective first and second sprockets, and the first gears37are coupled to the respective second sprockets; and the first gear37of the first linkage assembly130engages with the first gear37of the second linkage assembly140. In Embodiment 4, the sprockets are used to replace the pulleys. Other transmission manners are the same, and are not described in detail here. Embodiment 5 The present disclosure further provides unillustrated Embodiment 5. Embodiment 5 is different from Embodiment 2 in that the linkage mechanism30includes a first linkage assembly130used to be coupled to one deck assembly20and a second linkage assembly140used to be coupled to the other deck assembly20, the first linkage assembly130and the second linkage assembly140each include a first gear37, and the first gears37are disposed on the respective rotating shafts22; and the first gear37of the first linkage assembly130engages with the first gear37of the second linkage assembly140. In this way, the structure of the linkage mechanism30is further simplified. When the first gear37of the first linkage assembly130rotates under an external force, it drives the first gear37of the second linkage assembly140to rotate, and at the same time, the first gears37drive the respective rotating shafts22to rotate, so that the decks21of the two deck assemblies20rotate synchronously to the folded positions or unfolded positions. Embodiment 6 The present disclosure further provides unillustrated Embodiment 6. Embodiment 6 is different from Embodiment 5 in that the first linkage assembly130and the second linkage assembly140each include a plurality of transmission gears engaging with each other in a preset transmission direction, transmission gears in the plurality of transmission gears located at transmission head ends are the first gears37, the first gears37are fitted over the respective rotating shafts22, and one transmission gear of the first linkage assembly130located at a transmission tail end engages with one transmission gear of the second linkage assembly140located at a transmission tail end. 
In this way, when the two rotating shafts22are at a larger interval, in order to avoid a large size of the first gears37, a plurality of second gears engaging with each other are added between the two first gears37, so that the decks21of the two deck assemblies20rotate synchronously to the folded positions or unfolded positions through the plurality of transmission gears. The transmission gears include a plurality of second gears engaging with each other and two first gears37. An even number of second gears are provided, so as to ensure that the two rotating shafts22rotate in opposite directions. Embodiment 7 The present disclosure further provides unillustrated Embodiment 7. Embodiment 7 is different from Embodiment 1 in that the linkage mechanism30includes a first stay wire and a second stay wire, and first ends of the first stay wire and the second stay wire are coupled to the rotating shafts22of the two deck assemblies20respectively, so that the rotating shafts22of the two deck assemblies20are driven to rotate synchronously through second ends of the first stay wire and the second stay wire. In this way, the second ends of the first stay wire and the second stay wire drive the rotating shafts22of the two deck assemblies20under an external force, so that the decks21of the two deck assemblies20rotate synchronously to the folded positions or unfolded positions through the first stay wire and the second stay wire. It needs to be noted that the linkage mechanism is used to establish a linkage relationship between the left deck and the right deck. The linkage mechanism30is not limited to the specific embodiments provided in the present disclosure, provided that the decks21of the two deck assemblies20can rotate synchronously to the folded positions or unfolded positions. In some embodiments, the foot-deck device100provided in the present disclosure is used to place a user's feet while the user is driving a vehicle1000. In some embodiments, the foot-deck device100provided in the present disclosure is used for a user to stand on a vehicle1000through the foot-deck device100while the user is driving the vehicle1000. In some embodiments, the bracket10may be a separate component or a frame of the vehicle1000. As illustrated inFIG.9, the present disclosure further provides a vehicle1000. The vehicle1000includes a foot-deck device100. The foot-deck device100is the foot-deck device100described above. The foot-deck device100of the vehicle1000provided in the present disclosure can be switched between an unfolded position and a folded position, so as to facilitate storage and transportation of the vehicle1000. At the same time, the two decks21of the foot-deck device100can be folded or unfolded with linkage to simplify the operation. In some embodiments, when a motor is added to the foot-deck device100provided in the present disclosure, open and close actions of the decks on both sides can be controlled by the motor, which can realize automatic folding of the decks on both sides with linkage, further simplifying the operation and making the product more intelligent and automatic. In some embodiments, the vehicle1000is a bicycle or an electric bicycle. In some embodiments, the vehicle1000is an electric scooter, an electric unicycle, or an electric motorcycle. It needs to be noted that the terms used here are intended only to describe specific implementations, but are not intended to limit exemplary implementations of the present disclosure. 
As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. In addition, it should be further understood that the terms “include” and/or “comprise,” when used in the specification, specify the presence of features, steps, operations, devices, components, and/or their combinations. Unless otherwise specified, relative arrangements of components and steps, numerical expressions, and values described in these embodiments do not limit the scope of the present disclosure. Meanwhile, it should be understood that, in order to facilitate the description, sizes of respective parts illustrated in the drawings are not drawn according to an actual proportional relationship. Technologies, methods, and devices known by those of ordinary skill in the art may not be discussed in detail, but in appropriate situations, the technologies, methods, and devices should be regarded as part of the specification. In all the examples illustrated and discussed herein, any specific value should be construed as merely illustrative and not as a limitation. Thus, other examples of exemplary embodiments may have different values. It is to be noted that, similar reference numerals and letters denote similar items in the following drawings, and therefore, once an item is defined in one drawing, there is no need for further discussion in the subsequent drawings. In the description of the present disclosure, it will be appreciated that locative or positional relations indicated by “front, back, up, down, left, and right”, “lateral, vertical, perpendicular, and horizontal”, “top and bottom” and other terms are locative or positional relations shown on the basis of the drawings, which are intended only to make it convenient to describe the present disclosure and to simplify the description. In the absence of contrary description, the orientation terms do not indicate or imply that the referred device or element must have a specific location and must be constructed and operated with the specific location, and accordingly it cannot be understood as limitations to the present disclosure. The orientation terms “inner and outer” refer to inner and outer contours of each component. For ease of description, spatial relative terms such as “over”, “above”, “on an upper surface” and “upper” may be used herein for describing a spatial position relation between a device or feature and other devices or features shown in the drawings. It will be appreciated that the spatial relative terms are intended to contain different orientations in usage or operation other than the orientations of the devices described in the drawings. For example, if a device in the drawings is inverted, devices described as “above other devices or structures” or “over other devices or structures” will be located as “below other devices or structures” or “under other devices or structures”. Thus, an exemplary term “above” may include two orientations, namely “above” and “below”. The device may be located in other different manners (rotated by 90 degrees or located in other orientations), and spatial relative descriptions used herein are correspondingly explained. It needs to be noted that, terms such as “first” and “second” in the specification, claims, and the drawings of the present disclosure are only used to distinguish similar objects, and are not used to describe specific sequence or order. 
It is to be understood that data used in this manner may be interchangeable where appropriate, so that the implementations of the present disclosure described herein may be realized in sequences excluding those illustrated or described herein. The above are merely preferred embodiments of the present disclosure, and are not used to limit the present disclosure. For those skilled in the art, the present disclosure may have various alterations and changes. Any alteration, equivalent replacement, improvement, and the like made within the spirit and principle of the present disclosure all fall within the protection scope of the present disclosure. | 27,923 |
11858579 | DETAILED DESCRIPTION The following description and drawings are illustrative of the disclosure and are not to be construed as limiting the disclosure. Numerous specific details are described to provide a thorough understanding of various embodiments of the present disclosure. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present disclosure. Although the invention has been described with a preferred embodiment, it should be noted that the inventor can make various modifications, additions and alterations to the invention without departing from the original scope as described in the present disclosure. While the applicant's teachings described herein are in conjunction with various embodiments for illustrative purposes, it is not intended that the applicant's teachings be limited to such embodiments. On the contrary, the applicant's teachings described and illustrated herein encompass various alternatives, modifications, and equivalents, without departing from the embodiments, the general scope of which is defined in the appended claims. Except to the extent necessary or inherent in the processes themselves, no particular order to steps or stages of methods or processes described in this disclosure is intended or implied. In many cases the order of process steps may be varied without changing the purpose, effect, or import of the methods described. The present invention is an improvement on the prior art, as the present invention not only allows the rider to view rear oncoming objects from multiple rider positions on the bike without having to make mirror adjustments, but can also be quickly and securely attached or removed from an existing protective helmet without potentially damaging the helmet or needing to reapply any further adhesives, which is only applied once to the mounting base in the case of the proposed invention. Referring toFIG.1, there is shown a perspective view of a preferred embodiment of a rider1,2on a standard two-wheel bicycle3with the rider being in a low aerodynamic profile1and utilizing the top mirror4to view rear approaching vehicles in the visual range6. With a rider in the high standing profile2and utilizing the bottom mirror5, the rider is able to view rear approaching vehicles8in the visual range7. Referring toFIG.2, there is shown a perspective view of a rider's helmet11of the embodiment provided inFIG.1. A male adaptor side helmet mounting bracket13may be attached with a commercial grade double sided adhesive tape12. The mounting bracket13contains the outer gear section14for mating to outer extension arm rotational gear bracket section17, inner ring cavity16surrounds center mounting section15. Outer extension arm bracket17snaps onto male side helmet mounting bracket13by squeezing plastic ring portion18which has a natural tendency to return to its original shape once the rider releases squeeze pressure. The bracket section17can be rotated through a range of 45 degrees when connected onto male side helmet mounting bracket13. The bracket section17, using an interlocking gear tooth arrangement, provides the rider with virtually no outward movement or vibration of the attached plurality of mirrors4,5, thus providing a more stable rear viewing compared to standard bicycle mirror configurations. 
Additionally, this novel adaptation allows the rider1,2to quickly change the vertical angle aspect of the mirror extension arm19which can be beneficial to riders of different heights. This also provides the ability to quickly detach the mirror assembly as so to avoid potential damage to the mirror assembly when the helmet11is not being worn by the rider. The helmet mounting bracket13is mounted in a near parallel position on the side of the helmet1. As some helmet designs might not provide a sufficiently flat mounting surface, it may be required to utilize a wedge29secured using double sided tape and placed between the helmet mounting bracket13and against the helmet11to bring the helmet mounting bracket13into its proper position. Additionally, with helmets having numerous venting holes, it may be advantageous to also utilize the helmet mounting bracket's13built in eye lets14A, which can accommodate tie wrap straps to help ensure secure mounting to the helmet11. Extension arm19is threaded, allowing it to provide adjustable length within the entire bracket assembly28and is tightened into place by locking screw20. Distal end of extension arm19contains the mirror assembly arm26for the upper mirror reflective surface23A and, the mirror assembly arm27for the lower mirror reflective surface23B which are affixed to extension arm19with screw21. Ball joints22A and22B allow rotational movement of each mirror body25A and25B. Screws24A and24B tighten their respective mirror body25A and25B into place once the rider has found the best viewing positions. The opposite side of each mirror body25A and25B contains a light reflective surface that will allow oncoming vehicle drivers to better see the bicycle rider. Referring now toFIG.3, there is shown a perspective view of the outer extension arm rotational gear bracket section17of the opposite side toFIG.2. Specifically,31represents the opposite side with the plastic squeeze ring32moving inwards as external pressure37is gently applied typically with the rider's thumb and fore finger against the opposing sides of the squeeze ring. Once positioned, the rider will release said squeeze pressure37, thus setting the bracket28in place on the helmet11. The mating gear section33mates against the gear teeth14. The internal bracket guide34moves within the inner ring cavity of16. The extension arm19and locking screw20are also shown, in reverse view as provided inFIG.2. The two adjustable mirror bodies25A,25B are mounted on the mirror assembly arms26,27and are specifically adjusted in angle and length for the rider. The assembly arms26,27are further attached via, for example, a ratchet connection to extension arm19thus providing an additional rotational adjustment. The multiple angle, length and rotational adjustments together permit the rider to set the mirror bodies25A,25B in a configuration to view rear traffic in a variety of positions while on the bike, thus avoiding dangerous adjustment of mirrors while riding. For example, sitting in an upright seated position or even fully standing in the pedals as when climbing a hill, the rider would view rear approaching vehicles in mirror surface23B mounted on mounting arm27, while when seated in a low aerodynamic position using traditional bull horn handlebars or aero bars, the rider would view rear approaching vehicles in mirror surface23A mounted on mounting arm26. 
It is important to note that using traditional bike mirrors, the aforementioned viewing ranges are not possible to achieve unless the rider attempts to make adjustments to the mirror while riding, which is very dangerous. The helmet mirror assembly28is designed to simply snap in place onto a previously secured mounting base13, allowing for quick removal of the mirror assembly and thus preventing accidental damage when not in use. Once in place, the helmet mirror assembly28is secure on its mounting base on the helmet11, such that there is negligible vibration carried through to the mirror assembly. This is in contrast with many commercially available bicycle mirrors that simply use a Velcro connection, which results in vibration issues at the mount and thus negatively affects a traditional mirror's viewing surface. The present invention has been shown and described in a preferred embodiment. It is recognized, however, that departures may be made within the scope of the invention and that obvious modifications will occur to a person skilled in the art. With respect to the above description, it is to be realized that the optimum dimensional relationships for the parts of the presented invention, to include variations in size, materials, shape, form, function, and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specifications are intended to be encompassed by the present invention. Therefore, the foregoing is considered as illustrative only of the principles of the present invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the present invention to the exact construction and operation shown and described, and accordingly, all suitable modifications and equivalents are considered to fall within the scope of the invention. | 8,784
11858580 | DETAILED DESCRIPTION Before any embodiments of the disclosure are explained in detail, it is to be understood that the disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The disclosure is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. FIG.1illustrates an electric vehicle in the form of a motorcycle10. The motorcycle10includes a frame14, a swing arm18pivotally coupled to a rear portion of the frame14, and a front fork22rotatably coupled to a front portion of the frame14at a steering head26. A rear wheel30is coupled to the swing arm18, and a front wheel34is coupled to the front fork22. The rear wheel30and the front wheel34support the frame14for movement along the ground. The motorcycle10defines a longitudinal axis36that extends centrally through the motorcycle10along the length of the motorcycle10. In other words, the longitudinal axis36extends within a longitudinal mid-plane that bisects the motorcycle10along its length. A straddle seat38overlies at least a portion of the frame14for supporting at least one rider, and the motorcycle includes handlebars42coupled to the front fork22via the steering head26for steering the front wheel34. Various controls and indicators for operating the motorcycle10may be located on the handlebars42. The motorcycle10further includes a drive assembly46coupled to the rear wheel30to provide torque to the rear wheel30and thereby propel the motorcycle10. A battery assembly50is electrically coupled to the drive assembly46for powering the drive assembly46. Although the drive assembly46and battery assembly50are described herein in the context of the motorcycle10, it should be understood that the drive assembly46and the battery assembly59could be used on other electric vehicles, such as automobiles, all-terrain vehicles, and the like. Referring toFIG.4, the illustrated battery assembly50includes a battery housing (or simply a “housing”)54with an upper portion58and a lower portion62, each containing an array of rechargeable battery cells (e.g., lithium-based cells; not shown) that store and supply electrical power (i.e. voltage and current). The upper portion58and the lower portion62are coupled together by mechanical fasteners with a gap66between the two portions58,62. The gap66may allow air to flow between the upper and lower portions58,62to cool the battery assembly50. In other embodiments, the housing54may be formed as a single piece, without distinct upper and lower portions. The housing54has a top side70A (on the upper portion58), a bottom side70B (on the lower portion62) opposite the top side70A, and first and second opposite lateral sides70C,70D extending between the top and bottom sides70A,70B. Rear and front sides70E,70F (defined with reference to a forward travel direction of the motorcycle10) extend between the top and bottom sides70A,70B. The drive assembly46is coupled to the battery housing54and positioned below the bottom side70B of the battery housing54. With reference toFIG.3, the drive assembly46includes a motor74and a gear assembly78that transmits torque from an output shaft82of the motor74to a belt86that is coupled to the rear wheel30. 
In the illustrated embodiment, the motor74is an AC induction motor, and the drive assembly46further includes an inverter90that converts DC power from the battery assembly50to AC power to be supplied to the motor74. The inverter90includes a circuit board94that connects switching electronics98(e.g., IGBTs, MOSFETS, or the like) in an inverter circuit. The circuit board94may also include other electronic components that control operation of the motor74. In other embodiments, the motor74may be a DC motor, such that the inverter90may be omitted. The drive assembly46is housed within a drive housing unit102that includes a gear housing106, a motor housing110, and an inverter housing114, which are each aligned in series along a longitudinal axis118of the drive housing unit102. The longitudinal axis118may be parallel to and/or coaxial with a rotational axis of the output shaft82. The longitudinal axis118is also parallel to the longitudinal axis36of the motorcycle10(FIG.1). In the illustrated embodiment, the drive housing unit102is positioned such that the longitudinal axis118is centered along the width W of the motorcycle10(FIG.2). The drive housing unit102includes a front end122and a rear end126, defined with respect to a forward travel direction of the motorcycle10(i.e. with reference toFIG.1, along the longitudinal axis36and to the right). The inverter housing114defines the front end122, and the gear housing106defines the rear end126(FIG.3). Thus, the inverter housing114is disposed in front of the motor housing110along the longitudinal axis118, and the gear housing106is disposed behind the motor housing110along the longitudinal axis118. The gear housing106at least partially encloses the gear assembly78, the motor housing110at least partially encloses the motor74, and the inverter housing114at least partially encloses the inverter90. In the illustrated embodiment, the gear housing106, the motor housing110, and the inverter housing114are formed as separate pieces and coupled together (e.g., via a plurality of mechanical fasteners, welding, threaded connections or any other suitable means), which may facilitate assembly of the drive assembly46. In other embodiments, two or more of the gear housing106, the motor housing110, or the inverter housing114may be integrally formed together as a single piece. The gear housing106and the inverter housing114are coupled to opposite sides of the motor housing110such that the gear assembly78and the inverter90are positioned on opposite sides of the motor74. With continued reference toFIG.3, the illustrated gear assembly78includes a beveled pinion130coupled to an end of the output shaft82and a beveled drive gear134meshed with the pinion130. The drive gear134is supported on a drive shaft138for rotation about a drive axis142that is perpendicular to the longitudinal axis118and the output shaft82of the motor78. In the illustrated embodiment, the drive axis142is positioned between the front and rear ends122,126of the drive housing unit102. The drive gear134has a greater number of teeth than the pinion130such that the drive shaft138rotates at a slower speed than the output shaft82of the motor74. A sprocket146is coupled to an end of the drive shaft138. The sprocket146drives the belt86(e.g., a toothed belt), which extends between the sprocket146and a driven sprocket (not shown) coupled to the rear wheel30. In other embodiments, other types of belts may be used, or the belt86may be replaced with a chain. 
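The speed reduction that results from the pinion130driving the larger drive gear134can be illustrated with a short calculation. The following Python sketch uses hypothetical tooth counts and a hypothetical motor speed chosen only for illustration; they are not values taken from this disclosure:

def drive_shaft_speed(motor_rpm, pinion_teeth, drive_gear_teeth):
    # The pinion on the motor output shaft meshes with the drive gear, so the
    # drive shaft turns at the motor speed scaled by the tooth ratio.
    return motor_rpm * pinion_teeth / drive_gear_teeth

# Hypothetical example: a 15-tooth pinion driving a 45-tooth drive gear gives a
# 3:1 reduction, so a 6000 rpm motor speed becomes 2000 rpm at the drive shaft
# 138 and sprocket 146.
print(drive_shaft_speed(6000, 15, 45))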
Alternatively, the drive shaft138may be directly coupled to the rear wheel30, or coupled to the rear wheel30via any other suitable torque transfer arrangement. Referring toFIG.4, the motorcycle further includes an onboard charger150to facilitate charging the battery cells of the battery assembly50and a cooling assembly154that removes heat from the charger150and the drive assembly46. In the illustrated embodiment, the cooling assembly154includes a plurality of coolant lines158that fluidly couple the charger150, the drive assembly46, and a radiator162into a single cooling loop. The radiator162is coupled to the front side70F of the battery housing54. A coolant pump166is directly coupled to the front end122of the drive housing unit102(and thus, to the inverter housing114). In other words, the coolant pump166is supported on the motorcycle10by the inverter housing114. The coolant pump166is operable to circulate coolant (e.g., a liquid coolant such as a glycol) through the cooling assembly154. The coolant pump166is enclosed by a cover170that is coupled to the front end122of the drive housing unit102(FIG.2). The cover170provides protection for the coolant pump166and preferably defines an aerodynamic outer shape. In the illustrated embodiment, the lateral sides of the cover170are substantially flush with the sides of the inverter housing114. As such, the cover170and the drive housing unit102define a single, cohesive assembly underneath the battery housing54. Various features of the invention are set forth in the following claims. | 8,506 |
11858581 | DETAILED DESCRIPTION OF EMBODIMENTS Selected embodiments will now be explained with reference to the drawings. It will be apparent to those skilled in the bicycle field from this disclosure that the following descriptions of the embodiments are provided for illustration only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents. Referring initially toFIGS.1to3, a bicycle10is illustrated that is equipped with a bicycle crank assembly12having a bicycle electric device14. As shown inFIG.1, the bicycle10illustrated is a road style bicycle having various electrically-controlled components. Of course, it will be apparent to those skilled in the art from this disclosure that the bicycle crank assembly12and/or the bicycle electric device14can be implemented with other types of bicycles as needed and/or desired. The bicycle electric device14is provided to the bicycle crank assembly12, and is configured to aid in determining an angle of the crank16of the bicycle crank assembly12as discussed below. The bicycle crank assembly12is rotatably mounted to a bicycle frame F in a conventional manner. The bicycle crank assembly12includes a bicycle crank16that is provided to a bicycle frame F of the bicycle10. As best seen inFIGS.1and2, the bicycle10is further provided with a detecting device18for detecting a condition of the bicycle crank assembly12. In the illustrated embodiment, the condition detected by the detecting device18includes an inclination state or an inclination angle of the bicycle crank assembly12that is installed on the bicycle10, as will be further discussed below. Further, the detecting device18of the illustrated embodiment is an electronic detecting device18having an electronic controller ECU that is programmable with one or more processors for executing electronic operations, as seen inFIG.2. In particular, the detecting device18of the illustrated embodiment includes a camera20configured to manually or automatically capture an image of the bicycle with the bicycle components installed thereon. The camera20is also configured to capture an image of the nearby surrounding area A of the bicycle10. The detecting device18of the illustrated embodiment further includes an inclinometer22configured to determine the condition of the bicycle crank16based on information captured by the camera20. Additionally, the detecting device18of the illustrated embodiment further includes a first storage24, as will be further discussed below. In the illustrated embodiment, the bicycle10is provided with a detecting system26for the bicycle crank assembly12. The detecting system26comprises the detecting device18that at least has the electronic controller ECU configured to obtain information relating to an image of the crank16. That is, the electronic controller ECU is capable of processing the images captured by the camera20of the detecting device18, as will be further described below. The electronic controller ECU is configured to determine the angle of the crank16(e.g., the condition of the crank16or the inclination angle) based on the information. The detecting system26can further include the bicycle crank assembly12that is provided to the bicycle10. Referring toFIGS.2and3, the bicycle crank assembly12comprises, among other components, a first or right crank arm16A, a second or left crank arm16B and a crankshaft16C. As seen inFIGS.2and3, the first and second crank arms16A and16B are rigidly connected by the crankshaft16C.
The crankshaft16C is preferably made as a hollow shaft. A bicycle pedal P is rotatably attached to each of the crank arms16A and16B. The first crank arm16A includes a pair of bicycle sprockets S1and S2. When a rider applies a force on the bicycle pedals P during riding, a pedaling force or a pedaling torque is transmitted to the first and second crank arms16A and16B. The first and second crank arms16A and16B rotate the bicycle sprockets S1and S2to move a bicycle chain BC and propel the bicycle10in a conventional manner. In the illustrated embodiment, the “bicycle crank16” will refer to either or both of the first and second crank arms16A and16B. For simplicity, the first and second crank arms16A and16B will simply be referred to as the “bicycle crank16” in this disclosure. As seen inFIG.2, the bicycle crank assembly12can be equipped with a plurality of strain sensors28that are provided to the bicycle crank16. The strain sensors28can be disposed and utilized in a similar manner as taught in U.S. Patent Application Publication No. 2014/0060212 which also teaches various configurations of strain sensors28mounted to a crank. Alternatively, the strain sensors28can be disposed on the crankshaft16C. For example, U.S. Patent Application Publication No. 2015/0120119 discloses mounting a strain sensor or torque sensor onto a crankshaft. As another alternative, the strain sensors28can be disposed on the bicycle pedal P that is provided with the bicycle crank assembly12. For example, U.S. Patent Application Publication No. 2016/0052583 discloses various configurations of strain sensors that are disposed on a pedal spindle. In the illustrated embodiment, the strain sensors28are connected to corresponding sensor circuits30that are configured to interpret the strain signal(s) to generate pedaling force information that is transmitted to the cycle computer CC via the wireless communication device. The operation of the strain sensors28and the sensor circuits30can be similar to that described in U.S. Pat. No. 10,475,303 and will not be further described herein. Referring toFIGS.2to4, the detecting system26of the illustrated embodiment preferably further comprises the bicycle electric device14that is provided to the bicycle crank assembly12. The electric device14includes a housing unit32that is detachably mounted to the crank16. Alternatively, the housing unit32can be fixedly mounted to the crank16. In the illustrated embodiment, the electric device14is disposed on a sprocket mounting portion of the crank16. It will be apparent to those skilled in the art from this disclosure that the bicycle electric device14can be located on various locations of the crank16as needed and/or desired. The bicycle electric device14comprises an electronic indicator34that is configured to generate a user signal indicating that the bicycle crank16is at a predetermined position, as will be discussed below. Therefore, the detecting system26further comprises the electronic indicator34configured to indicate that the crank16is in the predetermined position. Upon the crank16reaching the predetermined position, the detecting device18is configured to determine the inclination angle of the crank16when in the detecting state based on information relating to the image of the crank16that is captured by the camera20. As best seen inFIGS.3and4, the bicycle electric device14further comprises a sensor26that is configured to be provided on the bicycle crank16.
Therefore, the detecting system26further comprises the sensor26that is provided on the crank16. In the illustrated embodiment, the sensor is a position sensor36that is configured to detect an object (e.g., a magnet38) that is provided to the bicycle frame F in a detecting state where the crank16is arranged at a predetermined position with respect to the bicycle frame F, as best seen inFIGS.3and4. Further, as shown inFIG.2, the bicycle electric device14further comprises the magnet38that is configured to be mounted on the bicycle frame F. The magnet38actuates the position sensor36to indicate that the crank16is in the predetermined position. The detecting device18determines the angle of the crank16when the crank16is in the predetermined position, as will be further discussed below. The electronic indicator34, the sensor and the magnet38can be positioned on the bicycle crank assembly12in a manner similar to that described in U.S. Pat. No. 10,475,303. In the illustrated embodiment, the electric device further includes a second storage40, as seen inFIG.2. Therefore, the detecting system26further comprises the second storage40provided to the bicycle crank assembly12. The second storage40is configured to store the inclination angle transmitted by the detecting device18as a reference angle, as will be discussed below. The second storage40device is operatively coupled to the crank16. As discussed below, the second storage40stores various data and/or programs that are used in connection with providing pedaling information to a rider or a user. The second storage40device can be a ROM (Read Only Memory) device, a RAM (Random Access Memory) device, or a flash drive. The bicycle electric device14further comprises a wireless communicator42that enables the bicycle electric device14to wirelessly communicate with the detecting device18. Therefore, the detecting system26further comprises the wireless communicator42in electronic communication with the detecting device18so as to transmit a signal to the detecting device18. The signal indicates that the crank16is in the predetermined position. The detecting device18can be programmed to automatically determine the inclination angle upon receiving the signal, as will be discussed below. The wireless communicator42is preferably disposed on a printed circuit board PCB that is disposed in the housing unit32. As mentioned above, the housing unit32is mounted to the bicycle crank16. In this way, the wireless communicator42is operatively coupled to the bicycle crank16. The wireless communicator42can be equipped with Bluetooth technology, including Bluetooth low energy, or include the wireless protocol ANT+. The bicycle electric device14can also include an antenna (not shown) to transmit information from the bicycle electric device14and to receive information from the cycle computer CC and the detecting device18. The term “wireless communicator” as used herein includes a receiver, a transmitter, a transceiver, a transmitter-receiver, and contemplates any device or devices, separate or combined, capable of transmitting and/or receiving wireless communication signals, including shift signals or control, command or other signals related to some function of the component being controlled. The wireless communication signals can be radio frequency (RF) signals, ultra-wide band communication signals, or Bluetooth communications or any other type of signal suitable for wireless communications as understood in the bicycle field.
Here, the wireless communicator can be a two-way wireless communication unit having a receiver and a transmitter. As shown inFIG.2, the detecting system26of the illustrated embodiment preferably further comprises a cycle computer CC. The cycle computer CC is configured to wirelessly communicate with the bicycle electric device14and the detecting device18as discussed below. The cycle computer CC has a display that is configured to receive the angular force information calculated by the detecting device18and is configured to display the angular force information on the display. The cycle computer CC is in communication with the detecting device18and/or the electric device14to receive information regarding the condition of the bicycle crank16and to display pedaling information on the display of the cycle computer CC. In the illustrated embodiment, the detecting device18can include a mobile (external) device that is provided to be used with the bicycle10. Examples of the detecting device18include a smartphone, a tablet or a personal computer. Preferably, as stated, the detecting device18includes at least one software application that is installed to detect, measure and/or send information regarding the crank angle to the second storage40or to the cycle computer CC. The inclinometer22of the detecting device18measures the inclination angle of the crank16when the crank16is at a predetermined position, as will be further discussed below. Therefore, the detecting system26further comprises the inclinometer22configured to detect the inclination angle of the crank16. The electronic controller ECU is configured to determine the angle of the crank16based on the information relating to the image and the inclination angle. The detecting device18is preferably further provided with an accelerometer A and a gyroscope G. Hereinafter, the term “inclination angle” or “inclination state” of the crank16refers to an angle of the bicycle crank16(e.g., the crank arms16A and/or16B) with respect to a flat plane with the bicycle10disposed in an upright position on a flat (level) surface and with the bicycle crank16installed on the bicycle10. The bicycle10can be placed on ground having an incline as long as the incline is a flat surface, as will be further discussed below. It has been found that riders would like to be informed of the angular force components of the pedaling force during riding. In order to determine these angular force components, the inclination angle of the bicycle crank16may be required. The user can utilize the detecting device18having the inclinometer22to determine the inclination. The detecting device18is in communication with the bicycle electric device14and/or the cycle computer CC to transmit information regarding the calculated crank angle. The bicycle electric device14then transmits the information to the sensor circuit30that will process the information to generate angular force information related to pedaling. Alternatively, the cycle computer CC can also include a processor that receives information from the detecting device18regarding the crank angle. It will be apparent to those skilled in the bicycle art from this disclosure that the various electrical components provided on the bicycle10and the detecting device18can be in electrical communication in a variety of ways and routes, which are not limited to the embodiment shown.
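The angular force components mentioned above amount to resolving the measured pedaling force into a component tangential to the pedal's circular path (which produces propulsive torque) and a component directed along the crank, which is why the crank angle is needed. The Python sketch below shows one conventional way such a decomposition could be computed; the coordinate convention, the function name, and the numerical values are hypothetical illustrations and are not taken from this disclosure:

import math

def crank_force_components(force_fwd, force_up, crank_angle_deg):
    # crank_angle_deg is measured from top dead center, increasing in the
    # direction of forward pedaling. The crank unit vector (axis -> pedal) and
    # the tangential unit vector (direction of pedal travel) are built in a
    # (forward, up) coordinate frame.
    theta = math.radians(crank_angle_deg)
    crank_dir = (math.sin(theta), math.cos(theta))     # axis -> pedal
    tangent_dir = (math.cos(theta), -math.sin(theta))  # direction of travel
    tangential = force_fwd * tangent_dir[0] + force_up * tangent_dir[1]
    radial = force_fwd * crank_dir[0] + force_up * crank_dir[1]
    return tangential, radial

# Hypothetical example: 300 N pushed straight down with the crank 30 degrees
# past top dead center; only the tangential part produces propulsive torque.
print(crank_force_components(0.0, -300.0, 30.0))

In this hypothetical example, the 300 N downward push yields a tangential component of 150 N, with the remainder directed along the crank arm.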
The inclinometer22of the detecting device18is capable of measuring the inclination angle or the crank angle of the crank16when the crank16is at the predetermined angular position. The inclinometer22is capable of measuring the angle of the crank16with respect to the force of gravity. External accelerations like rapid motions, vibrations or shocks can introduce errors in the tilt measurements of the inclinometer22. Thus, the inclinometer22includes at least one of the accelerometer A and the gyroscope G to overcome this problem. The electronic controller ECU of the detecting device18includes an external device processor that is programmed to use one or both of the signals produced by the accelerometer A and the gyroscope G to obtain a value of the crank angle. The inclinometer22can be controlled by the electronic controller ECU to determine the inclination angle of the crank16once the camera20is operated to capture the image of the crank16. Thus, the electronic controller ECU of the detecting device18is configured to obtain information relating to the image of the crank16where the crank16is arranged at the predetermined position. That is, the electronic controller ECU is programmed to determine the inclination angle of the crank16from the image acquired by the camera20. The electronic controller ECU is preferably a microcomputer that includes one or more processor and the first storage24(i.e., a computer memory device). The memory is any computer storage device or any computer readable medium with the sole exception of a transitory, propagating signal. For example, the memory can be nonvolatile memory and volatile memory, and can includes a ROM device, a RAM device, a hard disk, a flash drive, etc. As stated, the detecting device18includes the camera20configured to capture information regarding the bicycle crank16. In particular, the camera20can capture an image of the crank16with respect to the bicycle frame F. The camera20can also capture an image of the crank16with respect to the flat ground that supports the bicycle10, as seen inFIG.10. Therefore, the detecting device18is preferably equipped with one or more sensor(s) and camera circuitry capable of capturing still and video images. In the illustrated embodiment, the detecting device18further comprises a light detection and ranging detector (LIDAR44) configured to obtain the information relating to the image. The LIDAR44is capable of using light to track the position of objects. Specifically, the LIDAR44is capable of measuring how quickly light (specifically laser light) takes to hit the object (e.g., the crank16) and come back again, the position of that object can be determined. The LIDAR44is also capable of registering the angle of the reflected laser light to generate a three-dimensional image of an object that the LIDAR44is directed at. The images captured by the camera20and the LIDAR44can be processed to generate images by video codec(s), and/or the processor, and/or graphics hardware, and/or a dedicated image processing unit incorporated within the camera circuitry. The images captured by the camera20and/or the LIDAR44be stored in the memory and/or the first storage24of the detecting device18. The memory can include one or more different types of media used by processor, graphics hardware, and image capture circuitry to perform device functions. For example, memory may include memory cache, ROM, and/or RAM. 
The first storage24of the detecting device18can be any a non-transitory computer readable medium such as a ROM device, a RAM device, a hard disk, a flash drive, etc. The first storage24is configured to store settings, programs, data, calculations and/or results of the processor(s). That is, the electronic controller ECU can include a program or an application that controls the camera20to capture the image of the bicycle crank16once the bicycle crank16is in the predetermined position and to have the processor determine the inclination angle of the crank16based on the image. In the illustrated embodiment, the first storage24is configured to store at least one reference image46of the crank16. More particularly, the first storage24is configured to store a plurality of reference images46. The reference images at least include an outer shape of the crank16respectively, as will be further discussed below. For example, the reference images46can include an outline or silhouette that corresponds to an outer shape of the bicycle crank16, as seen inFIG.7. The reference images46can further include an outer shape, outline or silhouette of the bicycle10with the bicycle crank16installed thereon, as seen inFIG.9. Therefore, the detecting device18includes pre-stored reference images46that will be used to determine the inclination angle. The first storage24can also store media (e.g., audio, image and video files), computer program instructions or software, preference information, device profile information, and any other suitable data. The memory and/or the first storage24can be used to retain computer program instructions or code organized into one or more modules and written in any desired computer programming language. The processor of the electronic controller ECU can execute such computer program code by implementing one or more of the methods described herein. Therefore, the detecting device18preferably includes a software application that can carry out the measurements of the crank angle. Thus, the measuring of the angle of the crank16further includes calculating of the crank angle using the software application of the detecting device18. As stated above, if the bicycle10is on an incline, the crank angle can still be calculated by compensating for the incline. For example, the software application of the detecting device18can be programmed to compensate for the incline. The software application can perform the compensation mechanism by measuring the actual angle of the crank16and also measuring the tilt angle of the bicycle10caused by the incline. The desired crank angle can be calculated by taking the difference of the measured actual angle and the tilt angle. Therefore, the electronic controller ECU is configured to determine the angle of the crank16based on the information related to the image and the information detected by the camera20. As shown inFIGS.1,5and6, the detecting device18further includes an electronic display48that can display information regarding the reference images that are prestored in the first storage24, and/or live images captured by the camera20. The electronic display48can further display other information accessible by the processor of the electronic controller ECU. The electronic display48is preferably a touchscreen that is an assembly of both an input (‘touch panel’) and output (‘display’) device. The touch panel is normally layered on the top of an electronic visual display of an information processing system. 
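The incline compensation described above reduces to subtracting the measured bicycle tilt from the measured crank angle. The following is a minimal sketch of that arithmetic, assuming angles expressed in degrees; the function name and the wrap-around into [0, 360) are illustrative choices and not taken from the patent's software application.

```python
def compensated_crank_angle(measured_crank_deg: float, ground_tilt_deg: float) -> float:
    """Return the crank angle relative to a level surface.

    measured_crank_deg -- crank angle measured by the inclinometer (gravity-referenced)
    ground_tilt_deg    -- tilt of the bicycle caused by the incline it stands on
    """
    angle = measured_crank_deg - ground_tilt_deg
    # Normalize into [0, 360) so repeated measurements are directly comparable.
    return angle % 360.0

# Example: inclinometer reads 95 degrees while the bicycle sits on a 5 degree slope.
print(compensated_crank_angle(95.0, 5.0))  # -> 90.0
```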
The electronic display48can be an liquid-crystal display (LCD), active-matrix organic light-emitting diode (AMOLED) display, or an organic light-emitting (OLED) display. The user can give input or control the information processing system through multi-touch gestures by touching the screen with a special stylus or one or more fingers. The user can use the touchscreen to react to what is displayed and, if the software allows, to control how it is displayed; for example, zooming to increase the text size. The processor of the electronic controller ECU can be any suitable programmable control device capable of executing instructions necessary to carry out or control the operation of the many functions performed by the detecting device18(e.g., such as the processing of images captured by the camera20and/or LIDAR44). The processor can, for instance, control the electronic display48and receive user input from user interface which can take a variety of forms, such as a button, keypad, dial, a click wheel, keyboard, display screen and/or a touch screen. The processor can be a system-on-chip such as those found in mobile devices and include a dedicated graphics processing unit (GPU). The processor can be based on reduced instruction-set computer (RISC) or complex instruction-set computer (CISC) architectures or any other suitable architecture and may include one or more processing cores. The detecting device18is preferably further equipped with graphics hardware such as special purpose computational hardware for processing graphics and/or an assisting processor to process graphics information. The graphics hardware can include one or more programmable graphics processing units (GPUs). As stated, the electronic controller ECU is configured to determine the angle of the crank16based on the information acquired by the camera20and/or the LIDAR44. In particular, the electronic controller ECU of the detecting device18is configured to detect the inclination angle of the crank16and is capable of defining a reference line52based on the image captured by the camera20. The electronic controller ECU is configured to determine the angle based on the reference line52defined by the electronic controller ECU and the inclination angle that is detected by the electronic controller ECU. The electronic controller ECU is configured to determine the angle of the crank16based on the reference line52and the inclination angle, as described below. Referring now toFIGS.4to8, a method of detecting the condition of the bicycle crank assembly12will now be discussed. In particular, referring specifically toFIGS.4and8, a method of arriving at the predetermined position of the crank16will now be discussed. In the illustrated embodiment, the predetermined position of the crank16is a detecting state of the crank16. That is, the method comprises detecting the detecting state where the crank16is arranged at a predetermined position with respect to the bicycle frame F. Therefore, the detecting device18of the illustrated embodiment is configured to determine the inclination angle of the crank16in detecting state. As seen inFIGS.4and5, the user rotates the crank16to the predetermined angular position in step S1. In the illustrated embodiment, the reaching of the predetermined position is determined by the position sensor36that is provided on the crank16. For example, the user can rotate the crank16from the position ofFIG.4to the position ofFIG.5, which is an illustration of the predetermined position of the crank16. 
Therefore, the method for detecting the condition of the bicycle crank assembly 12 comprises detecting the predetermined position using the sensor 36 provided to the bicycle crank assembly 12. In other words, the method comprises detecting the detecting state where the crank 16 is arranged at the predetermined position with respect to the bicycle frame F. As stated, the position sensor 36 is actuated by the magnet 38, which is mounted on the bicycle frame F. In particular, when the position sensor 36 is within proximity of the magnet 38, the electronic indicator 34 indicates that the predetermined angular position has been reached. Thus, in step S2, the user receives an indication indicating that the crank 16 is at the predetermined angular position. The indication can be in many forms. For example, the indication can comprise lighting. The indication can also comprise a sound indication. The indication can also comprise a combination of lighting and sound indication. Therefore, the method for detecting the condition of the bicycle crank assembly 12 further comprises receiving the indication from the electronic indicator 34. The indication indicates that the crank 16 is in the predetermined position. Therefore, the predetermined position is a position in which the electronic indicator 34 generates the indication. In step S3, the user stops rotation of the crank 16 upon reaching the predetermined angular position. Now, the electronic indicator 34 sends the indication, and the user knows to maintain the crank 16 in the predetermined angular position. Steps S1 to S3 comprise the method of arriving at the predetermined position of the crank 16 that is the detecting state of the crank 16. In the illustrated embodiment, the method for detecting the condition of the bicycle crank assembly 12 preferably further comprises obtaining information relating to an image of the crank 16 with respect to the bicycle frame F using the detecting device 18 in the detecting state where the crank 16 is arranged at the predetermined position with respect to the bicycle frame F. In particular, the method can include a plurality of inclination angle determination methods, as seen in FIG. 6. That is, the electronic controller ECU of the detecting device 18 can be preprogrammed with one or more software applications for the user to select a preferred way of determining the crank angle. As shown in FIG. 6, the electronic controller ECU can be preprogrammed with at least four ways or methods of determining the inclination of the crank 16 using the detecting device 18. However, it will be apparent to those skilled in the art from this disclosure that the electronic controller ECU of the detecting device 18 is not limited to the methods listed. Thus, it will be apparent to those skilled in the bicycle field from this disclosure that the detecting device 18 can be programmed with additional methods of determining the angle of the crank 16 when the crank 16 is in the predetermined position as needed and/or necessary. For example, the user can select using a crank silhouette, shape or outline for determining the inclination of the crank 16. Therefore, the detecting system 26 proceeds to step S51 in which the user receives the reference image 46. Thus, the method for detecting the condition of the bicycle crank assembly 12 further comprises using the detecting device 18 to access the reference image 46 of the crank 16 after receiving the indication.
As stated, the reference image(s)46are prestored in the first storage24of the detecting device18and can include a crank silhouette, shape or outline, such as that shown inFIG.6. Thus, the reference image is accessed from the first storage24of the detecting device18. As seen inFIG.7, the electronic display48is configured to display at least one of a plurality of the reference images46A (two examples of references images46A and46B are illustrated inFIG.7). In particular, the first storage24can include different types or models of crank assemblies such that the user can select the appropriate crank model that corresponds with the crank16that is installed to the bicycle10, though only two are illustrated for simplicity. Thus, the electronic display48is configured to display at least one of the plurality of the reference images46. As shown, the electronic controller ECU is configured to control the electronic display48to display the first reference image46A that is selected from the plurality of reference images46. Thus, the electronic controller ECU is configured to control the electronic display48to display the first reference image46A that was selected from the plurality of reference images46displayed inFIG.6on the screen inFIG.7to compare the first reference image46A with the live image47of the crank16. Therefore, the electronic display48will display the reference image46with a live image47of the crank16that is being captured by the camera20, such as seen inFIG.5. The method for detecting the condition of the bicycle crank assembly12further comprises displaying the live image47of the crank16with the reference image46A concurrently on the electronic display48provided to the detecting device18, as seen inFIG.5. In other words, the electronic controller ECU of the detecting device18is configured to concurrently display the at least one reference image46A and the live image47of the crank16prior to capturing the image of the crank16. In step S51A, the user will align the live image47of the crank16screened by the camera20with the reference image46A on the electronic display48. For example, as shown inFIG.5, the live image47and the reference image46A are substantially aligned on the electronic display48. Thus, the method for detecting the condition of the bicycle crank assembly12further comprises comparing the reference image46A with the live image47of the crank16using the detecting device18. In step S6, once the reference image46and the live image47are aligned or substantially aligned, the camera20can capture the image of the crank16. The camera20can be configured to automatically capture the image once the detecting device18senses that the reference image46and the live image47are aligned or substantially aligned. Alternatively, the user can capture the image by operating the camera20. In step S7, the inclination angle is measured using the detecting device18based on the information related to the image of the crank16that was captured by the camera20. In the illustrated embodiment, measuring of the angle of the crank16includes using the detecting device18while the crank16is in the predetermined angular position. As stated, the inclinometer22is configured to detect the inclination angle of the crank16. The electronic controller ECU is configured to determine the inclination angle of the crank16with respect to the bicycle frame F based on the information obtained by the electronic controller ECU and the inclination angle. 
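The automatic capture described above, which is triggered once the reference image 46 and the live image 47 are substantially aligned, could be realized with a simple overlap score between the reference silhouette and a thresholded live frame. The patent does not specify a matching method, so the following is only a sketch under that assumption; the mask representation, the intersection-over-union measure and the 0.9 threshold are illustrative.

```python
import numpy as np

def silhouette_overlap(reference_mask: np.ndarray, live_mask: np.ndarray) -> float:
    """Intersection-over-union between two boolean silhouette masks of the same shape."""
    intersection = np.logical_and(reference_mask, live_mask).sum()
    union = np.logical_or(reference_mask, live_mask).sum()
    return float(intersection) / float(union) if union else 0.0

def is_substantially_aligned(reference_mask, live_mask, threshold: float = 0.9) -> bool:
    """Trigger capture once the live crank silhouette covers the reference closely enough."""
    return silhouette_overlap(reference_mask, live_mask) >= threshold

# Example with toy 4x4 masks standing in for the reference and live silhouettes.
ref = np.array([[0, 1, 1, 0]] * 4, dtype=bool)
live = np.array([[0, 1, 1, 0]] * 3 + [[0, 1, 0, 0]], dtype=bool)
print(silhouette_overlap(ref, live), is_substantially_aligned(ref, live))
```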
Thus, the measuring of the inclination angle of the crank16preferably further includes calculating of the crank angle using software application of the detecting device18. In step S8, the user then transmits information with respect to the angle of the crank16using the detecting device18to the crank16which has the second storage40that will store the inclination angle as a reference angle. Therefore, the detecting device18is configured to transmit the inclination angle to the second storage40device that is provided to the bicycle crank16. The second storage40is configured to store the inclination angle as the reference angle. The reference angle will be used by the sensor circuit to determine the strain forces acting on the bicycle pedals P. In the illustrated embodiment, the crank16is an example of a bicycle component having the storage device that can receive the crank angle information. It will be apparent to those skilled in the art from this disclosure that the condition of the bicycle crank16can be transmitted to another bicycle component having a storage. For example, the detecting device18can transmit the crank angle information to the cycle computer CC for displaying on the display of the cycle computer CC. Referring toFIGS.8to10, after steps S1to S3, the user can select another preferred way of determining the crank angle from the methods that are listed inFIG.6. For example, the user can select using a bicycle silhouette, shape or outline for determining the inclination of the crank16. Therefore, the detecting system26proceeds to step S53in which the user can receives a reference image50that includes a bicycle silhouette, shape or outline. The reference image50of the bicycle10can be prestored in the first storage24of the detecting device18and can include a bicycle silhouette, shape or outline, such as that shown inFIG.9. Alternatively, in this method, the reference image50can be a live image that is detected by the camera20in step S52. Thus, step S52includes either accessing the reference image50from the first storage24or detecting the reference image50as a live image using the camera20. In step S52A, the detecting system26will concurrently display the reference image50of the bicycle outline concurrently with a live image47of the crank16that is detected by the camera20, as seen inFIG.10. Therefore, the electronic display48displays the reference image50of the bicycle outline concurrently with the live image of the crank16. In particular, the electronic display48concurrently displays the reference image50and the live image47of the crank16prior to capturing the image of the crank16. By concurrently displaying the images47and50of the crank16and the bicycle10on the electronic display48, the detecting system26can compare the reference image50of the bicycle10with the live image47of the bicycle10having the crank16using the electronic detecting device18. In step S52B, the camera20can capture the image of the crank16. The user can capture the image by operating the camera20. In step S52C, the user creates a reference line52for the detecting device18. In particular, the user creates the reference line52on the electronic display48after the camera captures the image of the crank16, as seen inFIG.10. The inclination angle of the crank16can be determined based on the reference line52. As seen inFIG.10, the reference line52can be created on the bicycle frame F. 
For example, the reference line52can include a first reference line52A that connects the axles of the front and rear wheels in a straight line. The reference line52can include a second reference line52B that connects two corresponding portions of the front and rear wheels (e.g., a top point of the front and rear wheels), as seen inFIG.10. Therefore, the method for detecting the condition of the bicycle crank assembly12further comprises creating a reference indication that includes creating the reference line52on the electronic display48. Alternatively speaking, the method further comprises creating the reference indication (e.g., the reference line52) on the detecting device18. Thus, the electronic control is configured to detect the inclination angle of the crank16and define the reference line52based on the image received by the camera20. It will be apparent to those skilled in the bicycle art from this disclosure that the reference line52can also include additional reference lines or reference indications connecting different parts of the bicycle10just so long as the reference line52forms a flat plane on the electronic display48. Next, the detecting system26proceeds to step S7, which the crank angle is measured using the detecting device18, as described for step S7above. The method for detecting the condition of the bicycle crank assembly12further comprises measuring the inclination angle using the detecting device18based on the information and the reference indication. In particular, the electronic controller ECU of the detecting device18can be programmed with a protractor that can calculate or detect the angle between the crank16and the reference line52to determine the angle of the crank16. Thus, the electronic controller ECU of the detecting device18is configured to detect the inclination angle of the crank16and define the reference line52based on the image captured by the camera20. In step S8, the user then transmits information with respect to the angle of the crank16from the detecting device18to the crank16which has the second storage40device that will store the crank angle information, as described for step S8, above. Referring toFIGS.8and10, after steps S1to S3, the user can select another preferred way of determining the crank angle from the methods that are listed inFIG.6. For example, the user can select using a live image of the surrounding area A of the bicycle10for determining the inclination of the crank16. In step S53, the user can receive an image (similar to the reference image50) of the bicycle silhouette, shape or outline along with a surrounding area A of the bicycle10. In the illustrated embodiment, the surrounding area A of the bicycle10will at least include a surface on which the bicycle10sits in an upright condition, as seen inFIG.10. In particular, the camera20can detect the image of the surrounding area A in step S53. In step S53A, the detecting system26will concurrently display the surrounding area A of the bicycle concurrently with a live image (similar to the live image47) of the crank16on the electronic display48, as seen inFIG.10. Therefore, the electronic display48displays the surrounding area A having the bicycle10concurrently with the live image47of the crank16. In particular, the electronic display48displays the live image47of the surrounding area A concurrently with the live image of the crank16. In step S53B, the camera20can capture the image of the surrounding area A. The user can capture the image by operating the camera20. 
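The programmed protractor mentioned above amounts to computing the angle between two lines drawn on the captured image: the crank axis and the reference line 52. The following is a minimal sketch of that calculation, assuming each line is specified by two pixel endpoints; the helper names, the coordinate convention and the example values are illustrative rather than taken from the patent.

```python
import math

def line_angle_deg(p1, p2) -> float:
    """Orientation of the line through points p1 and p2, in degrees (image x-axis = 0)."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def crank_angle_to_reference(crank_pts, reference_pts) -> float:
    """Angle of the crank relative to the reference line, folded into [0, 180)."""
    diff = line_angle_deg(*crank_pts) - line_angle_deg(*reference_pts)
    return diff % 180.0

# Example: reference line drawn between the wheel axles (horizontal in the image),
# crank line picked from two points along the crank arm.
reference = ((0, 0), (100, 0))
crank = ((50, 10), (80, 62))
print(round(crank_angle_to_reference(crank, reference), 1))
```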
In step S53C, the user creates a third reference line52C on the electronic display48on the captured image, such as the third reference line52C seen inFIG.10. The inclination angle of the crank16can be determined based on the third reference line52C. As seen inFIG.10, the third reference line52C can be drawn along the surface of the ground on which the bicycle10sits. Therefore, in step S53B, the reference line52C is created on the surrounding area A, with the reference line52C being considered a reference indication, similar to that described in steps S52to S52B above. The detecting system26then proceeds to step S7in which the crank angle is measured using the detecting device18, as described for step S7above. The method for detecting the condition of the bicycle crank assembly12further comprises measuring the inclination angle using the detecting device18based on the information and the reference indication. Thus, the electronic controller ECU of the detecting device18is configured to detect the inclination angle of the crank16and define the reference line based on the image captured by the camera20. The electronic controller ECU can similarly use the programmed protractor to determine the angle of the crank16with respect to the reference line52C. In step S8, the user then transmits information with respect to the angle of the crank16from the detecting device18to the crank16which has the second storage40device that will store the crank angle information, as described for step S8, above. Referring toFIG.8, after steps S1to S3, the user can select another preferred way of determining the crank angle from the methods that are listed inFIG.6. For example, the user can select using LIDAR44of the detecting device18for determining the inclination of the crank16. In step S54, the user accesses the LIDAR44system of the detecting device18. Then, in step S54A the user directs the LIDAR44to the crank16of the bicycle crank assembly12. Therefore, the method for detecting the condition of the bicycle crank assembly12further comprises accessing the LIDAR44on the detecting device18to measure the inclination angle of the crank16. As stated, the LIDAR44can create an image of the crank16based on the distance from the detecting device18to the crank16. Thus, the inclination angle is determined based on the image of the crank16created by the LIDAR44. In step S6, the processor of the electronic controller ECU can create/capture an image of the crank16based on the information received by the LIDAR44. In step S7, the crank angle is measured using the detecting device18, as described for step S7above. In step S8, the user then transmits information with respect to the angle of the crank16from the detecting device18to the crank16which has the second storage40device that will store the crank angle information, as described for step S8, above. In the illustrated embodiment, the electronic controller ECU of the detecting device18can alternatively be programmed to determine the crank angle by a default method. That is, the user does not select a preferred crank angle measuring method, as seen inFIG.6. Rather, the detecting system26proceeds directly to a default program. As seen inFIG.11, a first set of default steps are illustrated: S100, S101, S102, S103, S104, S105, S106and S107. Steps100to S107correspond to the steps S1, S2, S3, S51, S51A, S6, S7and S8, ofFIG.8respectively and will not be further described for brevity. 
The electronic controller ECU can be preprogrammed with the set of default steps to determine the crank angle. As seen inFIG.12, a second set of default steps are illustrated: S200, S201, S202, S203, S204, S205, S206, S207and S208. Steps200to S208correspond to the steps S1, S2, S3, S52, S52A, S52B, S52C, S7and S8, ofFIG.8, respectively. The electronic controller ECU can be preprogrammed with the second set of default steps to determine the crank angle. As seen inFIG.13, a third set of default steps are illustrated: S300, S301, S302, S303, S304, S305, S306, S307and S308. Steps S300to S308correspond to steps S1, S2, S3, S53, S53A, S53B, S53C, S7and S8ofFIG.8, respectively. The electronic controller ECU can be preprogrammed with the third set of default steps to determine the crank angle. As seen inFIG.14, a fourth set of default steps are illustrated: S400, S401, S402, S403, S404, S405, S406and S407. Steps S400to S407correspond to steps S1, S2, S3, S54, S54A, S6, S7and S8ofFIG.8, respectively. The electronic controller ECU can be preprogrammed with the fourth set of default steps to determine the crank angle. In understanding the scope of the present disclosure, the term “comprising” and its derivatives, as used herein, are intended to be open ended terms that specify the presence of the stated features, elements, components, groups, integers, and/or steps, but do not exclude the presence of other unstated features, elements, components, groups, integers and/or steps. The foregoing also applies to words having similar meanings such as the terms, “including”, “having” and their derivatives. Also, the terms “part,” “section,” “portion,” “member” or “element” when used in the singular can have the dual meaning of a single part or a plurality of parts unless otherwise stated. As used herein, the following directional terms “frame facing side”, “non-frame facing side”, “forward”, “rearward”, “front”, “rear”, “up”, “down”, “above”, “below”, “upward”, “downward”, “top”, “bottom”, “side”, “vertical”, “horizontal”, “perpendicular” and “transverse” as well as any other similar directional terms refer to those directions of a bicycle in an upright, riding position and equipped with the detecting system. Accordingly, these directional terms, as utilized to describe the detecting system should be interpreted relative to a bicycle in an upright riding position on a horizontal surface and that is equipped with the detecting system. The terms “left” and “right” are used to indicate the “right” when referencing from the right side as viewed from the rear of the bicycle, and the “left” when referencing from the left side as viewed from the rear of the bicycle. Also it will be understood that although the terms “first” and “second” may be used herein to describe various components these components should not be limited by these terms. These terms are only used to distinguish one component from another. Thus, for example, a first component discussed above could be termed a second component and vice-a-versa without departing from the teachings of the present disclosure. 
The term “attached” or “attaching”, as used herein, encompasses configurations in which an element is directly secured to another element by affixing the element directly to the other element; configurations in which the element is indirectly secured to the other element by affixing the element to the intermediate member(s) which in turn are affixed to the other element; and configurations in which one element is integral with another element, i.e. one element is essentially part of the other element. This definition also applies to words of similar meaning, for example, “joined”, “connected”, “coupled”, “mounted”, “bonded”, “fixed” and their derivatives. Finally, terms of degree such as “substantially”, “about” and “approximately” as used herein mean an amount of deviation of the modified term such that the end result is not significantly changed. While only selected embodiments have been chosen to illustrate the present disclosure, it will be apparent to those skilled in the art from this disclosure that various changes and modifications can be made herein without departing from the scope of the invention as defined in the appended claims. For example, unless specifically stated otherwise, the size, shape, location or orientation of the various components can be changed as needed and/or desired so long as the changes do not substantially affect their intended function. Unless specifically stated otherwise, components that are shown directly connected or contacting each other can have intermediate structures disposed between them so long as the changes do not substantially affect their intended function. The functions of one element can be performed by two, and vice versa unless specifically stated otherwise. The structures and functions of one embodiment can be adopted in another embodiment. It is not necessary for all advantages to be present in a particular embodiment at the same time. Every feature which is unique from the prior art, alone or in combination with other features, also should be considered a separate description of further inventions by the applicant, including the structural and/or functional concepts embodied by such feature(s). Thus, the foregoing descriptions of the embodiments according to the present disclosure are provided for illustration only, and not for the purpose of limiting the invention as defined by the appended claims and their equivalents. | 47,201 |
11858582 | DETAILED DESCRIPTION As illustrated inFIGS.1to5, an electric kickboard includes a wheel frame WF in which an electrically driven wheel WH is mounted. The wheel frame WF includes a deck FH on which a user stands, and a frame bracket FB is formed on the deck. A moving bar130that is bent in a radial direction when an external force is applied and then restored is formed at an upper end of the frame bracket FB. The moving bar130is bent within a range from 0° to 10° in a radial direction when an external force is applied and then restored again. The moving bar130is made of a material such as a coil spring or an elastic bar. A seat frame120is coupled to an upper end of the moving bar130, and a seat110is coupled to an upper end of the seat frame120. InFIG.3, an impact absorbing device100, which is separated from the electric kickboard, is disassembled into the seat110, the seat frame120, the moving bar130, and the frame bracket FB. As illustrated inFIG.3, the impact absorbing device100is formed by sequentially coupling the seat110, the seat frame120, the moving bar130, and the frame bracket FB. The seat frame120, as a main component of the impact absorbing device100, includes a seat bar121and a seat frame housing124. The seat frame120serves to absorb an impact force applied to the seat110by a spring built therein. As illustrated inFIGS.4and5, the seat frame housing124having an empty inner space is provided. The empty inner space is a seat bar mounting chamber125. The seat bar121and a spring guide SG are disposed in the seat bar mounting chamber125. Here, the seat bar121is assembled to the seat frame housing124, and the spring guide SG is formed at the moving bar130. A partition wall DW is formed at an inner wall of the seat frame housing124. The seat bar121includes a seat bar body121aand a seat bar support121b. The seat bar body121ahas an opened hollow lower portion, and a spring mounting chamber122is formed in the hollow portion. A first spring FS is installed in the seat bar mounting chamber125. The first spring FS is installed to surround the spring guide SG. The first spring is disposed between a bottom of the seat frame housing124and a lower end of the seat bar to elastically support the seat bar121. The seat bar support121bis integrated with the seat bar body121ato move together with the seat bar body121awhen the seat bar body121aascends and descends. However, ascending of the seat bar body121ais restricted as the seat bar body121ais caught by the partition wall DW. The first spring FS has a compression amount that is determined when a load is applied to the seat bar121according to a spring constant thereof. Since the spring constant decreases as a deflection increases when the same load is applied, a coil spring having a small spring constant is selected in order to sensitively react to load variation, and a coil spring having a great spring constant is selected in order to insensitively react to the load variation. A second spring SS is installed in the spring mounting chamber122. The second spring SS elastically supports the seat bar121such that an upper end thereof supports an upper end inner wall of the spring mounting chamber122, and a lower end thereof is supported by an upper end of the spring guide SG. 
A compressive force (impact force) applied to the seat bar 121 compresses the second spring SS mounted to the spring mounting chamber 122 as described above and the first spring FS mounted to surround the spring guide SG, and this impact force is absorbed as the first and second springs FS and SS react thereto. The compressive force applied to the seat bar 121 is absorbed by the elastic reaction force generated as described above. Thus, since the compressive force applied to the seat bar 121 is distributed between the first spring FS and the second spring SS, when the same impact force is applied, the compression length decreases, and the restoration distance also decreases, compared with a case of using one spring. That is, the compressive force (impact force) may be absorbed more smoothly, and the vertical vibration is reduced, compared with the case of using one spring. As the spring constant of the second spring SS is designed to be less than that of the first spring FS, when the load applied to the seat bar 121 is initiated, the second spring SS having the small spring constant first reacts sensitively to absorb the load, and when the load increases, the first spring FS and the second spring SS are compressed together to distribute the load and are then restored. Also, since the first spring FS and the second spring SS have different spring constants, the spring having a relatively slow reaction speed delays the reaction speed of the spring having a relatively fast reaction speed. The seat 110 is coupled to an upper end of the seat bar support 121b. The seat 110 has a shape suitable for supporting the weight of an electric kickboard user who sits on the seat while standing up. The seat 110 is configured to be inserted between the hips. The weight of the user is applied to the seat 110 through the hips, and then a compressive force is applied to the first spring FS and the second spring SS. Hereinafter, an operation of the impact absorbing device 100 according to the present invention will be described. As illustrated in FIGS. 1 to 5, when the user steps on the deck FH installed at both sides of the wheel frame WF, the impact absorbing device 100 fixed at an upper end of the wheel frame WF is inserted between the hips and both legs of the user, and the weight of the user is supported by the seat 110: a portion of the weight is loaded to the deck, and the rest of the weight is loaded to the seat 110 formed at the upper end of the impact absorbing device 100. Here, the seat 110 supports the weight of the user, and although the degree of support varies according to the elasticity of the springs, the load applied to the waist or a knee of the user is reduced by as much as the elasticity allows. Since the weight of the user is loaded only to the deck when the impact absorbing device 100 is not mounted to the wheel frame WF, the weight is concentrated on the knee or the waist. However, when the impact absorbing device 100 is provided, the weight of the user is loaded to the seat 110 formed at the upper end of the impact absorbing device 100, and the loaded weight is distributed by the seat 110. When the electric kickboard is driven in a state in which the weight is distributed to the deck and the seat 110, the wheel WH rotates and moves forward. When the electric kickboard moves forward, the load applied to the seat 110 that supports a portion of the weight is varied by changes of the road surface on which the wheel runs, speed and direction changes of the wheel, and shaking of the user standing on the deck.
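For small deflections, the load sharing between the first spring FS and the second spring SS described above can be approximated by treating both coils as compressed by the same seat-bar travel, i.e., as springs acting in parallel. The patent gives no spring constants, so the following sketch uses assumed values; the function name and the numbers are illustrative only.

```python
def parallel_spring_response(load_n: float, k_first: float, k_second: float):
    """Deflection and per-spring load share when both springs compress by the same amount.

    load_n   -- compressive force applied to the seat bar (newtons)
    k_first  -- spring constant of the first spring FS (N/mm)
    k_second -- spring constant of the second spring SS (N/mm)
    """
    k_total = k_first + k_second            # parallel springs add their stiffness
    deflection_mm = load_n / k_total        # smaller travel than either spring alone
    share_first = k_first * deflection_mm   # stiffer spring carries more of the load
    share_second = k_second * deflection_mm
    return deflection_mm, share_first, share_second

# Example (assumed values): 600 N rider impulse, FS = 40 N/mm, SS = 10 N/mm.
print(parallel_spring_response(600.0, 40.0, 10.0))  # 12 mm travel, 480 N / 120 N split
```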
That is, the load applied to the seat110is varied many times for a short time less than 1 second. Since the variation of the load applied to the seat may be generated several tens of times even during one second, the seat110is shaken left and right directions or vibrated in a vertical direction when the wheel runs. Since the wheel WH moves forward, an impact force caused by a slight height difference on the road surface or foreign substances existing on the road surface is directly transmitted to the seat110. Thus, when the wheel runs, the seat momentarily moves in the vertical direction to be vibrated while the wheel runs. When the impact force is not absorbed when the seat110is vibrated in the vertical direction, the impact force transmitted to the seat is directly transmitted to the hips of the user, and since the impact force transmitted to the hips is directly transmitted again to the knees, waist, neck, and head of the user, the user receive the physical impact as much. That is, when the strong impact force transmitted to the seat110is not absorbed, the user may be hurt and receive a stress due to repeated strong impacts to feel a sense of fatigue. However, since the seat110smoothly absorbs the impact force, the physical impact applied to the user may be relieved, and the stress may be prevented. Hereinafter, a process of absorbing the impact force applied to the seat110while the wheel is driven will be described. The impact force applied to the wheel WH is transmitted to the seat110through the wheel frame WF, the frame bracket FB, the moving bar130, and the seat bar121. When the wheel WH meets a descending stepped portion while passing a road surface, the entire electric kickboard moves down, but the human body on the seat110momentarily floats and is dropped with an accelerating force due to the gravity to apply greater load to the seat than that before passing the stepped portion. When the great load is applied to the seat, as the seat bar121is pressed, the first spring FS is compressed, and the second spring SS is simultaneously compressed to momentarily absorb the load. Since the moving bar130may be restored by moving by 0° to 10° in all radial directions due to flexibility thereof, when eccentricity is generated when the direction or speed of the electric kickboard is changed, the moving bar130is restored by elastically moving in the radial direction to allow smooth riding. As described above, when the impact force arrives at the seat110as the wheel WH moves forward, the impact is absorbed as the seat bar121simultaneously compresses the first spring FS and the second spring SS. Thus, since riding in a state in which the weight is loaded to the seat while standing up is possible, riding the electric kickboard is not difficult even for a long time riding, and the impact force applied to the knees, the waist, or the neck is remarkably reduced to allow comfortable riding. FIGS.7,8, and9are views illustrating a state in which a self-power generating device using a weight of a user is installed on a seat frame housing124according to another embodiment of the present invention. One pair of rack moving grooves410, which face each other, are opened in a longitudinal direction at upper end both sides of the seat frame housing124. 
Stator bodies 420 are coupled to an outer surface of the seat frame housing 124 so as to be disposed at central portions of the rack moving grooves 410. Each of the stator bodies 420 has a rounded inner surface so as to be closely attached to the cylindrical seat frame housing 124, and a stator 430 around which a coil is wound is formed at each of both inner sides of each of the stator bodies 420. Rotators 440 are rotatably installed to the stators 430, respectively, and the rotators 440 are coupled with each other in an integrated manner by a rotation shaft 450. A one-way gear 460, which transmits a load only when rotating in a forward direction and rotates idly when rotating in a reverse direction, is formed between the rotation shaft 450 and the rotator 440. A pinion gear 470 is formed at a central portion of the rotation shaft 450, and a rack gear 480 engaged with the pinion gear 470 is formed at each of both side surfaces of a seat bar support 121b. The self-power generating device is electrically connected with a built-in battery so as to supply generated electricity to the battery and accumulate it there. The seat bar support 121b is vertically vibrated when the user of the electric kickboard operates the electric kickboard to move in a state in which the user sits on the seat 110 while standing up, and a pressing force is repeatedly applied in proportion to the load of the user whenever vibration occurs because the weight is loaded to the seat 110. When the seat bar support 121b descends, the pinion gear 470 engaged with the rack gear 480 formed at the seat bar support 121b rotates. When the pinion gear 470 rotates, the rotator 440 rotates through the rotation shaft 450, and since the rotator 440 rotates in the stator 430, electricity is generated by electromagnetic induction. When the seat bar support 121b ascends, the pinion gear 470 engaged with the rack gear 480 rotates in reverse. When the pinion gear 470 rotates in reverse, no load is transmitted to the rotator 440 because the pinion gear 470 rotates idly due to the one-way gear 460 disposed between the rotation shaft 450 and the rotator 440. As described above, the rotator 440 rotates to produce electricity only when the seat bar support 121b descends. The generated electricity is supplied to the battery built in the electric kickboard to be used or accumulated. As described above, since electricity is produced and accumulated by the self-power generating device, the driving distance of the electric kickboard increases remarkably compared with a case of using only the battery built in the electric kickboard. According to the present invention, the impact absorbing device capable of supporting the weight of the user who stands up is installed to appropriately absorb the impact applied to the knee or the waist of the user while riding the electric kickboard, and the seat elastically absorbs shakings or vibrations generated when the electric kickboard is driven while supporting the weight of the user, thereby remarkably reducing the impact applied to the user through the seat. Also, since the self-power generating device is mounted, electricity may be continuously produced during riding, and the produced electricity may be supplied to and accumulated in the battery built in the electric kickboard. Although the embodiments of the present invention have been described, it is understood that the present invention should not be limited to these embodiments but various changes and modifications can be made by one of ordinary skill in the art within the spirit and scope of the present invention as hereinafter claimed. | 13,595
11858583 | DETAILED DESCRIPTION OF THE DISCLOSURE It should be understood that the term “plurality,” as used herein, means two or more. The term “longitudinal,” as used herein means of or relating to a length or lengthwise direction2, for example a direction running along a length of a tube8as shown inFIG.4, but is not limited to a linear path, for example if the tube is curved or curvilinear. The term “lateral,” as used herein, means situated on, directed toward or running in a side-to-side direction. The term “coupled” means connected to or engaged with, whether directly or indirectly, for example with an intervening member, and does not require the engagement to be fixed or permanent, although it may be fixed or permanent. The terms “first,” “second,” and so on, as used herein are not meant to be assigned to a particular component so designated, but rather are simply referring to such components in the numerical order as addressed, meaning that a component designated as “first” may later be a “second” such component, depending on the order in which it is referred. It should also be understood that designation of “first” and “second” does not necessarily mean that the two components or values so designated are different, meaning for example a first direction may be the same as a second direction, with each simply being applicable to different components. The terms “upper,” “lower,” “rear,” “front,” “fore,” “aft,” “vertical,” “horizontal,” “right,” “left,” “inboard,” “outboard” and variations or derivatives thereof, refer to the orientations of an exemplary bicycle50, shown inFIG.1, from the perspective of a user seated thereon. The term “transverse” means non-parallel. The terms “outer” and “outwardly” refers to a direction or feature facing away from a centralized location, for example the phrases “radially outwardly,” “radial direction” and/or derivatives thereof, refer to a feature diverging away from a centralized location, for example a central axis4of the tube8as shown inFIG.7. Conversely, the terms “inward” and “inwardly” refers to a direction facing toward the centralized or interior location. The term “subassembly” refers to an assembly of a plurality of components, with subassemblies capable of being further assembled into other subassemblies and/or a final assembly, such as the bicycle50. FIG.1illustrates one example of a human powered vehicle on which a bicycle subassembly, shown as a front fork assembly60, may be implemented. In this example, the vehicle is one possible type of bicycle50, such as a mountain bicycle. The bicycle50has a frame52, handlebars54near a front end of the frame, and a seat or saddle56for supporting a rider over a top of the frame. The bicycle50also has a first or front wheel58carried by a front fork subassembly60supporting the front end of the frame, the front fork subassembly60constructed in accordance with the teachings of the present disclosure. The bicycle50also has a second or rear wheel62supporting a rear end of the frame52. The rear end of the frame52may be supported by a rear suspension component61, such as a rear shock. The bicycle50also has a drive train64with a crank assembly66that is operatively coupled via a chain68to a rear cassette70near the hub providing a rotation axis of the rear wheel62. The crank assembly66includes at least one, and typically two, crank arms75and pedals76, along with at least one front sprocket, or chain ring77. 
A rear gear change device37, such as a derailleur, is disposed at the rear wheel62to move the chain68through different sprockets of the cassette70. In one embodiment, a front gear changer device, such as a derailleur, may be provided to move the chain68through multiple sprockets of the crank assembly. In the illustrated example, the saddle56is supported on a seat post subassembly80, including a tube81having an end portion received in a top of a frame seat tube89of the frame, which defines a socket. A clamping ring91may be tightened to secure the upper seat tube81to the lower frame seat tube89. InFIG.1, the arrow A depicts a normal riding or forward moving direction of the bicycle50. While the bicycle50depicted inFIG.1is a mountain bicycle, the front fork assembly60, including the specific embodiments and examples disclosed herein as well as alternative embodiments and examples, may be implemented on other types of bicycles. For example, the disclosed front fork assembly60may be used on road bicycles, as well as bicycles with mechanical (e.g., cable, hydraulic, pneumatic, etc.) and non-mechanical (e.g., wired, wireless) drive systems. Now referring toFIGS.2and3, the front suspension element, or front fork assembly60ofFIG.1, is shown as isolated from the rest of the bicycle. The front fork assembly60includes a steering tube83configured for attachment to the handlebars54and the bicycle frame52. The front fork assembly60also includes at least one leg configured for rotatable attachment to a front wheel. In the displayed embodiment, the front fork assembly60includes a first leg104and a second leg106. The at least one leg includes a suspension system. The suspension system may include both a damping system, or damper107, and a spring system109. The two systems function together to form the suspension system. In the illustrated embodiment, the first leg104includes the damper107and the second leg106includes the spring system109, although either leg may include the damper and/or spring system. In an embodiment, a front suspension element may include merely a single leg with a damper and spring included in the single leg. The first leg104and/or the second leg106may be constructed of telescoping bars or tubes8,16called stanchions. The first leg104and/or the second leg106may include an upper tube8or stanchion and a lower tube16or stanchion. In one embodiment, the lower tubes16of both the first leg104and the second leg106are formed of a single piece lower tube construction, which includes a bridge18configured to attach the two lower tubes16. The front fork assembly60also may include one or more wheel attachment features108, such as holes or dropouts configured for wheel hub attachment. The front fork assembly60may also include brake attachment features110, configured for attachment to wheel braking devices, such as disk brake calipers. For example, the brake attachment features may include raised protrusions and holes for fastener attachment to the calipers. In an embodiment, such as the illustrated embodiment, the wheel attachment features108and the brake features110are included on a front fork component that is connected to both legs. For example, the front fork component may be a single piece lower tube construction, or fork lower part111, which includes the pair of tubes16. The fork lower part may include wheel attachment features108and/or the brake features110. 
The single piece lower tube construction may be formed of a single material, such as a magnesium alloy, aluminum alloy, or other materials. In one embodiment, the single piece lower tube construction is formed through a casting processes. Further machining or forming processes may be used to form specific features, shapes, and/or surfaces of the single piece lower tube. The front fork assembly60may also include a piece forming the tops of one or both legs, such as a front fork crown112. The front fork crown112may be formed of a single piece that spans or forms the top of both the first leg104and the second leg106. In one embodiment, the front fork crown is formed of a single material, such as aluminum or other materials. In one embodiment, the front fork crown is formed through a forging processes. Further machining or forming processes may be used to form specific features, shapes, and/or surfaces of the front fork crown, including for example a pair of boss structures22defining downwardly opening sockets20dimensioned and shaped to receive end portions24of the upper tubes8. The term “socket” refers to a structure interfacing with and capturing the tube, and includes structures partially or entirely surrounding a circumference of the tube, and which may allow for the tube to extend entirely there through such that portions of the tube are exposed on both sides of the socket, or may capture the end portion, for example by way of a bottom wall or shoulder. The end portion24provides both an outer surface10and inner surface38. A steerer tube83is secured to a center hub portion93of the front fork crown and extends upwardly therefrom in the longitudinal direction2. The steerer tube83is inserted in and coupled to a head tube85component of the frame52with one or more bearings, otherwise referred to as a headset, which facilitates rotation between the steerer tube83and the head tube85. The front fork assembly60may also include a suspension element control device67. In one embodiment, the suspension element control device may be attached to, or at least partially integrated with, the front fork crown112. The suspension element control device67is configured to modify, adapt, or otherwise change a state of the suspension system. In the illustrated embodiment the suspension element control device is configured to change an operational state, or one or more operating characteristics, of the damper107. As shown inFIG.3, the damper107is a mechanical device configured to dissipate energy input to the suspension component due to impact or impulse forces being applied to the suspension component. Various dampers may include hydraulic, mechanical, or pneumatic damping mechanisms, or combinations of mechanical, pneumatic, and/or hydraulic damping mechanisms. As shown inFIGS.3-7, in one embodiment of a front fork assembly60, the end portion24of each upper tube8is press-fitted into one of the sockets20formed in the front fork crown112. Besides a press/interference fit, or in combination therewith, the tube8may be coupled to the front fork crown112using other techniques, such as by welding, with threads and/or adhesive, and/or combinations thereof. The end portion24has interior threads46that are threadably engaged by a cap44, or actuator housing, having exterior threads. 
The inner surface of the end portion may include a step portion48disposed radially outwardly from the inner surface38of a main tubular wall, with step portion48defining a second tubular wall configured with the interior threads46and having an upper annular rim that abuts a shoulder defined by the top of the socket20in the front fork crown112. The upper tube8is preferably made of extruded aluminum tubing, for example 7050 or 6066 aluminum alloy. It should be understood that the tube may be made of other materials, including other metals such as steel or titanium. Standard tubing extrusion, forging and drawing manufacturing processes typically result in neutral or near neutral residual compressive stress at an outer surface10,11of the tube8, and also the outer surfaces of the upper seat tube81and/or steerer tube83. When exposed to bending loads during use, the outer surface10of the tube8, or the outer surfaces of tubes81,83, may experience relatively high tensile stresses. The tube8is a unitary tube, meaning it is a one-piece monolithic tube with any and all portions thereof being integrally formed, for example and without limitation by the extrusion and/or drawing process. While separate unitary tubes may be coupled, for example by welding, threadable engagement, press-fit, and combinations thereof, the separate tubes joined in such a way do not define an overall unitary tube. As mentioned above, tube8is presented as an overall unitary tube. In another embodiment, the tube may be formed from multiple pieces joined or merged to create the overall tube. The disclosed bicycle subassemblies, including the disclosed front fork assembly60, seat assembly, and/or the bicycle including the interface between the fork steerer tube and frame, solve or improve upon the above-noted and/or other problems and disadvantages with existing and prior known subassemblies. For example, the disclosed front fork assembly60includes the tube8having different material properties, for example different residual compressive stresses at the outer surface10,11of the tube, and at various depths of the tube below the outer surface11, which can extend the fatigue life of the tube8and the front fork assembly60. In one embodiment, the different residual compressive stresses are introduced by cold working a portion (cold worked region)12of the tube8, while maintaining a remaining portion14of the tube in a standard extruded, forged and/or drawn form. In this way, the surface finish of the remaining portion14of the outer surface10is ideally suited to interface with other components, including for example the lower tube16. Likewise, regionalized portions of the seat tube81and steerer tube83, which are unitary, may be cold worked to introduce residual compressive stresses, while other portions are maintained in a standard extruded and/or drawn form. The fatigue life of the tube components exposed to bending may be improved by providing the cold worked region, or zone of residual compressive stress, at the outer surface11of the tube, and at various inwardly radial depths therefrom. The residual compressive stresses reduce the magnitude and impact of the tensile stresses incurred during cyclical bending, thus increasing the fatigue life of the tubing components, such as the tube8, upper seat tube81, and/or steerer tube83. In one embodiment, the tube8is configured with a first tube portion28disposed in the socket20and overlapping with the front fork crown112, and in particular the boss structure22.
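As a first-order illustration of the fatigue benefit described above, the stress at the outer surface during a bending cycle can be viewed as the superposition of the residual stress left by the cold working and the applied bending stress; this is a simplified uniaxial picture, not the full stress state of the tube:

$$\sigma_{\text{surface}}(t) \approx \sigma_{\text{residual}} + \sigma_{\text{bending}}(t), \qquad \sigma_{\text{residual}} < 0.$$

Because the residual term is compressive, both the peak tensile stress and the mean stress of each bending cycle are shifted downward by roughly the magnitude of the residual stress, which is the usual explanation for the longer bending fatigue life of a peened or otherwise cold worked surface.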
Due to the limited length (L1) of the overlap between the crown socket20and the tube portion28, the end portion24may experience relatively large bending stresses, with the largest bending stress typically experienced just below the overlapping front fork crown press-fit region. A second tube portion30extends downwardly from the socket20and the junction between the tube8and front crown112. The tube8includes a first region26, defining a connection zone202, having a first material property extending over at least a first portion32of the first tube portion28and at least a first portion34of the second tube portion30, and a second region36having a second material property extending over at least a portion of the remaining portion14of the second tube portion30. In one embodiment, the portions32and28are the same, although it should be understood that the portion32may have a length less than the length of the tube portion28, meaning that less than the entire length of the tube portion28overlapping with the socket20and front fork crown112has a material property different from the material property of the standard extruded and/or drawn tube, e.g., the remaining portion14. In one embodiment, the second region36may have a length less than the length of the remaining portion14. The first and second material properties are different. The phrase “material property” refers to the intensive or physical property of the material that is not dependent on the amount of material. In one embodiment, the first material property is a first residual compressive stress at an outer surface11of the tube and the second material property is a second residual compressive stress at the outer surface10of the tube. The first and second portions may also have different residual compressive stresses at other depths of the tube extending radially inwardly from the outer surfaces11,10. In another embodiment, the tube may have more than the first and second material properties, including for example a gradient material property in a transition portion42of the cold worked region12. The residual compressive stresses may be introduced through various techniques, including cold working the targeted cold worked region12of the tube. One type of cold working is effected by shot peening to induce a degree of cold work to the outer surface11of the tubing components, which increases the residual compressive stresses of the material. Other portions of the tube, such as the remaining portion14, may be selectively excluded from the cold working, for example by being masked to ensure that they are not subjected to the cold working. As such, it is possible to cold work only the cold worked region12of the tube exposed to greater bending loads and the associated tensile stresses, with the remainder portion14of the tube undergoing only the standard tubing manufacturing processes. In this way, the cold-working process is implemented on a specified region or regions of the upper tubes. It should be understood that the entirety of the tube may be cold-worked in the process of creating a tube work piece using only the standard tubing manufacturing processes. The tube work piece may then be stress relieved during this process such that the tube has relatively low residual stress. As such, the terms “non-cold worked” and “non-cold working,” and variations thereof, refer to the state of the tube after this initial formation (i.e., post tube creation), including any stress relief, even if the tube was subjected to earlier cold-working and retains some residual stress.
The terms “cold-worked” and “cold-working,” and variations thereof, refer to any subsequent/secondary processing and state of the tube to create additional residual stresses post-tube creation. In one embodiment, the cold worked region12includes the entirety of the area of the tube portion28that mates with the socket20and overlaps with the front fork crown112, the portion34below the crown and the additional transitional portion42. The portions34and42are collectively referred to as the exposed cold-worked portions103. In other embodiments, the cold worked region12may not include the entirety of the area overlapping with the crown, may not include any portion below the crown and/or may not include any transitional portion. In another example, residual compressive stresses may be selectively introduced through roller burnishing, the selective location thereof controlled through the disposition of the roller device during the procedure. The upper tube8defines the joint between the lower tube16and the front fork crown112. It may be desirable for the outer surface10of the upper tube, or at least the remaining portion14thereof, to remain relatively smooth such that the upper and lower tubes8,16experience minimal friction there between as the front fork assembly60, or legs104,106, compresses and extends during travel. Accordingly, in one embodiment, the cold working process is only applied to the upper end24of the tube in the cold worked region12, which includes the portion28that is pressed into the crown, and the portions34,42positioned below the crown. As further discussed below, the surface finish of the outer surface11of the cold worked region and the outer surface10of the non-cold worked region are different, such that the non-cold worked region outer surface10is smoother, or less rough, than the outer surface11of the cold worked region12. This roughness differential may be visible to the end user. In other embodiments, the cold worked region12or portions thereof, may be processed, for example by burnishing or deep rolling, such that the outer surfaces10,11have the same, or substantially similar finish after anodizing, meaning any differences between the surface finishes are not readily discernable to the naked eye. In one embodiment, the cold work process is applied before anodizing the regionalized location on the tube known to be highly stressed during application usage. After anodizing, the tubes8are then pressed into the mating parts, for example the front fork crown112. Various types of cold working the tube may be implemented, including shot peening, laser peening, cold rolling, cold forging, deep rolling, swaging and/or roller burnishing, which may be applied to specified regions of the tube and thereby impart residual compressive stresses leading to the higher overall bending fatigue life. The portion28overlapping with the socket20is defined by the first length (L1). The end portion24of the tube includes a cold worked region12having a second length (L2). As shown inFIGS.5and7, the second length (L2) is greater than the first length (L1), with the entirety of the overlapping region being cold worked. As mentioned, it should be understood that only a portion of the overlapping region may be cold worked, for example at the interface between the bottom rim of the crown and the upper tube. The tube has an overall third length (L3) that is greater than the second length (L2). In one embodiment, the non-cold worked region36is adjacent the cold worked region12. 
In one embodiment, the non-cold worked region36extends for the remaining length of the tube, which is the difference between the third and second lengths (L3-L2). It should be understood that, in other embodiments, additional regions of the tube, separate from the cold worked region12, may also be cold worked, for example at the interface with the lower tube, or along a lower end thereof. The cold worked region12includes a connection zone202and a transition zone204. The transition zone204is defined by the transition portion42of the tube8in one embodiment. The connection zone202, which includes the region26, includes residual compressive stress at the outer surface11of the tube. For example, the connection zone202may have a uniform residual compressive stress at an outer surface11of the tube. In another example, variable or compressive strength value gradients may also be introduced in zones. In one embodiment, the connection zone202has a fourth length (L4) greater than the first length (L1) and less than the second length (L2). The transition zone204has a gradient residual compressive stress defined along a fifth length (L5) of the tube at the outer surface of the tube. In one embodiment, the fifth length (L5) is equal to the difference between the second and fourth lengths (L2-L4). The gradient residual compressive stress transitions from the uniform residual compressive stress of the connection zone202at an outer surface11to a residual compressive stress of the non-cold worked second region36at the outer surface10of the tube8, which may be approximately zero (0), or slightly negative, in one embodiment. The gradient may be linear or non-linear, for example defined by a curve shaped concave up or down, including an exponential function. In one embodiment, the cold worked region12extends below a bottom rim of the front fork crown112, or crown joint, a distance (L6) of between 5 mm and 20 mm, or a length equal to the difference between L2and L1. Stated another way, in one embodiment, the second length (L2) is at least 5 mm greater than the first length (L1), or between 5 mm and 20 mm greater than the first length (L1). In this way, a single, unitary tube includes a portion14having material properties associated with a standard extrusion tubing process and at least another portion12having material properties associated with regionalized cold working, and in particular differing residual compressive stress on a single, unitary tube component, for example at an outer surface10,11thereof and at different radial depths. As shown inFIG.8, the cold work region26of the tube has a ˜400× increase in residual compressive stresses compared to the standard tube formation process on the same tube, e.g., the remainder portion14, or region36, at a depth of 0.050 mm from the outer surface10. The residual compressive stress of the non-cold worked region36is slightly negative, or approximately zero at the outer surface10of the tube, while the uniform residual compressive stress of the cold worked region26is at least minus 100 MPa at the outer surface10. The residual compressive stress may decrease (have a larger magnitude) the deeper the penetration into the thickness of the tube wall, or displacement radially inwardly, from the outer surface10. As such, the term “decrease” refers to the value becoming more negative even as the magnitude of the stress increases. For example as shown inFIG.8, the residual compressive stress may approach minus 400 MPa at a depth of 0.050 mm from the outer surface. 
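The transition from the uniform residual compressive stress of the connection zone202to the near-zero stress of the non-cold worked region36can be summarized as a gradient over the fifth length (L5). The sketch below assumes a linear profile and invented example values for the stresses and lengths; the disclosure equally contemplates non-linear (for example exponential) profiles:

```python
# Illustrative linear stress gradient across the transition zone 204.
# The stress and length values are assumed examples, not taken from the disclosure.
sigma_connection = -100.0    # MPa, uniform residual stress in the connection zone 202
sigma_non_cold_worked = 0.0  # MPa, approximately neutral in the non-cold worked region 36
L4 = 50.0                    # mm, connection zone length (assumed)
L2 = 55.0                    # mm, full cold worked region length (assumed)
L5 = L2 - L4                 # mm, transition zone length, per the relation stated above

def surface_stress(x_mm):
    """Residual stress at the outer surface, with x_mm measured from the tube end."""
    if x_mm <= L4:
        return sigma_connection
    if x_mm >= L2:
        return sigma_non_cold_worked
    frac = (x_mm - L4) / L5  # 0 at the connection zone boundary, 1 at the non-cold worked region
    return sigma_connection + frac * (sigma_non_cold_worked - sigma_connection)

print(surface_stress(52.5))  # midway through the transition zone: -50.0 MPa
```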
The residual compressive stress diminishes (magnitude decreases) and approaches the neutral compressive residual stress at a depth of between 0.25 mm and 0.30 mm. The measurements inFIG.8were recorded for samples of both a regionalized cold worked region and a standard extrusion region at four locations: 2 locations 180 degrees apart at an equidistant 40 mm from the respective (cold worked and non-cold worked) ends of the tube8. Residual stress measurements were taken using x-ray diffraction technique coupled with electrochemical polishing to characterize the following depths from the outer tube surface (mm): 0.0, 0.013, 0.025, 0.05, 0.1, 0.175, and 0.25. Measurements were in adherence to industry standards ASTM E915, ASTM E2860, and SAE HS784. The regionalized cold-working process improves bending fatigue results by increasing residual compressive stresses at the surface11of the material. For example, regionalized shot peening to the upper tube has shown to provide a 6.5× increase in fatigue cycles. The cold worked region12, and at least the region26, has a first roughness at an outer surface11of the tube, while the non-cold worked region36has a second roughness at the outer surface10of the tube. In one embodiment, the first roughness is rougher than the second roughness. In one embodiment, the second roughness has a first value of Ra 0.075 to 0.30 and the first roughness has a first value of Ra 1.475 to 4.080. The second roughness has a second value of Rz 0.75-3.75 and the first roughness has a second value of Rz 7.303-18.504. At least a portion of the cold-worked region has a compressive residual stress of between and including 0 MPa and negative 100 MPa at an outer surface of the tube. The outer surface10of the non-cold worked region36defines at least a portion of a sealing surface of the upper tube8, with the lower end portion of the upper tube8inserted into the upper end portion of the lower tube16. The lower tube16is movably engaged, e.g., through sliding, with the outer sealing surface10. In one embodiment, a method of manufacturing a bicycle component subassembly, such as the front fork assembly60, includes cold working portions32,34,42of the tube8to define the cold worked region12while avoiding cold working of the tube portion14to maintain the non-cold worked region36, and inserting the first tube portion32into the socket20of a bicycle component. At least the portion32of the cold worked region12and the socket20are overlapping, and at least the portions34,42, collectively portion103, of the cold worked region12are not overlapping with the socket20. The tube8may be masked to provide for the regionalized cold working, and may be further masked to provide for the transition region204having a residual compressive stress gradient. The various embodiments of regionalized cold working, with the differential residual compressive stresses, may be applied to other tubular bicycle components, including without limitation the steerer tube83and the seat tube81. The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. 
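The depth dependence described above (roughly minus 100 MPa at the surface, a peak magnitude near minus 400 MPa at about 0.05 mm, and a return toward neutral by roughly 0.25 mm to 0.30 mm) can be sketched by interpolating over the measurement depths listed. The intermediate stress values below merely follow that trend and are not the measured data:

```python
# Illustrative residual-stress-versus-depth profile for the cold worked region.
# Depths are the measurement depths listed above (mm); the stresses (MPa) are
# assumed values consistent with the described trend, not measured results.
depths = [0.0, 0.013, 0.025, 0.05, 0.1, 0.175, 0.25]
stress_cold_worked = [-100, -220, -320, -400, -260, -120, -20]
stress_as_extruded = [0, 0, 0, 0, 0, 0, 0]  # near-neutral baseline for region 36

def stress_at(depth_mm, depths, stresses):
    """Linear interpolation between tabulated depths."""
    for (d0, s0), (d1, s1) in zip(zip(depths, stresses), zip(depths[1:], stresses[1:])):
        if d0 <= depth_mm <= d1:
            return s0 + (s1 - s0) * (depth_mm - d0) / (d1 - d0)
    return stresses[-1]

print(stress_at(0.075, depths, stress_cold_worked))  # about -330 MPa, between the tabulated points
```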
Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be minimized. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive. While this specification contains many specifics, these should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of the invention. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. Similarly, while operations and/or acts are depicted in the drawings and described herein in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that any described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. One or more embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any particular invention or inventive concept. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, are apparent to those of skill in the art upon reviewing the description. The Abstract of the Disclosure is provided to comply with 37 C.F.R. § 1.72(b) and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. 
Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter. It is intended that the foregoing detailed description be regarded as illustrative rather than limiting and that it is understood that the following claims including all equivalents are intended to define the scope of the invention. The claims should not be read as limited to the described order or elements unless stated to that effect. Therefore, all embodiments that come within the scope and spirit of the following claims and equivalents thereto are claimed as the invention. Although embodiments have been described for illustrative purposes, those skilled in the art will appreciate that various modifications, additions, and substitutions are possible, without departing from the scope and spirit of the disclosure. It is therefore intended that the foregoing description be regarded as illustrative rather than limiting, and that it be understood that all equivalents and/or combinations of embodiments and examples are intended to be included in this description. Although certain parts, components, features, and methods of operation and use have been described herein in accordance with the teachings of the present disclosure, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all embodiments of the teachings of the disclosure that fairly fall within the scope of permissible equivalents. | 32,921 |
11858584 | The drawings referred to in this description should be understood as not being drawn to scale except if specifically noted. SUMMARY OF EMBODIMENTS One embodiment is a lower fork alignment system that includes: a brace detachably coupling a first lower tube and a second lower tube of a lower fork, wherein the first and second lower tubes include: first and second receiving adjustment mechanism configured for receiving corresponding interfacing mechanisms of the brace, wherein upon receipt of alignment features of the interfacing mechanisms, one of the first lower tube and the second lower tube is enabled to be adjusted in a horizontal direction along a horizontal axis, and the other of the first lower tube and the second lower tube is enabled to be adjusted in a vertical direction along a vertical axis. One embodiment includes a lower fork alignment system for a vehicle. According to the embodiment, the lower fork alignment system includes: a lower fork that includes: a brace coupling a first lower tube to a second lower tube, wherein the first lower tube includes: a first receiving adjustment mechanism disposed at a first end of the first lower tube, wherein the first receiving adjustment mechanism is configured for receiving a corresponding first interfacing mechanism of the brace, wherein the first receiving adjustment mechanism includes: a first alignment feature enabling a horizontal adjustment of the first lower tube relative to the second lower tube and along a horizontal axis. The second lower tube includes: a second receiving adjustment mechanism disposed at a first end of the second lower tube. The second receiving adjustment mechanism is configured for receiving a corresponding second interfacing mechanism of the brace. The second receiving adjustment mechanism includes: a second alignment feature enabling a vertical adjustment of the second lower tube relative to the first lower tube and along a vertical axis. One embodiment includes a brace for coupling a first lower tube to a second lower tube of a lower fork. The brace includes: a first end; a second end; a first interfacing mechanism disposed at the first end, wherein the first interfacing mechanism is configured for being inserted into a corresponding first receiving adjustment mechanism of a first lower tube of a lower fork and includes: at least one raised horizontal rectangular shape configured for being inserted into the first receiving adjustment mechanism of the first lower tube, wherein upon receipt, the first lower tube may be slid in a horizontal direction along a horizontal axis; and a second interfacing mechanism disposed at the second end, wherein the second interfacing mechanism is configured for being inserted into a corresponding second receiving adjustment mechanism of a second lower tube of a lower fork and includes: at least one raised vertical rectangular shape configured for being inserted into the second receiving adjustment mechanism of the second lower tube, wherein upon receipt, the second lower tube may be slid in a vertical direction along a vertical axis. One embodiment includes a lower tube of a lower fork of a vehicle. The lower tube includes: a receiving adjustment mechanism disposed at a first end of the lower tube, wherein the receiving adjustment mechanism is configured for receiving a corresponding interfacing mechanism of a brace. 
The receiving adjustment mechanism includes: an alignment feature enabling one of a horizontal adjustment and a vertical adjustment of the lower tube relative to another lower tube of the lower fork and adjusted along one of a horizontal axis and a vertical axis, respectively. DESCRIPTION OF EMBODIMENTS The detailed description set forth below in connection with the appended drawings is intended as a description of various embodiments of the present invention and is not intended to represent the only embodiments in which the present invention may be practiced. Each embodiment described in this disclosure is provided merely as an example or illustration of the present invention, and should not necessarily be construed as preferred or advantageous over other embodiments. In some instances, well known methods, procedures, objects, and circuits have not been described in detail as not to unnecessarily obscure aspects of the present disclosure. This patent application describes the invention in the context of an example embodiment of the lower front fork for a bicycle. However, the teachings and scope of the invention are equally applicable to other lower fork assemblies for any two-wheeled vehicle. Overview of Discussion Embodiments disclosed herein include a lower fork alignment system for coupling two lower fork legs together with a brace and enabling improved alignment of the left and right lower fork legs of a bicycle by allowing for mid-assembly horizontal, vertical and rotational adjustment of the lower fork legs via the brace. Significantly, the brace, the right lower tube and the left lower tube are assembled after being manufactured as separate pieces. During assembly, pieces may be individually adjusted to attain a desired alignment relative to each other. When properly aligned, the telescopic movement of the upper tubes within the lower tubes remains near or at the lowest friction level. Once the lower fork legs are positionally adjusted such that the lower fork legs are aligned within the same horizontal and vertical planes, then embodiments enable the stabilization of these adjusted positions through attachment features found in both the brace and the lower fork legs (e.g., bolt holes, screw holes, glue cavities, etc.). The following discussion will begin with a brief description of a conventional bicycle (SeeFIG.1), a conventional bicycle front lower fork manufactured as a single molded piece (SeeFIG.2), and the problems associated therewith. The discussion turns to a description of various embodiments, including: a brace that is attached to the lower fork legs using horizontal and vertical attachment bars (SeeFIGS.3-10); a brace that couples lower fork legs using a matching positive and negative spline features and is glued onto lower fork legs (SeeFIGS.11-14); and a center bridge that may be used as a brace or as part of a brace of prior described embodiments (SeeFIGS.15-17). FIG.1illustrates an off-road bicycle, or mountain bike100, including a frame114which is comprised of a main frame portion108and a swing arm portion116. The swing arm portion116is pivotally attached to the main frame portion108. The bicycle100includes front and rear wheels104and118, respectively, connected to the main frame108. A seat110is connected to the main frame108in order to support a rider of the bicycle100. The front wheel104is supported by a front fork102which, in turn, is secured to the main frame108by a handlebar assembly106. 
The rear wheel118is connected to the swing arm portion116of the frame114. A rear shock112is positioned between the swing arm116and the frame108to provide resistance to the pivoting motion of the swing arm116. Thus, the illustrated bicycle100includes suspension members between the front and the rear wheels104and118, respectively, and the frame114, which operate to substantially reduce wheel impact forces from being transmitted to the rider of the bicycle100. FIG.2illustrates the front fork102as being detached from the bicycle100ofFIG.1. The front fork102includes right and left legs,202and220, respectively, as referenced by a person in a riding position on the bicycle100. The right leg202includes a right upper tube208telescopingly received in a right lower tube204. Similarly, the left leg220includes a left upper tube214telescopingly received in a left lower tube218. A crown210connects the right upper tube208to the left upper tube214thereby connecting the right leg202to the left leg220of the front fork102. In addition, the crown210supports a steerer tube212, which passes through, and is rotatably supported by, the frame114of the bicycle100. The steerer tube212provides a means for connection of the handlebar assembly106to the front fork102, as illustrated inFIG.1. Each of the right lower tube204and the left lower tube218includes dropouts224and226, respectively, for connecting the front wheel104to the front fork102. An arch216connects the right lower tube204and the left lower tube218to provide strength and minimize twisting thereof. The right lower tube204, the left lower tube218and the arch216are formed as a unitary piece. As the right lower tube204, the left lower tube218and the arch216are formed as a unitary piece, it is not possible to make horizontal and vertical adjustments after the unitary piece has been cast. Therefore, due to the casting process, and the shrinking of the material post-casting, the resulting unitary piece has a tendency to bend. When portions of the unitary piece bend, such as when one or both of the right lower tube204and the left lower tube218slightly bend and/or twist, the right upper tube208and the left upper tube214have difficulty sliding in and out of the right and left lower tubes204and218, respectively, because the upper tubes208and214rub against the lower tubes204and218and friction develops therebetween. Example Lower Fork Alignment System Embodiments provide for the separate manufacture of a right lower tube, a left lower tube and a brace that connects the right and left lower tubes via receiving adjustment mechanisms, interfacing mechanisms and attachment mechanisms. The lower fork alignment system provided herein enables the horizontal and vertical alignment adjustment of the right lower tube, the left lower tube and/or the brace. Such alignment adjustment possibilities enable the right lower tube and the left lower tube to be adjusted to a desired alignment position relative to each other, thereby reducing the friction that develops due to improperly aligned lower fork tubes or the bending of lower forks due to post-casting stress. FIG.3illustrates a brace coupling a right and left lower tube together, along with receiving adjustment mechanisms attached thereto, in accordance with an embodiment. More specifically,FIG.3shows a front fork300including a right upper tube306telescopically engaged with the right lower tube302, and a left upper tube310telescopically engaged with a left lower tube316.
A right receiving adjustment mechanism304and a left receiving adjustment mechanism314are attached to the right lower tube302and the left lower tube316, respectively. In one embodiment, the receiving adjustment mechanisms are molded onto the lower tubes in a casting process. However, in another embodiment, the receiving adjustment mechanisms are attached to the lower tubes in a manner suitable for operation with the vehicle, such as with bolts, screws, glue, etc. In one embodiment, the right receiving adjustment mechanism304includes horizontal alignment features that constitute one or more depressions formed in a horizontal rectangular (bar) shape. The horizontal rectangular (bar) shape depression318formed within the right receiving adjustment mechanism304is configured to receive a raised horizontal rectangular (bar) shape formed on the brace308(as will be explained in detail with respect toFIGS.4-6), such that the raised horizontal rectangular shape partially fills the horizontal bar shape depression318. As shown, the right receiving adjustment mechanism304includes two horizontal rectangular shape depressions. It should be appreciated that embodiments may include one or more horizontal rectangular shape depressions. As will be explained herein in more detail, the raised horizontal rectangular shape of the brace308will fit into the horizontal rectangular shape depression318such that the brace308may be shifted/adjusted horizontally by sliding the brace308in a direction along the horizontal axis324. The raised horizontal rectangular shape is slid within the horizontal rectangular shape depression318until a desired distance between the right lower tube302and the left lower tube316is achieved. Of note, when providing the adjustment, the other end of the brace308is concurrently inserted into the left receiving adjustment mechanism314such that the brace308is coupled with the left lower tube316. In one embodiment, the left receiving adjustment mechanism314includes vertical alignment features that constitute one or more depressions formed in a vertical rectangular (bar) shape. The vertical rectangular (bar) shape depression320formed within the left receiving adjustment mechanism314is configured to receive a raised vertical rectangular (bar) shape formed on the brace308(as will be explained in detail with respect toFIGS.4-6), such that the raised vertical rectangular shape partially fills the vertical bar shape depression320. As shown, the left receiving adjustment mechanism320includes two vertical rectangular shape depressions. It should be appreciated that embodiments may include one or more vertical rectangular shape depressions. As will be explained herein in more detail, the raised vertical rectangular shape of the brace308will fit into the vertical rectangular shape depression320such that the brace308may be shifted/adjusted vertically by sliding the brace308in a direction along the vertical axis326. The raised vertical rectangular shape is slid within the vertical rectangular shape depression320until a desired vertical height of the left lower tube316relative to the right lower tube302is achieved. Of note, when providing the adjustment, the other end of the brace308is concurrently inserted into the right receiving adjustment mechanism304such that the brace308is coupled with the right lower tube302. It should also be appreciated that while the discussion focuses on the features (e.g., horizontal rectangular shape depression318, etc.) 
attributed to the right receiving adjustment mechanism304and features (e.g., vertical rectangular shape depression320, etc.) attributed to the left receiving adjustment mechanism314, the horizontal rectangular shape depression318, in one embodiment, may be formed in the left receiving adjustment mechanism314. Likewise, in one embodiment, the vertical rectangular shape depression320may be formed in the right receiving adjustment mechanism304. Further, the corresponding raised horizontal rectangular shape and the raised vertical rectangular shape are formed on either end of the brace308. FIG.3also shows attachment mechanisms312and322, these being screw holes (or a first and second set of screw holes, wherein each “set” may contain one or more screw holes) configured for receiving screws upon the insertion of the raised horizontal and vertical rectangular shapes into the horizontal rectangular shape and vertical rectangular shape depressions,318and320, respectively, formed in the right and left receiving adjustment mechanisms,304and314, respectively. It should be understood that the attachment mechanism for maintaining the brace308firmly attached to the lower fork300may be any suitable means. In non-limiting examples, bolts, screws and/or glue may be used to attach the brace308to the right and left receiving adjustment mechanisms,304and314, respectively. It should be noted that in one embodiment, the attachment mechanism that receives the glue is a cavity capable of receiving and holding the glue such that the glue may bond components together. FIG.4illustrates a rear view of an inner structure of the brace308comprising a right interfacing mechanism408at a first end406and a left interfacing mechanism404at the second end402, in accordance with an embodiment. In one embodiment, the right interfacing mechanism408has formed thereon at least one raised horizontal rectangular shape412, and the left interfacing mechanism404has formed thereon at least one raised vertical rectangular shape410.FIG.5is an enlarged perspective view of section A-A ofFIG.4, showing a raised horizontal rectangular shape of the at least one raised horizontal rectangular shape412, in accordance with an embodiment.FIG.6is an enlarged perspective view of section B-B ofFIG.4, showing a raised vertical rectangular shape of the at least one raised vertical rectangular shape410, in accordance with an embodiment. As described herein, the at least one raised horizontal rectangular shape412is formed such that it fits into the at least one horizontal rectangular shape depression318. Once inserted into the at least one horizontal rectangular shape depression318, the at least one raised horizontal rectangular shape412may be slid horizontally in a horizontal direction along the horizontal axis324, to adjust the right lower tube302in relation to the left lower tube316. The at least one raised vertical rectangular shape410is formed such that it fits into the at least one vertical rectangular shape depression320. Once inserted into the at least one vertical rectangular shape depression320, the at least one raised vertical rectangular shape410(and hence the left lower tube316) may be slid vertically in a vertical direction along the vertical axis326, to adjust the left lower tube316in relation to the right lower tube302.
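Taken together, the horizontal travel at the right-hand joint and the vertical travel at the left-hand joint give two independent degrees of freedom for bringing the lower tubes into the same horizontal and vertical planes. A minimal sketch of that correction follows; the misalignment numbers and target spacing are invented purely for illustration:

```python
# Minimal model of the two-axis correction described above.
# The as-cast positions and the target spacing are assumed example values.
right_tube = {"x": 100.42, "y": 0.00}   # right lower tube 302 position (mm), assumed
left_tube = {"x": 0.00, "y": -0.35}     # left lower tube 316 position (mm), assumed
target_spacing = 100.00                 # desired centre-to-centre spacing (mm), assumed

# Horizontal sliding at the right-hand joint (depressions 318) sets the leg spacing.
right_tube["x"] += target_spacing - (right_tube["x"] - left_tube["x"])
# Vertical sliding at the left-hand joint (depressions 320) levels the two legs.
left_tube["y"] += right_tube["y"] - left_tube["y"]

print(right_tube, left_tube)  # both legs now share the target spacing and the same height
```

Once both corrections are made, the attachment features described below lock the adjusted positions in place.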
FIG.7illustrates the left lower tube316and the right lower tube302, having attached thereon the left receiving adjustment mechanism314and the right receiving adjustment mechanism304, respectively, in accordance with an embodiment. FIG.8illustrates, according to an embodiment, the brace308, set of screws802and804for inserting into the brace308and the right and left receiving adjustment mechanisms304and314, respectively, and molded covers806A and806B for covering the set of screws802and804, respectively. With reference now toFIGS.7and8, a method of assembling embodiments provided herein may be described. For example, the first end406of the brace308is placed onto the right receiving adjustment mechanism304. The second end402of the brace308is placed onto the left receiving adjustment mechanism314. Then, the brace308or the right lower tube302is adjusted horizontally along the horizontal axis324and/or the brace308or the left lower tube316is adjusted vertically along the vertical axis326. Once the brace308is adjusted as desired in relation to the right lower tube302and the left lower tube316, then the set of screws802and804are inserted into the attachment mechanisms312and322, respectively. As noted, while in one embodiment screws are used, it should be appreciated that any suitable manner of attachment may be used to attach the brace308to the right lower tube302and the left lower tube316. Once the set of screws802and804are inserted into the attachment mechanisms312and322, respectively, then molded covers806A and806B are placed over the visible heads of the set of screws802and804, respectively. The molded cover is formed to cover the screws, bolts, etc., and is made of plastic, in one embodiment. The molded cover, in one embodiment, slips over and around the heads of the screws or bolts, and includes any manner of suitable mechanisms enabling attachment to the screws or bolts. For example, in one embodiment, the molded cover may be formed such that the molded cover stretches slightly and its outer edges contain a lip that curves inward and under the molded cover and that enables the molded cover to slip over and around the outer edges of the screws or bolts. In another embodiment, glue is used to hold the molded cover in place. FIGS.9and10illustrate assembled embodiments of the lower fork alignment system, wherein the brace308couples the right lower tube302with the left lower tube316, and the set of screws802and804attaches the brace308to the right and left lower tubes302and316, respectively. Molded covers806A and806B are also shown covering (hiding) the heads of the set of screws802and804, respectively. FIG.11illustrates a brace1106for coupling the right lower tube1104and the left lower tube1102, in accordance with an embodiment. The right lower tube1104has a first end1114and the left lower tube1102has a second end1112. The first end1114includes a set of negative splines1108. A negative spline of the set of negative splines1108is a depression within the first end1114that is formed to lie parallel with the vertical axis1116. The brace1106includes the right brace shoulder1110A and the left brace shoulder1110B. The inner surfaces (not shown) of the right brace shoulder1110A and the left brace shoulder1110B include a set of positive splines configured for fitting within the set of negative splines1108. A positive spline of the set of positive splines is a raised vertically shaped block and is formed such that the raised positive spline fits within the negative spline depression.
According to embodiments, the raised positive splines are smaller in area than the negative spline depressions, such that when the right brace shoulder1110A and the left brace shoulder1110B are placed over the first end1114and the second end1112of the right lower tube1104and the left lower tube1102, respectively, the right lower tube1104and the left lower tube1102may be adjusted horizontally, vertically and rotationally within the fixture prior to a more permanent attachment mechanism being applied, such as, for example, glue. FIG.12illustrates a front cross-sectional view of the brace coupling the right lower tube1104to the left lower tube1102ofFIG.11, in accordance with an embodiment. As shown, the right lower tube1104and/or the left lower tube1102may be adjusted in the horizontal direction1204. Further, the right lower tube1104and/or the left lower tube1102may be adjusted in the vertical direction1202in relation to the brace1106, in accordance with an embodiment. FIG.13illustrates a side perspective view of the left lower tube1104and the brace1106, in accordance with an embodiment. As shown, the left lower tube1104may be adjusted in the horizontal direction1204. FIG.14illustrates an enlarged front cross-sectional view of the area A ofFIG.12. More particularly,FIG.14shows the left lower tube1102, the second end1112described herein, and the left brace shoulder1110B described herein. Further, the glue joint1402is positioned between the left brace shoulder1110B and the second end1112, in accordance with one embodiment. The glue joints enable improved alignment of the left and the right lower tubes1102and1104by allowing for horizontal, vertical and rotational adjustment in the fixture prior to glue being set. FIG.15illustrates a brace1506coupling the right lower tube1504to the left lower tube1502, in accordance with an embodiment. The brace1506includes the center bridge1508. The center bridge1508is inserted horizontally at the top of the brace1506. Before the center bridge1508is glued into place, the right lower tube1504and the left lower tube1502may be horizontally, vertically and/or rotationally adjusted. In accordance with an embodiment, the right lower tube1504and the left lower tube1502may be adjusted by being slid in the horizontal direction1510in relation to each other. Additionally, the right lower tube1504and the left lower tube1502may be rotationally adjusted, such as in the rotational direction1512. FIG.16illustrates a side perspective view of the left lower tube1502ofFIG.15, in accordance with an embodiment.FIG.16shows that before the glue is set at glue joints1704, the brace1506may be rotationally adjusted in the rotational direction1602or vertically adjusted in the vertical direction1604. FIG.17illustrates the center bridge1508ofFIG.15inserted into the brace1506, which is capable of sliding in the horizontal direction1510. Thus, the bridge1508enables the width adjustment of the lower fork, in accordance with embodiments. In one embodiment, either end of the bridge1508has attached thereto an end cap, such as the end cap1702. In one embodiment, the end cap1702is made of elastomer and functions to seal the hollow center bridge1508. Thus, embodiments provide a variety of lower fork alignment systems, allowing for the separate manufacture of the right lower tube, the left lower tube, and the brace and/or center bridge incorporated therein.
By enabling the separate manufacture of these key components, each component may be more precisely and individually aligned relative to other components. The individual alignment mechanisms described herein enable the fork legs to be aligned in the same horizontal and vertical planes, thereby allowing for smoother fork operations. Thus, embodiments provide at least two axes of alignment whereby the brace may be assembled onto the lower fork legs and constrained once the brace is bolted to the lower fork legs. It should be noted that any of the features disclosed herein may be useful alone or in any suitable combination. While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be implemented without departing from the scope of the invention, and the scope thereof is determined by the claims that follow.
11858585 | DETAILED DESCRIPTION OF THE EMBODIMENTS Hereinafter, motorcycles according to exemplary embodiments will be described with reference to the accompanying drawings. The directions mentioned in the following description are those based on the viewpoint of the rider seated on the motorcycle. First Embodiment FIG.1is a side view of a motorcycle1according to a first embodiment. As shown inFIG.1, the motorcycle1is an example of a straddle vehicle on which the rider is seated in a straddling position. The motorcycle1is a hybrid vehicle. The motorcycle1includes a front wheel2, a rear wheel3(drive wheel), a vehicle body frame4, a front shock absorber6, and a rear shock absorber7(shock absorber). In the present embodiment, the devices contractible to absorb forces exerted on the vehicle body and reduce the up-down motion of the vehicle body are referred to as the front and rear shock absorbers6and7. The front shock absorber6is located between the front wheel2and the front of the vehicle body frame4. The rear shock absorber7is located between the rear wheel3and the rear of the vehicle body frame4. The front shock absorber6is located below the steering shaft and coupled to a bracket8spaced from the front shock absorber6in the up-down direction. The steering shaft connected to the bracket8is supported by a head pipe4ain such a manner as to be angularly movable. The head pipe4ais a part of the vehicle body frame4. A handle9grasped by the hands of the rider is mounted on the steering shaft. A fuel tank10is located behind the handle9, and a seat11on which the rider sits is located behind the fuel tank10. A power unit12serving as a drive source for travel is mounted on the vehicle body frame4and located between the front and rear wheels2and3. In the present embodiment, a structure including the front shock absorber6and suspended by a front portion of the vehicle body frame4in the vicinity of the front shock absorber6is referred to as a front suspension13. In the present embodiment, the front suspension13is a front fork having a bifurcated shape and holding the front wheel2from both sides in the vehicle width direction. A structure including the rear shock absorber7and suspended by a rear portion of the vehicle body frame4in the vicinity of the rear shock absorber7is referred to as a rear suspension14(suspension structure). The power unit12includes an engine E (first element) which is an internal combustion engine serving as a prime mover and a drive motor M (second element) which is an electric motor having a drive shaft and serving as a prime mover. In the present embodiment, the function of the engine E (first function) and the function of the drive motor M (second function) are the functions of drive sources that produce rotational drive power to be transmitted to the rear wheel3. In the present embodiment, a structure located substantially at the center of the motorcycle1in the front-rear direction and supported by the vehicle body frame4is referred to as a supported structure. A lower element of the supported structure is referred to as a first element, and an element of the supported structure that is located above the first element is referred to as a second element. In the present embodiment, the power unit12is the supported structure of the motorcycle1. FIG.2is a left rear perspective view of the power unit12. The engine E includes a cylinder Ea extending upward from a front portion of the crankcase15. 
The crankcase15includes a main body16protruding rearward from a lower portion of the cylinder Ea. The drive motor M is located behind the cylinder Ea and mounted on the upper surface of the main body16. That is, the drive motor M is located above and aligned with the main body16in the up-down direction. FIG.3is a perspective view of the power unit12ofFIG.2with the drive motor M removed.FIG.4is an enlarged left side view of the drive motor M and its vicinity in the power unit12ofFIG.2. As shown inFIGS.3and4, the main body16of the crankcase15includes an upper wall16f, and the upper wall16fincludes a front mount portion19, a rear mount portion20, and an upper case surface21. The front and rear mount portions19and20are, for example, receiving bases provided with bolt holes and protrude upward from the upper wall16fof the main body16. The front of the motor housing Ma is secured to the front mount portion19by fasteners B inserted from above. The rear of the motor housing Ma is secured to the rear mount portion20by fasteners B inserted from above. That is, the drive motor M is supported by the front and rear mount portions19and20of the crankcase15. The upper case surface21is defined between the front and rear mount portions19and20and arc-shaped to conform to the outer circumferential surface of the motor housing Ma. The motor housing Ma is close to but spaced from the upper case surface21. The drive motor M is placed in such a manner that the lower portion of the motor housing Ma is held between the front and rear mount portions19and20. This allows the crankcase15to stably support the drive motor M. The motor housing Ma is made of metal. For example, the motor housing Ma is made of an aluminum alloy. One end of the motor housing Ma in the vehicle width direction is covered by a removable cover, and the rest of the motor housing Ma is formed as a one-piece component. That is, the outer circumferential surface of the motor housing Ma, which defines the radially outer boundary of the motor housing Ma, is continuous and seamless over its entirety. Thus, the motor housing Ma exhibits higher strength in the circumferential direction than in the vehicle width direction. The motor housing Ma, which is continuous over its entire circumference, has higher rigidity in the circumferential direction than in the vehicle width direction. Referring back toFIG.1, a transmission17is located behind the engine E. The transmission17includes an input shaft17a, an output shaft17b, and a plurality of gear pairs having different reduction ratios. The transmission17transmits power from the input shaft17ato the output shaft17bthrough one of the gear pairs. The transmission17selects a desired one of the gear pairs and performs speed change by the selected gear pair. The transmission17is, for example, a dog clutch transmission. ECU (electronic control unit) controls the engine E. Specifically, the ECU controls a throttle device, fuel injector, and igniter to control the engine E. A main clutch18is located between the engine E and the transmission17. The main clutch18is engaged and disengaged to connect and disconnect the output shaft35(FIG.6) of the engine E and the input shaft17aof the transmission17. The engagement and disengagement of the main clutch18can be controlled by the ECU. The output shaft of the drive motor M is connected to the input shaft17aof the transmission17via a gear. Thus, drive power produced by the drive motor M can be transmitted to the input shaft17a. 
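Because the transmission17transmits power through whichever gear pair is selected, the speed and torque at the output shaft17bfollow directly from the reduction ratio of the selected pair. The short sketch below illustrates that relationship with made-up ratios and input values; the actual ratios are not given in the disclosure:

```python
# Illustrative (lossless) speed/torque relationship through a selected gear pair.
# The reduction ratios and the input values are assumed, not taken from the disclosure.
gear_ratios = {1: 2.85, 2: 2.05, 3: 1.60, 4: 1.30, 5: 1.09, 6: 0.96}

def output_of(gear, input_rpm, input_torque_nm):
    """Output shaft speed and torque for the selected gear pair, ignoring losses."""
    ratio = gear_ratios[gear]
    return input_rpm / ratio, input_torque_nm * ratio

print(output_of(2, input_rpm=4000, input_torque_nm=60))  # about (1951 rpm, 123 N*m)
```

The same relation applies whether the torque at the input shaft17acomes from the engine E, from the drive motor M, or from both.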
The ECU can control the operation of the drive motor M. Thus, the ECU can control the drive power of the engine E and the drive power of the drive motor M separately. A swing arm22supporting the rear wheel3and extending in the front-rear direction is supported by the vehicle body frame4in such a manner as to be angularly movable. The rotational power of the output shaft17bof the transmission17is transmitted to the rear wheel3through an output transmission structure23(such as a chain or belt). FIG.5is a perspective view of the rear shock absorber7and its vicinity. InFIG.5, the vehicle body frame4and swing arm22are shown by double-dotted dashed lines.FIG.6is a side view of the rear shock absorber7and its vicinity. The rear shock absorber7includes an elastic component. In the present embodiment, the rear shock absorber7includes a coil spring7aas the elastic component. In the present embodiment, the coil spring7ais located on the radially outer side of the rear shock absorber7. The rear shock absorber7exhibits a damping force against an applied force and absorbs a shock applied to the swing arm22through the rear wheel3. The rear suspension14suspends the swing arm22on the vehicle body frame4via the rear shock absorber7. The motorcycle1includes a bridge structure24located behind the engine E and drive motor M and mounted on both the engine E and drive motor M to bridge the engine E and drive motor M. In the present embodiment, the bridge structure24is plate-shaped. The bridge structure24has the function of a bridge structure mounted on the engine E and drive motor M to bridge the engine E and drive motor M. Since the bridge structure24is mounted on the engine E and drive motor M to bridge the engine E and drive motor M, the engine E and drive motor M are connected so securely that the engine E and drive motor M are prevented from moving away from each other. In the present embodiment, the bridge structure24is formed as a one-piece component using a hard plastic. The bridge structure24includes a plate portion24alocated on the front of the bridge structure24, a rib24bprotruding rearward from the plate portion24a, and a connection portion24cconnected to the vehicle body frame4. The rib24bhas a portion extending in the vehicle width direction and a portion extending in the height direction, and these portions cross each other. Since the rib24bis located behind the plate portion24a, the strength of the bridge structure24in the compression direction can be increased, and the weight of the bridge structure24can be reduced. The connection portion24cprotrudes outward in the vehicle width direction and is connected to the vehicle body frame4at a point outward of the plate portion24ain the vehicle width direction. For example, the connection portion24cis provided with a hole extending inward from the outer end of the connection portion24cin the vehicle width direction. The inner circumferential surface of the connection portion24cwhich defines the hole is provided with internal threads. The portion of the vehicle body frame4that is connected to the connection portion24cis provided with a hole extending through the entire width of the vehicle body frame4in the vehicle width direction. A bolt25is inserted into both the hole of the connection portion24cand the hole of the vehicle body frame4. The external threads of the bolt25and the internal threads of the connection portion24care engaged, so that the connection portion24cand the vehicle body frame4are connected. 
Consequently, the bridge structure24and the vehicle body frame4are connected. In the present embodiment, the bridge structure24extends upward to a height above the connection portion24cconnected to the vehicle body frame4. In the present embodiment, the rear shock absorber7includes an upper mounting portion26connected to the vicinity of the upper end of the bridge structure24. Thus, the upper mounting portion26of the rear shock absorber7is located at a height above the connection portion24cat which the bridge structure24is connected to the vehicle body frame4. The connection portion24cat which the bridge structure24is connected to the vehicle body frame4is disposed on each side of the bridge structure24in the vehicle width direction, and the two opposing connection portions24care located at the same height in the up-down direction. Thus, the upper mounting portion26is located at a height above all of the connection portions24c. In the present embodiment, the upper mounting portion26of the rear shock absorber7is located at a height above that upper case surface21of the crankcase15on which the drive motor M is supported. In particular, in the present embodiment, the upper mounting portion26of the rear shock absorber7is located at a height above the drive shaft O1of the drive motor M as shown inFIG.6. Additionally, the upper mounting portion26of the rear shock absorber7is located at a height such that the upper mounting portion26as viewed in the front-rear direction overlaps the cylinder Ea of the engine E. The rear shock absorber7includes, in addition to the upper mounting portion26, a lower mounting portion27that is also connected to the bridge structure24. The upper mounting portion26connects the rear shock absorber7to the bridge structure24at an upper point. The lower mounting portion27connects the rear shock absorber7to the bridge structure24at a lower point. In the present embodiment, the upper mounting portion26is located at the upper end of the rear shock absorber7, and the lower mounting portion27is located at the lower end of the rear shock absorber7. The upper mounting portion26of the rear shock absorber7is connected to the drive motor M via the bridge structure24. The lower mounting portion27of the rear shock absorber7is connected to the swing arm22via a link28. The following will describe the link28. In the present embodiment, the link28includes a triangular link plate29and a rectangular link plate30longer in one direction than in the other direction. The triangular link plate29is located between the bridge structure24and the lower mounting portion27of the rear shock absorber7. The triangular link plate29is plate-shaped and includes three nodes31a,31b, and31clocated respectively at the three vertices of the triangle. In the present embodiment, the rectangular link plate30is located between the triangular link plate29and the swing arm22. The rectangular link plate includes a node31dat an end opposite to that at which the node31bis located. The triangular link plate29is pivotally connected to the bridge structure24by one of the three nodes31a,31b, and31c, in particular by the node31a. The triangular link plate29is pivotally connected to the lower mounting portion27of the rear shock absorber7by another of the three nodes31a,31b, and31c, in particular by the node31c. The triangular link plate29is pivotally connected to the rectangular link plate30by the other of the three nodes31a,31b, and31c, in particular by the node31b. 
The rectangular link plate30is pivotally connected to the swing arm22by one of the two opposite nodes31band31d, in particular by the node31d. In the present embodiment, the swing arm22has an end22asupporting the rear wheel3and an end22bopposite to the end22a, and the end22bis connected to the bridge structure24. The swing arm22is pivotally connected to the bridge structure24. Thus, the swing arm22supporting the rear wheel3and extending in the front-rear direction is supported by the vehicle body frame4in such a manner as to be angularly movable. In the motorcycle1configured as described above, in the event that a force is applied to the rear wheel3in the up-down direction, the up-down motion of the rear wheel3is transmitted to the rectangular link plate30through the swing arm22. Since the rectangular link plate30and the triangular link plate29are pivotally connected via the node31b, a motion of the rectangular link plate30induces a motion of the triangular link plate29. Since the triangular link plate29is connected to the lower mounting portion27of rear shock absorber7via the node31c, a motion of the triangular link plate29induces a motion of the lower mounting portion27. The motion transmitted to the lower mounting portion27of the rear shock absorber7can be absorbed by the rear shock absorber7exhibiting a damping force. In this manner, the force applied to the rear wheel3is absorbed by the rear shock absorber7. Since the bridge structure24is mounted on the engine E and drive motor M to bridge the engine E and drive motor M and since the upper mounting portion26is connected to the drive motor M via the bridge structure24, a load applied to the rear shock absorber7in the up-down direction is transmitted to the drive motor M through the bridge structure24and acts on the drive motor M in the circumferential direction. The drive motor M has higher rigidity in the circumferential direction than in the vehicle width direction. Thus, in the event that a load is applied to the rear shock absorber7in the up-down direction, the drive motor M receives the load in a direction in which the drive motor M has high rigidity. As such, the drive motor M exhibits high rigidity against loads applied to the drive motor M through the rear shock absorber7. In the present embodiment, the link28supports the rear shock absorber7at a location below the engine E in the absence of any load acting on the rear shock absorber7. In the present embodiment, the drive motor M is located above the rear of the engine E, and the upper mounting portion26of the rear shock absorber7is connected to the drive motor M via the bridge structure24. Thus, the upper mounting portion26of the rear shock absorber7is at a high location. As such, in the rear shock absorber7mounted on the motorcycle1, a sufficient height difference can be provided between the upper and lower mounting portions26and27. This allows the rear shock absorber7to be long enough to absorb strong shocks applied during travel. Additionally, the height from the ground to the lower end of the rear shock absorber7can be sufficiently large. This permits the rear shock absorber7to avoid contacting obstacles on the ground during travel. In particular, in the present embodiment, the bridge structure24extends upward to a height above the connection portion24cat which the bridge structure24is connected to the vehicle body frame4, and the upper mounting portion26of the rear shock absorber7is located at a height above the connection portion24c. 
Thus, the height of the location of the upper mounting portion26can be further increased. This allows for a further increase in the height difference between the upper and lower mounting portions26and27and hence a further increase in the length of the rear shock absorber7, permitting the rear shock absorber7to absorb very strong shocks applied during travel. Additionally, the height from the ground to the lower end of the rear shock absorber7can be further increased. As such, the rear shock absorber7can reliably avoid contacting obstacles on the ground during travel. In the present embodiment, the drive motor M is located above the rear of the engine E and secured to the engine E. Thus, the engine E and drive motor M secured to each other can be collectively secured to the vehicle body frame4, and the construction where the engine E and drive motor M are supported by the vehicle body frame4can easily be produced. In the present embodiment, the upper mounting portion26of the rear shock absorber7is connected via the bridge structure24to the drive motor M located above the rear of the engine E. Thus, the upper mounting portion26of the rear shock absorber7is mounted on the drive motor M having higher rigidity in the up-down direction than in the vehicle width direction. Since the upper mounting portion26of the rear shock absorber7is mounted on a component having high rigidity in the up-down direction, the upper mounting portion26itself need not have high rigidity. This allows for a weight reduction of the upper mounting portion26of the rear shock absorber7mounted on the drive motor M. The weight reduction of the upper mounting portion26of the rear shock absorber7leads to a weight reduction of the motorcycle1, resulting in improved fuel efficiency of the motorcycle1. This allows for a reduction in the operating cost of the motorcycle1. Additionally, the motorcycle1having a reduced weight is steerable effortlessly, maneuverable easily, and permits the brakes to work well, thus exhibiting improved travel performance. In the present embodiment, the drive motor M is located in a space lying behind the engine E and above the crankcase15. Thus, the drive motor M is located in a space bounded on the front by the engine E extending longitudinally in the up-down direction and bounded below by the crankcase15extending longitudinally in the front-rear direction. This arrangement is space-efficient. As such, the engine E and drive motor M can be arranged within a small space, and the size of the motorcycle1can be reduced. In the present embodiment, the bridge structure24is located behind the engine E and drive motor M and mounted on both the engine E and drive motor M to bridge the engine E and drive motor M. Further, the upper mounting portion26of the rear shock absorber7is connected to the drive motor M via the bridge structure24. Thus, the upper mounting portion26of the rear shock absorber7is indirectly mounted on the drive motor M with the bridge structure24interposed between the upper mounting portion26and the drive motor M. Since the upper mounting portion26of the rear shock absorber7is connected to the drive motor M via the bridge structure24, shocks directly applied to the drive motor M through the rear shock absorber7can be reduced, and the durability of the drive motor M can be increased. In the present embodiment, the bridge structure24is plate-shaped. 
Thus, in the event that a force applied in such a direction as to extend or contract the rear shock absorber7is transmitted to the bridge structure24, the force acts on the bridge structure24as a tensile or compressive force along the plane in which the bridge structure24extends. Thus, the bridge structure24can exhibit its strength against tensile and compressive loads in the in-plane direction and receive strong forces. In the present embodiment, the bridge structure24pivotally supports the swing arm22. Thus, loads acting on the swing arm22can be received by the bridge structure24. As such, loads acting on the engine E and drive motor M can be reduced, and the durability of the engine E and drive motor M can be increased. In the present embodiment, the lower mounting portion27of the rear shock absorber7is connected to the swing arm22via the link28. Thus, the nodes31dand31bare the points of effort, the node31ais the point of support, and the node31cis the point of load. Since the points of support, effort, and load are located between the rear shock absorber7and the swing arm22, the flexibility in designing the lower mounting portion27of the rear shock absorber7can be increased. In the present embodiment, the link28supports the rear shock absorber7at a location below the engine E. Thus, the rear shock absorber7can be sufficiently long in the up-down direction. As such, the rear shock absorber7can absorb strong shocks. The relationship among the locations of the rear shock absorber7, swing arm22, and link28is not limited to that in the above embodiment. The rear shock absorber7, swing arm22, and link28may be located in another relationship. For example, as shown inFIG.7A, the lower mounting portion27of the rear shock absorber7may be pivotally connected to the triangular link plate29, the triangular link plate29may be pivotally connected to the bridge structure24via a rectangular link plate30b, and the triangular link plate29may be pivotally connected to the swing arm22via another rectangular link plate30a. Alternatively, for example, the triangular link plate29may be located above the swing arm22as shown inFIG.7B. In this case, as shown inFIG.7B, the lower mounting portion27of the rear shock absorber7may be pivotally connected to the triangular link plate29located above the swing arm22, the triangular link plate29may be pivotally connected to the swing arm22via a rectangular link plate30c, and the triangular link plate29may be pivotally connected to the bridge structure24via a rectangular link plate30d. The rear shock absorber7, swing arm22, and link28may be located in any other relationship insofar as the lower mounting portion27of the rear shock absorber7is connected to the swing arm22via the link28and the upper mounting portion26of the rear shock absorber7is connected via the bridge structure24to the drive motor M located above the rear of the engine E. Second Embodiment Hereinafter, a motorcycle according to a second embodiment will be described. The features identical to those of the first embodiment will not be described again, and only the features distinguishing the second embodiment from the first embodiment will be described. In the motorcycle of the first embodiment described above, the power unit12including the engine E and the drive motor M located in a space lying behind the engine E and above the crankcase15is the supported structure supported by the vehicle body frame4, and the upper mounting portion26of the rear shock absorber7is indirectly connected to the drive motor M. 
The motorcycle of the second embodiment differs from that of the first embodiment in that a battery case enclosing a battery and the drive motor M located above the battery case serve as the supported structure supported by the vehicle body frame4and that the upper mounting portion26of the rear shock absorber7is indirectly connected to the drive motor M located above the battery case. FIG.8is a side view of a power unit12a, the rear shock absorber7, and their vicinity in a motorcycle1aaccording to the second embodiment. In the second embodiment, as shown inFIG.8, the power unit12adoes not include the engine E but consists of the drive motor M. That is, the motorcycle1aof the second embodiment is an electric motorcycle in which rotational drive power to be transmitted to the rear wheel3is produced only by the operation of the drive motor M. Thus, in the second embodiment, the drive motor M functions as the drive source that produces rotational drive power to be transmitted to the rear wheel3. Below the drive motor M there are: a battery32that stores electric power for operating the drive motor M and that supplies an electric current to the drive motor M; and a battery case33enclosing the battery32. The drive motor M and battery case33are aligned in the up-down direction. In the second embodiment, as in the first embodiment, the drive motor M exhibits relatively high rigidity. In the motorcycle1aof the second embodiment, the battery case33and the drive motor M serve as the supported structure supported by the vehicle body frame4. In the motorcycle1aconfigured as described above, the first element includes the battery32and the battery case33enclosing the battery32. The second element includes the drive motor M having the drive shaft. The battery32and battery case33serving as the first element have the function of supplying an electric current to the drive motor M (first function). The drive motor M serving as the second element has the function of producing rotational drive power to be transmitted to the rear wheel3when in operation (second function). In the second embodiment, the upper mounting portion26of the rear shock absorber7is connected via the bridge structure24to the drive motor M located above the battery case33. Thus, the upper mounting portion26of the rear shock absorber7is mounted on the drive motor M having high rigidity in the up-down direction. Since the upper mounting portion26of the rear shock absorber7is mounted on a component having high rigidity in the up-down direction, the upper mounting portion26itself need not have high rigidity. This allows for a weight reduction of the upper mounting portion26of the rear shock absorber7mounted on the drive motor M. The weight reduction of the upper mounting portion26of the rear shock absorber7leads to a weight reduction of the motorcycle1a, resulting in improved fuel efficiency of the motorcycle1a. Additionally, the weight reduction of the motorcycle1aimproves the travel performance of the motorcycle1a. Third Embodiment Hereinafter, a motorcycle according to a third embodiment will be described. The features identical to those of the first and second embodiments will not be described again, and only the features distinguishing the third embodiment from the other embodiments will be described. 
In the configuration of the first embodiment described above, the power unit12including the engine E and the drive motor M located in a space lying behind the engine E and above the crankcase15is the supported structure supported by the vehicle body frame4, and the upper mounting portion26of the rear shock absorber7is indirectly connected to the drive motor M. In the configuration of the second embodiment described above, a battery case enclosing a battery and the drive motor M located above the battery case serve as the supported structure supported by the vehicle body frame4, and the upper mounting portion26of the rear shock absorber7is indirectly connected to the drive motor M. The third embodiment differs from the first and second embodiments in that the drive motor M and a battery case enclosing a battery and located above the drive motor M serve as the supported structure supported by the vehicle body frame4and that the upper mounting portion26of the rear shock absorber7is indirectly connected to the battery case. FIG.9is a side view of a power unit12b, the rear shock absorber7, and their vicinity in a motorcycle1baccording to the third embodiment. In the third embodiment, as shown inFIG.9, the power unit12bdoes not include the engine E but consists of the drive motor M, like the power unit12aof the second embodiment. That is, the motorcycle1bof the third embodiment is an electric motorcycle in which rotational drive power to be transmitted to the rear wheel3is produced only by the operation of the drive motor M. Thus, in the third embodiment, the drive motor M functions as the drive source that produces rotational drive power to be transmitted to the rear wheel3. Above the drive motor M there are: a battery32that stores electric power for operating the drive motor M and that supplies an electric current to the drive motor M; and a battery case33enclosing the battery32. The drive motor M and battery case33are aligned in the up-down direction. In the third embodiment, the relationship between the locations of the drive motor M and battery case33is the reverse of that in the second embodiment; namely, the battery case33is at an upper location, and the drive motor M is at a lower location. In the third embodiment, the battery case33includes an internal framework, and this framework is made of metal and has higher strength in the up-down direction than in the vehicle width direction. Thus, the battery case33has high rigidity in the up-down direction. In the motorcycle1bconfigured as described above, the first element includes the drive motor M having the drive shaft. The second element includes the battery32and the battery case33enclosing the battery. The drive motor M serving as the first element has the function of producing rotational drive power to be transmitted to the rear wheel3when in operation (first function). The battery32and battery case33serving as the second element have the function of supplying an electric current to the drive motor M (second function). In the third embodiment, the upper mounting portion26of the rear shock absorber7is connected via the bridge structure24to the battery case33located above the drive motor M. Thus, the upper mounting portion26of the rear shock absorber7is mounted on the battery case33having high rigidity in the up-down direction. Since the upper mounting portion26of the rear shock absorber7is mounted on a component having high rigidity in the up-down direction, the upper mounting portion26itself need not have high rigidity.
This allows for a weight reduction of the upper mounting portion26of the rear shock absorber7mounted on the battery case33. The weight reduction of the upper mounting portion26of the rear shock absorber7leads to a weight reduction of the motorcycle1b, resulting in improved fuel efficiency of the motorcycle1b. Additionally, the weight reduction of the motorcycle1bimproves the travel performance of the motorcycle1b. In the configuration of the first embodiment described above, the first element is the engine E, and the second element is the drive motor M. In the configuration of the second embodiment described above, the first element is the battery case, and the second element is the drive motor M. In the configuration of the third embodiment described above, the first element is the drive motor M, and the second element is the battery case. However, the present disclosure is not limited to the configurations of the above embodiments. The first and second elements of the supported structure supported by the vehicle body frame4may be different from those in the above embodiments. The combination of the first and second elements may be different from those in the above embodiments insofar as at least one of the first element or the second element functions as a drive source that produces rotational drive power to be transmitted to the rear wheel3. For example, an oil tank storing a lubricant oil may be the first element, and the drive motor M may be the second element. The number of the elements of the supported structure supported by the vehicle body frame4need not be two, and the supported structure may include one or more elements other than the first and second elements. Three or more elements may be supported by the vehicle body frame4and constitute the supported structure. In this case, it is sufficient that two of the three or more elements be the first and second elements aligned in the up-down direction. In the configurations of the embodiments described above, the motorcycles include two shock absorbers, i.e., the front and rear shock absorbers. However, the motorcycle according to the present disclosure is not limited to the configurations of the above embodiments. The motorcycle need not include the front shock absorber and may include only one shock absorber. In this case, it is sufficient that the upper mounting portion of the shock absorber be connected to the upper one of the first and second elements aligned in the up-down direction, in particular to the second element located above the first element. The motorcycle may include three or more shock absorbers. It is sufficient that the upper mounting portion of any one of the shock absorbers be connected to the upper one of the first and second elements aligned in the up-down direction, in particular to the second element located above the first element. In the embodiments described above, the straddle vehicle is embodied as a motorcycle. However, the straddle vehicle of the present disclosure is not limited to motorcycles as described in the above embodiments. The straddle vehicle may be a motor tricycle or any other type of straddle vehicle.
11858586 | DETAILED DESCRIPTION Referring now to the drawings submitted herewith, wherein various elements depicted therein are not necessarily drawn to scale and wherein through the views and figures like elements are referenced with identical reference numerals, there is illustrated a motorcycle conversion kit100constructed according to the principles of the present invention. An embodiment of the present invention is discussed herein with reference to the figures submitted herewith. Those skilled in the art will understand that the detailed description herein with respect to these figures is for explanatory purposes and that it is contemplated within the scope of the present invention that alternative embodiments are plausible. By way of example but not by way of limitation, those having skill in the art in light of the present teachings of the present invention will recognize a plurality of alternate and suitable approaches dependent upon the needs of the particular application to implement the functionality of any given detail described herein, beyond that of the particular implementation choices in the embodiment described herein. Various modifications and embodiments are within the scope of the present invention. It is to be further understood that the present invention is not limited to the particular methodology, materials, uses and applications described herein, as these may vary. Furthermore, it is also to be understood that the terminology used herein is used for the purpose of describing particular embodiments only, and is not intended to limit the scope of the present invention. It must be noted that as used herein and in the claims, the singular forms “a”, “an” and “the” include the plural reference unless the context clearly dictates otherwise. Thus, for example, a reference to “an element” is a reference to one or more elements and includes equivalents thereof known to those skilled in the art. All conjunctions used are to be understood in the most inclusive sense possible. Thus, the word “or” should be understood as having the definition of a logical “or” rather than that of a logical “exclusive or” unless the context clearly necessitates otherwise. Structures described herein are to be understood also to refer to functional equivalents of such structures. Language that may be construed to express approximation should be so understood unless the context clearly dictates otherwise. References to “one embodiment”, “an embodiment”, “exemplary embodiments”, and the like may indicate that the embodiment(s) of the invention so described may include a particular feature, structure or characteristic, but not every embodiment necessarily includes the particular feature, structure or characteristic. Referring in particular to Figures submitted as a part hereof, the motorcycle conversion kit100is operable to change a three-wheeled motorcycle between a first mode illustrated herein inFIG.9and a second mode illustrated herein inFIG.10. In the first mode inFIG.9the motorcycle99is in its original intended configuration and is suitable for traversing across roads and similar surfaces. InFIG.10, the motorcycle99has been converted through installation of the motorcycle conversion kit100to traverse across snowy terrains. 
Ensuing herein is a discussion of the elements of the motorcycle conversion kit100that is installed on the motorcycle99to convert the motorcycle99from a first mode to a second mode wherein the second mode provides the ability to traverse the motorcycle99across a snowy terrain. The motorcycle conversion kit100includes two front ski assemblies10that are installed on the motorcycle99to replace the conventional wheels98. The front ski assemblies10include a vertical support member11and a ski member12. The ski member12is horizontal in orientation, elongated in shape and is manufactured from a durable material such as but not limited to plastic. The ski member12has a sufficient surface area in order to support the front of the motorcycle99as it traverses across a snowy terrain. It should be understood within the scope of the present invention that the ski member12could be manufactured in alternate lengths and widths in order to achieve the desired objective herein. The vertical support member11is operably coupled to the ski member12and extends upward therefrom. The vertical support member11is operably secured to the ski assembly attachment bracket15. The ski assembly attachment brackets15are illustrated herein inFIG.2. The ski assembly attachment bracket15includes a body16that is planar in manner and manufactured from a durable rigid material such as but not limited to metal. The body16includes a first end17and a second end18. The first end17has an arcuate-shaped perimeter edge19wherein the first end17is operably coupled to the rotor assembly20of the motorcycle99. The ski assembly attachment bracket15includes a bearing cap aperture21surrounded by three lug apertures22. The bearing cap aperture21is configured to have a portion of the bearing cap of the rotor assembly journal thereinto while the lug apertures22are configured to receive lug bolts (not illustrated herein) therethrough in order to secure the ski assembly attachment bracket15to the rotor assembly20. It should be understood that the layout pattern of the bearing cap aperture21with the lug apertures22is configured for a particular motorcycle99and that alternate arrangements and quantities are contemplated within the scope of the present invention. The ski assembly attachment bracket15includes four fasteners23that are operable to secure the ski assembly attachment bracket15to the vertical support member11. It should be understood within the scope of the present invention that the ski assembly attachment bracket15could employ more or less than four fasteners23. The front ski assemblies10further include rotor movement inhibitors25,26. The rotor movement inhibitors25,26are placed on opposing ends of a brake caliper present on the rotor27. The motorcycle99includes a conventional rotor27and brake calipers (not illustrated herein) so as to properly function and provide braking when the motorcycle99is in its first mode of operation. Following installation of the motorcycle conversion kit100, the motorcycle99must be configured so as to inhibit the rotor27from spinning as it would in the first mode of operation of the motorcycle99. The rotor movement inhibitors25,26are positioned on opposing sides of a conventional caliper that is operably coupled to the rotor27. Fasteners28,29are used to secure the rotor movement inhibitors25,26to the rotor27and as such will inhibit the rotor27from being able to rotationally move. In a preferred embodiment the rotor movement inhibitors25,26are annular in shape and manufactured from a vulcanized rubber.
While the rotor movement inhibitors25,26are illustrated herein as having different diameters, it should be understood that this is to accommodate a particular type of motorcycle99and it is contemplated within the scope of the present invention that the rotor movement inhibitors25,26could be provided in alternate sizes and shapes. The snow belt assembly30is operably coupled to the motorcycle99utilizing a swing arm assembly35. The swing arm assembly35includes a front portion37and a rear portion39illustrated herein inFIGS.6and5respectively. The front portion37of the swing arm assembly35includes an arcuate support member40operably coupled intermediate a first support arm42and a second support arm44. The first support arm42and second support arm44are configured with ends45,46to be movably coupled to the frame of the motorcycle99forward of the snow belt assembly30. A securing rod48is operably coupled intermediate the ends45,46and functions to secure the swing arm assembly35to the frame of the motorcycle99. The swing arm assembly35includes rear arm members50,51. The rear arm members50,51are secured to the front portion37at joints52,53utilizing techniques such as but not limited to welding. The rear arm members50,51are manufactured from square tubular metal and extend rearward from the front portion37having a void therebetween. The rear arm members50,51include ends58,59that are configured to movably secure a snow belt drive wheel support rod60. The snow belt drive wheel support rod60is movably secured within apertures62,63so as to provide movement thereof in order to adjust tension of the drive chain95. The positionable securing of the snow belt drive wheel support rod60is provided by the mounting blocks66,67that are positioned inside recesses68,69. The mounting blocks66,67are mateably shaped to be disposed within the recesses68,69and are configured to provide a backward-forward movement of the snow belt drive wheel support rod60in order to place the appropriate amount of tension on the drive chain95. The drive chain95is operably coupled to a drive chain sprocket80. The drive chain sprocket80is operably coupled to the motorcycle99and to the snow belt drive wheel90. The drive chain sprocket80includes a plate81wherein the plate81includes teeth82circumferentially disposed thereon that are operable to couple to the drive chain95. The drive chain sprocket80has integrally formed therewith a hub83wherein the hub83is configured to extend outward from the drive chain sprocket80being perpendicular thereto. The hub83includes a spline arrangement84wherein the spline arrangement is configured to operably couple to a drive shaft of the motorcycle99. The hub83is configured so as to provide a necessary offset in order to provide proper positioning of the drive chain95so as to couple with the snow belt drive wheel90in order to provide operation thereof while avoiding interference from the frame of the motorcycle99. It should be understood within the scope of the present invention that the drive chain sprocket80could be formed in alternate shapes and sizes in order to accommodate different types of motorcycles. The motorcycle conversion kit100further includes a snow belt assembly30. The snow belt assembly is configured to traverse the motorcycle99across a snowy terrain. The snow belt assembly includes a snow belt31that is rotatably coupled to a support frame32. The snow belt31is a conventional snow belt having projections33configured to penetrate and engage snow in order to propel the motorcycle99.
Furthermore, the projections33are spaced so as to operably couple with the snow belt drive wheel90, in particular the grooves91thereof, wherein the rotational movement of the snow belt drive wheel90translates to rotational movement of the snow belt31. The snow belt assembly30is operably coupled to the frame of the motorcycle99utilizing suitable durable mechanical techniques. The snow belt assembly30is positioned so as to have the snow belt drive wheel90superposed over the snow belt31. The snow belt drive wheel90is rotatably moved by the drive chain95. The swing arm assembly35includes snow belt inhibitors77,78. The snow belt inhibitors77,78are L-shaped and are manufactured from metal or other suitable material. The snow belt inhibitors77,78are secured to rear arm members50,51and positioned to inhibit the snow belt31from bouncing upward into the shock assembly of the motorcycle99during use of the motorcycle99in its second mode. The motorcycle conversion kit100further includes a fuel cell bracket assembly105. The fuel cell bracket assembly105is manufactured from a suitable material such as but not limited to aluminum. The fuel cell bracket assembly105is operably coupled to the frame of the motorcycle99utilizing suitable mechanical techniques. The fuel cell bracket assembly105includes a first plate arm member106and a second plate arm member107. The first plate arm member106and second plate arm member107are secured to opposing sides of the frame of the motorcycle99. Secured to the first plate arm member106and the second plate arm member107, intermediate thereto, is a compartment108. The compartment108is manufactured from integrally secured support members109forming an interior volume110configured to receive a portable fuel cell therein. It should be understood within the scope of the present invention that the support members109could be provided in alternate quantities and configurations in order to provide a compartment108. Furthermore, it should be understood within the scope of the present invention that the compartment108could be provided in alternate shapes and sizes. The preferred embodiment of the motorcycle conversion kit100has been illustrated and discussed herein but it should be understood within the scope of the present invention that the motorcycle conversion kit100could include alternate elements in addition to the elements illustrated and discussed herein wherein the combination thereof is operable to transition the motorcycle99between a first mode and a second mode so as to traverse across different surfaces. In the preceding detailed description, reference has been made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments, and certain variants thereof, have been described in sufficient detail to enable those skilled in the art to practice the invention. It is to be understood that other suitable embodiments may be utilized and that logical changes may be made without departing from the spirit or scope of the invention. The description may omit certain information known to those skilled in the art. The preceding detailed description is, therefore, not intended to be limited to the specific forms set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents, as can be reasonably included within the spirit and scope of the appended claims.
11858587 | DETAILED DESCRIPTION OF EMBODIMENTS Selected embodiments will now be explained with reference to the drawings. It will be apparent to those skilled in the human-powered vehicle field (e.g., the bicycle field) from this disclosure that the following descriptions of the embodiments are provided for illustration only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents. Referring initially toFIG.1, a bicycle pedal12for a human-powered vehicle is illustrated in accordance with a first embodiment. The bicycle pedal12comprises a pedal shaft14and a pedal body16. The pedal shaft14has a rotational center axis AR. The pedal body16is rotatably supported by the pedal shaft14around the rotational center axis AR. The pedal shaft14also connects the pedal body16to the outer end18aof a bicycle crank18such that the pedal body16can rotate with respect to the bicycle crank18around the rotational center axis AR of the pedal shaft14. The inner end18bof the bicycle crank18can be attached to a drive train of the human-powered vehicle, such that use of the bicycle pedal12rotates the bicycle crank and causes the rotation of one or more wheel of the human-powered vehicle. FIGS.5,6and10show the pedal shaft14in more detail. As illustrated, the pedal shaft14is an elongated rod which has a longitudinal length extending along a rotational center axis AR. The pedal shaft14comprises a first end portion20, a second end portion22, and a contact portion24. The pedal shaft can further comprise an exposed portion26. Each of the first end portion20, the second end portion22, the contact portion24, and the exposed portion26is located along the rotational center axis AR. The pedal shaft14can be formed, for example, as a single part made of a metal material such as carbon steel or chrome molybdenum steel. The first end portion20is configured to be attached to the bicycle crank18. More specifically, the first end portion20is configured to be attached to the outer end18aof the bicycle crank18. For attachment to the bicycle crank18, the first end portion20can include at least one of an outer thread20a, a crank attachment bore20b, and a lip20c. Here, the outer thread20aencircles the perimeter of the first end portion20and can be threaded into a corresponding aperture at the outer end18aof the bicycle crank18until the lip20cpresses against an outer surface of the bicycle crank18. A screw, nut and bolt, or other attachment device can then be screwed into the crank attachment bore20bfrom the opposite side of the outer end18a. In this way, the pedal shaft14can rotatably support the pedal body16relative to the bicycle crank18, with the pedal body16rotating around the rotational center axis AR of the pedal shaft14. The second end portion22is located on an opposite side of the pedal shaft14as the first end portion20in an axial direction with respect to the rotational center axis AR. As described in more detail below, the second end portion22is configured to slide into the pedal body16so as to rotatably support the pedal body16on the bicycle crank18. As seen inFIG.5, the diameter of the pedal shaft14remains constant or substantially constant proximal to the second end portion22to facilitate entry of the second end portion22into the pedal body16and to enable the pedal body16to rotate freely around the second end portion22. The contact portion24is located between the first end portion20and the second end portion22in the axial direction. 
More specifically, the contact portion24is located between the first end portion20and the second end portion22in the axial direction with respect to the rotational center axis AR of the pedal shaft14. The contact portion24is also located between the first end portion20and the exposed portion26in the axial direction with respect to the rotational center axis AR of the pedal shaft14. As described in more detail below, the contact portion24of the pedal shaft14contacts a portion of the pedal body16when a load is applied by a rider, thus absorbing at least part of the rider's load. However, the contact portion24does not contact the same portion of the pedal body16under a no load condition. Here, the contact portion24is located proximal to the first end portion20in comparison to the second end portion22in the direction of the rotational center axis AR. The contact portion24of the pedal shaft14can be provided, for example, near the lip20c. In this way, the contact portion24can be located proximal to the outer end18aof the bicycle crank18when the pedal shaft14is attached to the bicycle crank18. This enables the rider's load to be distributed near the bicycle crank18, thus decreasing the amount of vertical displacement of the pedal body16caused by the rider's load. The exposed portion26of the pedal shaft14is located between the contact portion24and the second end portion22in the axial direction with respect to the rotational center axis AR of the pedal shaft14. As described in more detail below, the exposed portion26is exposed outside of the pedal body16between the contact portion24and the second end portion22. By exposing the exposed portion26in this manner, the pedal shaft14is able to flex at the exposed portion26under the rider's load without rubbing against an inner surface of the pedal body16, while at the same time enabling transfer of at least part of the rider's load to the contact portion24. The outer contour of the pedal shaft14at the exposed portion26can be formed in a variety of ways. Here, as seen inFIG.5, the diameter26aof the exposed portion26generally tapers inwardly to decrease from the contact portion24toward the second end portion22. The diameter of the pedal shaft14then remains generally constant along most of the longitudinal length of the pedal shaft14located within the pedal body16between the exposed portion26and the second end portion22. Those of ordinary skill in the art will recognize from this disclosure that the pedal shaft14can also have an uneven taper or segments which increase or decrease in diameter between contact portion24and the second end portion22. As seen inFIG.5, the contact portion24has a first outermost diameter24athat is larger than a second outermost diameter22aof the second end portion22. As used herein, the “outermost diameter” refers to the largest diameter of a respective portion of the pedal shaft14. By forming the first outermost diameter24aof the contact portion24to be larger than the second outermost diameter22aof the second end portion22as shown, a thicker section of the pedal shaft14at the contact portion24is configured to receive at least part of the rider's load applied nearer to a thinner section of the pedal shaft14at the second end portion22. As seen inFIG.5, the diameter of the pedal shaft14also protrudes outwardly from the first outermost diameter24abetween the contact portion24and the first end portion20, such that the outermost diameter at the first end portion20creates the lip20cfor contact with the bicycle crank18. 
The pedal shaft14generally has a larger diameter at the first end portion20than at the second end portion22. Since a large load is applied to the large diameter portion of the pedal shaft14, it is easy to obtain the strength of the pedal shaft14against the load. As seen inFIGS.7to10, the pedal body16can comprise a body part30and a load receiving part32. Alternatively, or in combination, the pedal body16can comprise the body part30, at least one resin tread part34, and at least one threaded fastener36. Here, the body part30and the load receiving part32are shown as separate parts, but in an alternative embodiment the body part30and the load receiving part32can be formed together as a single part. The at least one tread part34is formed separately from the body part30and is attached to the body part30by the at least one threaded fastener36, as described in more detail below. In the embodiments described below, it is assumed that the tread part is made of resin. The tread part is mainly described as a resin tread part. The resin tread part makes the pedal lighter and restricts a fastener from loosening, as will be described later. However, the material of the tread part is not limited to resin. At least when the effect of restricting the fastener from loosening is not expected, the tread part may be made of a material other than resin. Therefore, the tread part is also given the same reference numeral as the resin tread part. As seen inFIGS.7to9, the body part30includes a center portion30awhich extends along the rotational center axis AR of the pedal shaft14from a crank end side30bto a free end side30c, a first side portion30dwhich extends radially outward from one side of the center portion30awith respect to the rotational center axis AR, and a second side portion30ewhich extends radially outward from the opposite side of the center portion30awith respect to the rotational center axis AR. For example, in the case where the body part30is plate-shaped, the first side portion30dand the second side portion30eare portions in the lateral direction around the rotation axis AR of the body part30. The body part30can include a first side30fand a second side30gthat is on an opposite side of the first side30fwith respect to the body part30. For example, in the case where the body part30is plate-shaped, the first side30fand the second side30gcorrespond to the front side and the back side of the body part30. The first side30fand the second side30gface each other in the thickness direction of the body part30. As will be described later, the resin tread part34A is attached to the first side30f, and the resin tread part34B is attached to the second side30g. An axis parallel to the thickness direction, an axis parallel to the lateral direction, and the rotational center axis AR are orthogonal to each other. The center portion30a, the first side portion30d, and the second side portion30ecan extend between the first side30f(e.g., the “top” side inFIGS.7to9) and the second side30g(e.g., the “bottom” side inFIGS.7to9). The first side portion30dand the second side portion30ecan further include one or more aperture30h, which can be strategically placed and decrease the overall weight and material cost of the body part30. As seen for example inFIG.8, a plurality of apertures30hcan cause each of the first side portion30dand the second side portion30eto have an outer perimeter section30iconnected to the center portion30aby one or more connecting section30j. The body part30is rotatably supported by the pedal shaft14. 
The body part30receives the pedal shaft14. The body part30can include a pedal shaft receiving bore40configured to receive at least the second end portion22of the pedal shaft14. More specifically, the center portion30aof the body part30can include the pedal shaft receiving bore40configured to receive at least the second end portion22of the pedal shaft14. As seen inFIGS.11to13, the pedal shaft receiving bore40can include an entrance opening40aand an exit opening40b. The entrance opening40ais offset from the crank end side30bof the body part30by a distance D1to create a first gap42, while the exit opening40bis offset from the free end side30cof the body part30by a distance D2to create a second gap44. The entrance opening40acan receive at least one of a sliding bearing46, a first O-ring48, a second O-ring50, and the second end portion22of the pedal shaft14. The exit opening40bcan receive at least one of a sliding bearing46, an end washer52, and an end cap54. As seen inFIG.5, the body part30is rotatably supported by at least one sliding bearing46disposed on at least the second end portion22of the pedal shaft14. Here, the at least one sliding bearing46includes a first sliding bearing46A disposed on the second end portion22of the pedal shaft14and a second sliding bearing46B disposed between the first sliding bearing46A and the contact portion24. The load receiving part32is more effective in a case where the pedal shaft14is supported by at least two bearings such as the first sliding bearing46A and the second sliding bearing46B. This is because in a case where there is one bearing (e.g., the first sliding bearing46A) near the second end portion22of the pedal shaft14, the axial length of the pedal shaft14from the first end portion20to a portion where the bearing is mounted on the pedal shaft14can be sufficiently long. Here, the bearing (e.g., the first sliding bearing46A) is located at the small diameter portion of the pedal shaft14. Then, by gradually reducing the diameter of the pedal shaft14from the first end portion20to the portion where the bearing (e.g., the first sliding bearing46A) is mounted, the concentration of stress on the pedal shaft14can be easily suppressed. Especially in a case where the two bearings are separated from each other, the length of the reduced diameter portion becomes shorter and it becomes difficult to suppress the stress concentration on the pedal shaft14. To construct the pedal body16as shown inFIG.5, the exit opening40breceives the first sliding bearing46A, then the end washer52, and then the end cap54. Similarly, the entrance opening40areceives the second sliding bearing46B, then the first O-ring48, then the second O-ring50, and then the second end portion22of the pedal shaft14(e.g., during or after attachment of the load receiving part32as seen inFIG.9). When constructed as shown inFIG.5, the first sliding bearing46A and the second sliding bearing46B enable smooth rotation of the pedal body16around the pedal shaft14with respect to the rotational center axis AR. At the same time, the load receiving part32, the entrance first O-ring48, the backup second O-ring50, the end washer52, and the end cap54create the appropriate spacing at the entrance opening40aand the exit opening40b. The first O-ring48, the second O-ring50, the end washer52, and the end cap54further act to restrict unwanted dust and debris from entering the pedal shaft receiving bore40and interfering with rotation of the pedal body16around the pedal shaft14. 
The at least one sliding bearing46can be located in the central portion of the pedal shaft14in the axial direction. In this embodiment, the second sliding bearing46B is located in the central portion of the pedal shaft14in the axial direction of the pedal shaft14. Alternatively, for example, at least one bearing (e.g., the second sliding bearing46B) can be located in an area A1(see,FIGS.9and10) that is ⅖ to ⅗ of the axial length of the pedal shaft14from the outer surface of the lip20. Generally, the pedal shaft diameter of the bearing portion is small. Since the pedal shaft diameter from the sliding bearing46B to the second end portion22can be reduced, it is easy to reduce the thickness of the bicycle pedal12. The first O-ring48can be made, for example, with polyoxymethylene (POM) material, and can control the space between the pedal shaft14and the pedal body16. The second O-ring50can be made, for example, with acrylonitrile-butadiene rubber (NBR), and can decrease friction between the pedal shaft14and the pedal body16. The end washer52can be made, for example, with POM material, and can further decrease friction and create spacing at the tip of the second end portion22of the pedal shaft14. The end cap54can be made of metal, and can include threads which mate with corresponding threads on an inner surface of the exit opening40bto seal off the exit opening40band restrict unwanted dust and debris from entering the pedal shaft receiving bore40. FIGS.9and10show the load receiving part32in detail. As shown, the load receiving part32can include a load receiving contact portion60that contacts the contact portion24of the pedal shaft14upon a load being applied to the pedal body16from the rider. The load receiving part32can further include a support portion62that supports the load receiving contact portion60. The support portion62can be attached to the body part30. More specifically, the support portion62can attach the load receiving contact portion60to the body part30and thereafter support the load receiving contact portion60when the rider's load is applied. The load receiving part32can also include at least one fastener64. The support portion62of the load receiving part32can be attached to the body part30by the at least one fastener64. The load receiving part32can be attached to the body part30by the support portion62and the at least one support fastener64, for example, by inserting the load receiving contact portion60into the first gap42at the crank end side30bof the body part30and placing the support fasteners64through fastening apertures62aof the support portion62to attach the support portion62to the body part30at the crank end side30b. Once attached, the load receiving contact portion60is held in place between the body part30and the support portion62by the support fasteners64. As seen inFIG.10, the load receiving part32can include a pedal shaft receiving aperture60athat encircles the contact portion24of the pedal shaft14. More specifically, the load receiving contact portion60can include the pedal shaft receiving aperture60a. The load receiving contact portion60can also include an entrance aperture60band two side walls60cwhich create an exposing aperture60din a direction perpendicular to the rotational center axis AR of the pedal shaft14. The pedal shaft receiving aperture60aencircles the contact portion24of the pedal shaft14around the rotational center axis AR when the pedal shaft14is fully inserted into the body part30. 
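As a small worked example of the bearing placement described above, the sketch below converts the ⅖-to-⅗ band of the area A1 into axial distances. The 60 mm axial shaft length is an assumed value for illustration only and is not given in the description.

```python
# Illustrative only: the 60 mm axial length is assumed, not taken from the description.
shaft_axial_length_mm = 60.0  # from the outer side surface of the lip 20c to the tip of 22

# Area A1 spans 2/5 to 3/5 of the axial length, measured from the lip outer surface.
a1_start_mm = (2.0 / 5.0) * shaft_axial_length_mm
a1_end_mm = (3.0 / 5.0) * shaft_axial_length_mm

print(f"Area A1 for the bearing: {a1_start_mm:.0f} mm to {a1_end_mm:.0f} mm from the lip")
# -> 24 mm to 36 mm for the assumed 60 mm shaft
```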
The entrance aperture60bencircles the pedal shaft14around the rotational center axis AR between the exposed portion26and the second end portion22when the pedal shaft14is fully inserted into the body part30. As seen inFIG.5, the entrance aperture60bcan also protrude into the entrance opening40aof the pedal shaft receiving bore40when the load receiving part32is attached to the body part30. The side walls60calign with and abut corresponding side walls42aof the first gap42at the crank end side30bof the body part30when the load receiving part32is attached to the body part30. The side walls60ccan further include one or more mating features60econfigured to mate with one or more corresponding mating features42bon the corresponding side wall42aof the first gap42. Here, the mating feature60eis one or more indentations extending longitudinally along each side wall60cbetween the pedal shaft receiving aperture60aand the entrance aperture60b, and the corresponding mating feature42bis a protrusion extending longitudinally along each side wall42aand configured to protrude into the mating feature60e. The indent can be a slit through the side wall60c. Alternatively, the mating feature60ecan include a protrusion, and the corresponding mating feature42bcan include an indentation. When fully installed as shown inFIGS.1to3, the exposing aperture60din the load receiving contact portion60forms a space which exposes the exposed portion26of the pedal shaft14. Here, the exposed portion26of the pedal shaft14is exposed on both the first side30fand the second side30gof the body part30. An axial position of the part that receives the pedal shaft14of the load receiving part32is, for example, near the lip20c. In this embodiment, the part that receives the pedal shaft14of the load receiving part32is the pedal shaft receiving aperture60a. For example, the axial position of the pedal shaft receiving aperture60ais within the region from the outer side surface of the lip20cto ¼ of the axial length, as seen inFIG.9. The axial position of the pedal shaft receiving aperture60acan be represented by a center position of the axial length of the pedal shaft receiving aperture60a. As seen inFIG.10, the contact portion24of the pedal shaft14can be provided, for example, in an area A2(see,FIGS.9and10) from the outer side surface of the lip20cto ¼ of the axial length of the pedal shaft14. The axial position of the contact portion24can be represented by a center position of the axial length of the contact portion24of the pedal shaft14. As seen inFIG.10, the axial length of the pedal shaft14is from the outer side surface of the lip20cin the axial direction to the tip of the second end portion22. That is, the axial length of the pedal shaft14is the length of the pedal shaft14excluding the outer thread20a. As seen inFIG.3, the load receiving part32is spaced axially from the entrance opening40aof the pedal shaft receiving bore40along the rotational center axis AR of the pedal shaft14. More specifically, it is the inner surface of the pedal shaft receiving aperture60aof the load receiving part32that makes contact with the contact portion24of the pedal shaft14under a rider's load. The load receiving part32is spaced axially by a distance D3from the entrance opening40aof the pedal shaft receiving bore40along the rotational center axis AR of the pedal shaft14. InFIGS.2and3, this axial spacing distance D3is shown across the exposing aperture60dwhich exposes the exposed portion26.
In this way, the pedal shaft14has an exposed portion26that is disposed outside of the pedal body16between the load receiving part32and the entrance opening40aof the pedal shaft receiving bore40. By creating the exposing aperture60dwith the exposed portion26of the pedal shaft14, the rider's load can be distributed away from the first sliding bearing46A inside the pedal shaft receiving bore40, and can instead be focused at the location where the contact portion24of the pedal shaft14contacts the load receiving part32. For example, the load applied to the first sliding bearing46A is larger than the load applied to the second sliding bearing46B until the contact portion24of the pedal shaft14contacts the load receiving part32. However, when the contact portion24of the pedal shaft14contacts the load receiving part32, the load applied to the load receiving part32and the second sliding bearing46B becomes larger than the load applied to the first sliding bearing46A. FIG.5shows a no load condition in which a rider is not pressing downwardly on the pedal body16. Here, the load receiving part32is located at a position corresponding to the contact portion24of the pedal shaft14along the rotational center axis AR. More specifically, the inner surface of the pedal shaft receiving aperture60aof the load receiving part32is located at a position corresponding to the contact portion24of the pedal shaft14along the rotational center axis AR. In this configuration, the load receiving part32is configured to receive a load from the contact portion24of the pedal shaft14. The load receiving part32is configured to receive the load when a rider presses downwardly on the pedal body16. Here, the load receiving part32is at least partly spaced from the contact portion24under the no load condition. More specifically, the inner surface of the pedal shaft receiving aperture60aof the load receiving part32is at least partly spaced from the contact portion24under the no load condition. The load receiving part32is at least partly spaced from the contact portion24by a distance D4under the no load condition. For example, the distance D4can range from 0.2 mm to 0.8 mm. More suitably, the distance D4can range from 0.3 mm to 0.6 mm. The distance D4is taken in a direction perpendicular to the rotational center axis AR. The distance D4can also exist under a predetermined load condition in which a load applied to the pedal body16does not exceed a predetermined value. When the distance D4exists under the predetermined load condition, a first portion and a second portion support the pedal body16on the bicycle crank18. The first portion is located at the first end portion20on the pedal shaft14. The second portion is located within the pedal shaft receiving bore40proximal to the second end portion22on the pedal shaft14. The predetermined load is a load in a case where the load receiving part32is not in contact with the contact portion24of the pedal shaft14. For the predetermined load, the first portion receives a greater load than the second portion. FIG.6shows a load being applied to the pedal body16by the rider. Here, the load receiving part32contacts the contact portion24upon a load applied to the pedal body16from a rider. At the location L1, the load receiving part32at the inner surface of the pedal shaft receiving aperture60acontacts the contact portion24upon the load applied to the pedal body16from the rider.
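The clearance and load-sharing behavior described above can likewise be expressed as a short check. The following is a minimal illustrative sketch (an assumption-laden simplification, not the disclosed design) that classifies a chosen clearance D4against the example range of 0.2 mm to 0.8 mm (more suitably 0.3 mm to 0.6 mm) and models the two support regimes: below the predetermined value the first portion and the second portion support the pedal body16, and once the applied force exceeds the predetermined value the contact portion24engages the load receiving part32. The numeric force values in the usage example are hypothetical.

```python
# Illustrative sketch only; the force values below are hypothetical and the
# two-regime model is a simplification of the behavior described above.

D4_EXAMPLE_MM = (0.2, 0.8)    # example clearance range for D4
D4_PREFERRED_MM = (0.3, 0.6)  # more suitable clearance range for D4


def classify_clearance(d4_mm: float) -> str:
    """Classify a no-load clearance D4 against the example ranges."""
    if D4_PREFERRED_MM[0] <= d4_mm <= D4_PREFERRED_MM[1]:
        return "within preferred range"
    if D4_EXAMPLE_MM[0] <= d4_mm <= D4_EXAMPLE_MM[1]:
        return "within example range"
    return "outside example range"


def load_receiving_part_engaged(applied_force_n: float,
                                predetermined_force_n: float) -> bool:
    """True once the applied force exceeds the predetermined value, i.e. the
    exposed portion (26) deflects enough for the contact portion (24) to
    contact the load receiving part (32) at location L1."""
    return applied_force_n > predetermined_force_n


if __name__ == "__main__":
    print(classify_clearance(0.4))                      # within preferred range
    print(load_receiving_part_engaged(1200.0, 800.0))   # True (hypothetical values)
```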
In doing so, the load receiving part32absorbs at least a portion of a force F applied to the pedal body in a direction perpendicular to the rotational center axis of the pedal shaft14. For example, the direction perpendicular to the rotational center axis is a downward direction inFIG.6. The force F can be due to a rider stepping onto the pedal body16. For example, the force F can be greater than a pedaling force that is applied by the rider sitting on a saddle. For example, the force F can be the pedaling force that is applied by a rider standing from the saddle. The pedaling force applied by the rider sitting on the saddle can be less than the predetermined value under the predetermined load condition. That is, the pedal shaft14can contact the load receiving part32in a case where the force F exceeds a predetermined value while the rider is biking. The predetermined value is a force value that is greater than a certain value that occurs in a case where the rider is biking. The contact at the location L1can be caused, for example, as the exposed portion26of the pedal shaft14bends slightly within the exposing aperture60dunder the force F from the rider. Thus, by exposing the exposed portion26as discussed herein, and by aligning the contact portion24with the inner surface of the pedal shaft receiving aperture60aas shown, the load from the rider can be distributed to the load receiving part32. A large force applied by the rider occurs, for example, in a case where a large force is applied from the outside of the bicycle. A large force applied from the outside of the bicycle is, for example, an impact force generated in a case of going down the stairs by the bicycle. At least one of the load receiving contact portion60of the load receiving part32and the contact portion24of the pedal shaft14can include resin material. The resin material can be, for example, nylon or POM. The resin material can be advantageous because resin material does not easily wear due to contact with the pedal shaft14, which can be a metal material such as carbon steel or chrome molybdenum steel. The support portion62can include a metallic material. The metallic material can add rigidity and strength to the load receiving part32and to hold the load receiving contact portion60in place when contact is made with the contact portion24of the pedal shaft14. FIGS.7and8show the attachment of the resin tread parts34to the body part30. As illustrated, at least one resin tread part34can be attached to the body part30by at least one threaded fastener36. Here, the at least one threaded fastener36includes a plurality of threaded fasteners36. The at least one resin tread part34includes a first resin tread part34A and a second resin tread part34B. The first resin tread part34A is attached to the first side30fof the body part30by at least some of the plurality of threaded fasteners36. The second resin tread part34B is attached to the second side30gof the body part30by at least some of the plurality of threaded fasteners36. The resin tread parts34are advantageous, for example, because they improve the rider's grip on the bicycle pedal12and do not easily wear due to contact with the rider's shoe. By making the resin tread parts34removably attachable as shown, a rider is able to replace the resin tread parts34or interchangeably use different resin tread parts34as desired. The different resin tread parts34can be, for example, made of different materials or formed with different shapes or surface features. 
As seen inFIGS.7,8and14to16, each threaded fastener36is configured to attach a resin tread part34to the body part30. Here, at least one resin tread part34has a through-hole66through which a threaded fastener36can pass. More specifically, each resin tread part34has a plurality of through-holes66through which the threaded fasteners36can pass. The body part30further has a plurality of fixing holes68which align with the plurality of the through-holes66when the resin tread part34is placed against the body part30. Some embodiments of the through-holes66of the resin tread part34can have a larger diameter than the corresponding fixing hole68of the body part30which aligns therewith. By inserting a fastener36through each of the through-holes66and into each of the fixing holes68, the resin tread part34can be removably attached to the body part30. The body part30can also include a plurality of fixing holes69that are not used to attach the resin tread part34. The fixing hole69is provided adjacent to the side wall42a. The fixing hole69is arranged laterally outside the first gap42. The threaded fastener36A,36B with a spike portion72A,72B is attached to the fixing hole69, for example. To facilitate attachment of a resin tread part34to the body part30, the resin tread parts34and the body part30have corresponding features which ensure proper alignment. For example, as seen inFIGS.7,8and14to16, each of the fixing holes68has an outer surface68awhich protrudes outwardly from the first side30for the second side30gof the body part30. This protrusion is configured to align with a corresponding indentation66asurrounding a corresponding through-hole66of the resin tread part34. However, this protrusion can be omitted. In the following description, the outer surface68a,168ais described as a projecting outer surface in order to easily distinguish it from other outer surfaces34b,66c,166c. The protruding outer surface68ahas a top wall and side wall. The indentation66ahas a side wall and a bottom wall. The side wall of the indentation66acontacts the side wall of the protruding outer surface68a. The bottom wall of the indentation66acontacts the top wall of the protruding outer surface68a. That is, the indentation66ais provided on the side of the tread portion34facing the body portion30in a case where the tread portion34is attached to the body portion30. Additionally, as seen inFIG.8, a mating surface34aof each resin tread part34is indented with respect to an outer surface34band one or more surface protrusions34c, thus enabling the first side30for second side30gof the body part30to align with the indented mating surface34a. Each resin tread part34also includes a portion34dwhich fills the second gap44of the exit opening40bwhen fully installed. As seen inFIGS.7and8, each resin tread part34is configured to at least partially cover the first side30for the second side30gof the body part30. Here, the first resin tread part34A at least partially covers the first side portion30dand the second side portion30eon the first side30fof the body part30. Likewise, the second resin tread part34B at least partially covers the first side portion30dand the second side portion30eon the second side30gof the body part30. However, the resin tread parts34do not cover the central portion30aof the body part30in the illustrated embodiment, thus enabling the pedal body16to be formed as thin as possible with enough room in the central portion30ato receive the pedal shaft14. 
As illustrated, the plurality of threaded fasteners36do not need to all be the same. The threaded fasteners36can include one or more first threaded fastener36A, one or more second threaded fastener36B, and one or more third threaded fastener36C. By mixing or rearranging different types of threaded fasteners36, a rider can customize the bicycle pedal12for the best shoe grip. InFIGS.7and8, for example, a plurality of first threaded fasteners36A, a plurality of second threaded fasteners36B, and a plurality of third threaded fasteners36C are used to attach each of the first resin tread part34A and the second resin tread part34B to the body part30. Here, different fastener configurations are used on the first side30fand second side30gof the body part30, thus enabling a rider to alternate between two different configurations by rotating the pedal body16to the opposite side. For example, inFIG.2the majority of the threaded fasteners36are third threaded fasteners36C without spikes (7of12), whereas inFIG.3the majority of the threaded fasteners36are first threaded fasteners36A with spikes (6of12) and second threaded fasteners with spikes (4of12), enabling the rider to alternate between a mostly spiked grip and a mostly non-spiked grip by rotating the pedal body16. As seen inFIGS.14to16, the fixing holes68on opposite sides of the body part30can align with each other. InFIG.14, for example, a first fixing hole68A aligns with a second fixing hole68B, and a third fixing hole68C aligns with fourth fixing hole68D. Thus, a first hole through the first side30fand the second side30gof the body part30includes the first fixing hole68A and the second fixing hole68B, and a second hole through the first side30fand the second side30gof the body part30includes the third fixing hole68C and the fourth fixing hole68D. Here, the fixing holes68on opposite sides of the body part30connect with each other (e.g., the fixing holes68A and68B form a continuous hole through the body part30, and the fixing holes68C and68D form a continuous hole through the body part30) and a threaded inner surface68bextends continuously therethrough. However, the fixing holes68on opposite sides of the body part30do not have to connect, or can include separately spaced apart ones of the threaded inner surfaces68b. By aligning the fixing holes68in this manner and using different types of threaded fasteners36on opposite sides of body part30, the manufacturer or rider can customize each side of body part30, for example, for use with a different type of shoe. When customized in this manner, the rider can rotate the bicycle pedal to accommodate whichever shoe is intended for that side. FIGS.17and18show the first threaded fastener36A in more detail. Here, the first threaded fastener36A includes a threaded portion70A and a spike portion72A. The threaded portion70A and the spike portion72A are located on opposite ends of a fastener axis AF1. The threaded portion70A is configured to screw into a fixing hole68in the body part30. The spike portion72A is configured to protrude in the opposite direction to grip the rider's shoe. This way, the first threaded fastener36A achieves the dual purpose of attaching a resin tread part34to the body part30and providing a spike to grip the rider's shoe. 
Since the first threaded fasteners36A are removably attached to the body part30via the threaded portion70A, threaded fasteners36with different sizes or types of spike portions72A can be moved or interchanged to suit the needs of the rider using the bicycle pedal12(e.g., the rider can modify the spike location and height as preferred). The threaded portion70A can further include a first threaded section74A and a second threaded section76A. The first threaded section74A has a first diameter, and the second threaded section76A has a second diameter. As seen inFIGS.17and18, the first threaded section74A has a larger diameter than the second threaded section76A. That is, the second threaded section76A has a second diameter that is smaller than the first diameter. The first threaded section74A is configured to be provided in a hole formed in the bicycle pedal12. The second threaded section76A is configured to screw into the bicycle pedal12. As seen inFIGS.15and16, the first threaded section74A is provided at the resin tread part34, and the second threaded section76A is screwed into the body part30. Specifically, as seen inFIGS.15and16, the first threaded section74A locates within a through-hole66of the resin tread part34, and the second threaded section76A is screwed into a corresponding fixing hole68of the body part30. Both the first threaded section74A and the second threaded section76A can include screw threads. At least one screw thread of the first threaded section74A can contact a side wall66bof the through-hole66. At least one screw thread of the second threaded section76A can screw into the fixing hole68. Additionally, at least one screw thread of the first threaded section74A can cut into the side wall66bof the through-hole66to deform the resin material, and at least one screw thread of the second threaded section76A can screw into a threaded inner surface68bcorresponding to a side wall of the fixing hole68. As seen inFIGS.14to16, the spike portion72A protrudes outwardly with respect to the resin tread part34. More specifically, the spike portion72A protrudes outwardly with respect to the resin tread part34when attached to the pedal body16, such that the spike portion72A helps grip the rider's shoe when the rider uses the bicycle pedal12. The spike portion72A can include a circumferential surface78A and a top surface80A. The circumferential surface78A of the spike portion72A can include a plurality of circumferential grooves or at least one spiral groove (not shown inFIGS.17and18). In this way, the spike portion72A can protrude into the treads in the rider's shoe and grip the surfaces of the treads. Here, the top surface80A is shown as a flat surface, but the top surface80A can also include other surfaces or grooves to assist in gripping the rider's shoe. Additionally, although the circumferential surface78A is shown inFIGS.17and18as a forming a straight cylinder, the circumferential surface78A can also be angled as shown for example by the second threaded fastener36B shown inFIGS.19and20. The first threaded fastener36A can include a head portion82A having an abutment surface84A that contacts an outer surface of the resin tread part34. Thus, as seen inFIGS.15and16, when the second threaded section76A screws into the fixing hole68of the body part30, the abutment surface84A contacts the outer surface66csurrounding the through-hole66and presses the resin tread part34into the body part30for secure attachment. The outer surface66chas a side wall and a bottom wall. 
The bottom wall of the outer surface66ccontacts the abutment surface84A of the head portion82A. That is, the outer surface66cis provided on the stepping surface side of the tread part34in a case where the tread part34is attached to the body part30. The first threaded fastener36A can include an additional abutment surface86A between the first threaded section74A and the second threaded section76A in a fastener direction with respect to a fastener axis AF1of the first threaded fastener36A. The additional abutment surface86A contacts an outer surface of the body part30. This limits the depth of the first threaded fastener36A, placing the spike portion72A at an appropriate height. When the second threaded section76A screws into the fixing hole68of the body part30, the additional abutment surface86A can contact the protruding outer surface68asurrounding the fixing hole68to limit the depth of the first threaded fastener36A in the direction of the fastener axis AF1. The additional abutment surface86A can contact the protruding outer surface68asurrounding the fixing hole68to generate an axial force that fixes the first threaded fastener36A to the body part30. As described above, the protruding outer surface68ahas a top wall and side wall. The additional abutment surface86A can contact the top wall of the protruding outer surface68a. The first threaded fastener36A can include a tool-engagement portion88A located between the spike portion72A and the threaded portion70A. The tool-engagement portion88A can include a plurality of grooves90A which extend parallel to the fastener axis AF1of the first threaded fastener36A. Thus, a tool can be fitted over the spike portion72A and mated with the plurality of grooves90A, enabling attachment or detachment of the first threaded fastener36A by rotation of the tool. The tool-engagement portion88A can have other shapes. The tool-engagement portion88A can have a polygonal shape, such as a hexagonal shape. FIGS.15and16show several first threaded fasteners36A attaching a first resin tread part34A and a second resin tread part34B to a body part30. To attach the resin tread part34to the body part30, the resin tread part34is first placed against the body part30so that one or more through-holes66of the resin tread part34align with one or more fixing holes68of the body part30. Then, the second threaded section76A is screwed into the threaded inner surface68bof the fixing hole68until the additional abutment surface86A abuts the outer surface68aof the body part30surrounding the fixing hole68. At the same time, the first threaded section74A can contact the side wall66bof the through-hole66. Optionally, the first threaded section74A can cut into the side wall66bof the through-hole66(e.g., by about 0.2 mm) and deform the resin for an attachment grip, thus restricting the first threaded fastener36A from loosening. Here, the first threaded section74A is dimensioned so that the abutment surface84A contacts an outer surface66cof the resin tread part34surrounding the through-hole66. Thus, the first threaded section74A is configured to be provided at a tread part34of the bicycle pedal12, and the second threaded section76A is configured to be screwed into the body part30of the bicycle pedal12. Additionally, the spike portion72A is configured to protrude outwardly with respect to the tread part34. 
As seen inFIGS.15and16, the spike portion72A is configured to protrude outwardly from an indentation66dsurrounding the through-hole66, thus hiding the head portion82A within the indentation66dso that only the spike portion72A is contacted by the rider's shoe. InFIG.15, the abutment surface84A of the head portion82A contacts the outer surface66c. However, the additional abutment surface86A does not contact the protruding outer surface68a. The first threaded fastener36A is tightened until the additional abutment surface86A contacts the protruding outer surface68a. In that case, the abutment surface84A is pressed against the outer surface66ceven after contacting the outer surface66c. Thus, the outer surface66cis deformed by the abutment surface84A. This deformation can be within the range of elastic deformation of the outer surface66c, for example. FIGS.19and20show the second threaded fastener36B in more detail. Here, the second threaded fastener36B includes a threaded portion70B and a spike portion72B. The threaded portion70B and the spike portion72B are located on opposite ends of a fastener axis AF2. The threaded portion70B is configured to screw into a fixing hole68in the body part30. The spike portion72B is configured to protrude in the opposite direction to grip the rider's shoe. This way, the second threaded fastener36B achieves the dual purpose of attaching a resin tread part34to the body part30and providing a spike to grip the rider's shoe. Since the second threaded fasteners36B are removably attached to the body part30via the threaded portion70B, different threaded fasteners36with different sizes or types of spike portions72B can be moved or interchanged to suit the needs of the rider using the bicycle pedal12. Here, the threaded portion70B includes a single diameter. Thus, as seen inFIG.14, the threaded portion70B passes through both the through-hole66of the resin tread part34and the fixing hole68of the body part30. At least one screw thread of the threaded portion70B can contact the side wall66bof the through-hole66, and at least one screw thread of the threaded portion70B can screw into the fixing hole68. Additionally, at least one screw thread of the threaded portion70B can cut into the side wall66bof the through-hole66to deform the resin material, and at least one screw thread of the threaded portion70B can screw into a threaded inner surface68bcorresponding to a side wall of the fixing hole68. In an alternative embodiment, the threaded portion70B can include multiple sections with different diameters, for example, as demonstrated by the first threaded section74A and the second threaded section76A of the first threaded fastener36A discussed above. As seen inFIG.14, the spike portion72B protrudes outwardly with respect to the resin tread part34. More specifically, the spike portion72B protrudes outwardly with respect to the resin tread part34when attached to the pedal body16, such that the spike portion72B helps grip the rider's shoe when the rider uses the bicycle pedal12. The spike portion72B can include a circumferential surface78B and a top surface80B. Here, the circumferential surface78B includes angled sidewalls to form a conical shape. The circumferential surface78B can also include a plurality of circumferential grooves or at least one spiral groove just like the circumferential surface78A of the first threaded fastener36A. In this way, the spike portion72B can protrude into the treads in the rider's shoe and grip the surfaces of the treads.
Here, the top surface80B is shown as having rounded corners and a flat surface, but the top surface80B can also include other surfaces or grooves to assist in gripping the rider's shoe. The second threaded fastener36B further includes a head portion82B having an abutment surface84B that contacts an outer surface of the resin tread part34. Thus, as seen inFIG.14, when the threaded portion70B screws into the fixing hole68of the body part30, the abutment surface84B contacts the outer surface66csurrounding the through-hole66and presses the resin tread part34into the body part30for secure attachment. Here, the head portion82B further includes a plurality of indentations85B around the perimeter thereof. The abutment surface84B can contact the outer surface66csurrounding the through-hole66to generate an axial force that fixes the second threaded fastener36B to the body part30. The second threaded fastener36B can further include a tool-engagement portion88B located between the spike portion72B and the threaded portion70B. The tool-engagement portion88B can include a plurality of grooves90B which extend parallel to the fastener axis AF2of the second threaded fastener36B. Thus, a tool can be fitted over the spike portion72B and mated with the plurality of grooves90B, enabling attachment or detachment of the second threaded fastener36B by rotation of the tool. Here, the head portion82B further includes a plurality of indentations85B around the perimeter thereof. The plurality of indentations85B has the effect of reducing the weight of the fastener. In addition, the plurality of indentations85B can have a function of a tool-engagement portion. The plurality of indentations85B and the plurality of grooves90B have a different profile with respect to each other. Here, the different profile includes at least one of different size and different shape. By having two tool-engagement portions, even if one tool-engagement portion breaks, another tool-engagement portion can be used. FIG.14shows two second threaded fasteners36B attaching a first resin tread part34A and a second resin tread part34B to a body part30. To attach the resin tread part34to the body part30, the resin tread part34is first placed against the body part30so that one or more through-holes66of the resin tread part34align with one or more fixing holes68of the body part30. Then, the threaded portion70B is screwed into the threaded inner surface68bof the fixing hole68until the abutment surface84B contacts an outer surface66cof the resin tread part34surrounding the through-hole66. As shown, the spike portion72B is configured to protrude outwardly with respect to the tread part34when fully installed. As seen inFIG.14, the spike portion72B is configured to protrude outwardly from an indentation66dsurrounding the through-hole66, thus hiding the head portion82B within the indentation66dso that only the spike portion72B is contacted by the rider's shoe. FIGS.21and22show the third threaded fastener36C in more detail. Here, the third threaded fastener36C includes a threaded portion70C and a tool-engagement portion88C located on opposite ends of a fastener axis AF3. The threaded portion70C is configured to screw into a fixing hole68in the body part30. The tool-engagement portion88C has a short height without a spike portion as included by the first threaded fastener36A and second threaded fastener36B. 
By using a tool-engagement portion88C without a spike portion, the third threaded fastener36C can be interchanged with the first threaded fastener36A or second threaded fastener36B to enable a rider to remove a spike from a location on the bicycle pedal12while still keeping the resin tread part34attached to the body part30at that location. As seen inFIG.16, the height of the tool-engagement portion88C of the third threaded fastener36C allows most or all of the tool-engagement portion88C to be located within the indentation66dsurrounding the through-hole66of the resin tread part34, thus restricting the third threaded fastener36C from interfering with the rider's shoe. Here, the tool-engagement portion88C includes a top hexagonal indentation to receive a corresponding tool, but other tool engagement surfaces are also possible. The threaded portion70C can further include a first threaded section74C and a second threaded section76C. As seen inFIGS.21and22, the first threaded section74C has a larger diameter than the second threaded section76C. When the third threaded fastener36C attaches a resin tread part34to the body part30, the first threaded section74C is provided at the resin tread part34, and the second threaded section76C is screwed into the body part30. Specifically, as seen inFIG.16, the first threaded section74C locates within a through-hole66of the resin tread part34, and the second threaded section76C is screwed into a corresponding fixing hole68of the body part30. Both the first threaded section74C and the second threaded section76C can include screw threads. At least one screw thread of the first threaded section74C can contact the side wall66bof the through-hole66, and at least one screw thread of the second threaded section76C can screw into the fixing hole68. Additionally, at least one screw thread of the first threaded section74C can cut into the side wall66bof the through-hole66to deform the resin material, and at least one screw thread of the second threaded section76C can screw into a threaded inner surface68bcorresponding to a side wall of the fixing hole68. The third threaded fastener36C further includes a head portion82C having an abutment surface84C that contacts an outer surface of the resin tread part34. Thus, as seen inFIG.16, when the second threaded section76C screws into the fixing hole68of the body part30, the abutment surface84C contacts the outer surface66csurrounding the through-hole66and presses the resin tread part34into the body part30for secure attachment. The third threaded fastener36C further includes an additional abutment surface86C between the first threaded section74C and the second threaded section76C in a fastener direction with respect to a fastener axis AF3. The additional abutment surface86C contacts an outer surface of the body part30, which limits the depth of the third threaded fastener36C. When the second threaded section76C screws into the fixing hole68of the body part30, the additional abutment surface86C contacts the protruding outer surface68asurrounding the fixing hole68to limit the depth of the third threaded fastener36C in the direction of the fastener axis AF3. The additional abutment surface86C can contact the protruding outer surface68asurrounding the fixing hole68to generate an axial force that fixes the third threaded fastener36C to the body part30. FIG.16shows a third threaded fastener36C attaching a first resin tread part34A to a body part30.
To attach the resin tread part34to the body part30, the resin tread part34is first placed against the body part30so that one or more through-holes66of the resin tread part34align with one or more fixing holes68of the body part30. Then, the second threaded section76C is screwed into the threaded inner surface68bof the fixing hole68until the additional abutment surface86C abuts the outer surface68aof the body part30surrounding the fixing hole68. At the same time, the first threaded section74C can contact the side wall66bof the through-hole66. Optionally, the first threaded section74C can cut into the side wall66bof the through-hole66(e.g., by about 0.2 mm) and deform the resin for an attachment grip, thus restricting the third threaded fastener36C from loosening. Here, the first threaded section74C is dimensioned so that the abutment surface84C contacts an outer surface66cof the resin tread part34surrounding the through-hole66. Thus, the first threaded section74C is configured to be provided at a tread part34of the bicycle pedal12, and the second threaded section76C is configured to be screwed into the body part30of the bicycle pedal12. InFIGS.14and16, the abutment surface84C of the head portion82C contacts the outer surface66c. However, the additional abutment surface86C does not contact the protruding outer surface68a. The third threaded fastener36C is tightened until the additional abutment surface86C contacts the protruding outer surface68a. In that case, the abutment surface84C is pressed against the outer surface66ceven after contacting the outer surface66c. Thus, the outer surface66cis deformed by the abutment surface84C. This deformation can be within the range of elastic deformation of the outer surface66c, for example. The first threaded fastener36A and second threaded fastener36B discussed herein can also be referred to as "spike pins" for a bicycle pedal12. Thus, for example, a spike pin for a bicycle pedal12can comprise a spike portion72A,72B, a threaded portion70A,70B, and a tool-engagement portion88A,88B. The spike portion72A,72B can be configured to protrude outwardly with respect to a tread part34of the bicycle pedal12. The threaded portion70A,70B can be configured to screw into the bicycle pedal12. The tool-engagement portion88A,88B can be located between the spike portion72A,72B and the threaded portion70A,70B. The rest of the features discussed above can also be included in the spike pin and descriptions are omitted for brevity. Referring now toFIGS.23to28, a bicycle pedal112in accordance with a second embodiment will be explained. In view of the similarity between the first and second embodiments, the parts of the second embodiment that are identical to the parts of the first embodiment will be given the same reference numerals as the parts of the first embodiment. Moreover, the descriptions of the parts of the second embodiment that are identical to the parts of the first embodiment may be omitted for the sake of brevity. The main difference between the bicycle pedal112ofFIGS.23to28and the bicycle pedal12ofFIGS.1to16is that the bicycle pedal112uses an alternative body part130, resin tread parts134, and threaded fasteners136. It should be understood by those of ordinary skill in the art from this disclosure that any of the features of bicycle pedal112can be added to the bicycle pedal12of the first embodiment, and vice versa. The body part130includes fixing holes168which differ in geometry from the fixing holes68of the body part30.
Like the previous embodiment, the protruding outer surfaces168aof the fixing holes168can protrude outwardly from the first side30for second side30gof the body part130. The protruding outer surfaces168ahas a side wall and a top wall. Here, however, the side wall of the protruding outer surface168aof the fixing hole168is angled inwardly in a direction away from the body part130to create a conical shape. The body part130can also include fixing holes169that are not used to attach the resin tread part134. The threaded fastener136A,136B with a spike portion172A,172B is attached to the fixing hole169, for example. Likewise, the resin tread part134includes a plurality of through-holes166which differ from the through-holes66of the resin tread part34. Like the indentation66a, an indentation166aalso has a side wall. Like the protruding outer surface68a, a protruding outer surface168aalso has a side wall and top wall. As seen inFIG.27, each through-hole166has the side wall of an indentation166awhich is angled to substantially match the angle of the side wall of a protruding outer surface168aof a corresponding fixing hole168. Thus, the bicycle pedal112differs from the bicycle pedal12in how the resin tread part134aligns with the body part130. FIG.25shows the attachment of the resin tread parts134to the body part130. As illustrated, the resin tread parts134are attached to the body part130by the at least one threaded fastener136. Here, the at least one threaded fastener136includes a plurality of the threaded fasteners136, and the at least one resin tread part134includes a first resin tread part134A and a second resin tread part134B. As illustrated, the threaded fasteners136do not need to all be the same. The threaded fasteners136can include one or more first threaded fastener136A, one or more second threaded fastener136B, or one or more third threaded fastener136C. As discussed above, by mixing or rearranging different types of threaded fasteners136, a rider can customize the bicycle pedal112for the best shoe grip. FIGS.29and30show the first threaded fastener136A in more detail. Here, the first threaded fastener136A includes a threaded portion170A and a spike portion172A. The threaded portion170A and the spike portion172A are located on opposite ends of a fastener axis AF4. The threaded portion170A is configured to screw into a fixing hole168in the body part130. The spike portion172A is configured to protrude in the opposite direction to grip the rider's shoe. This way, the first threaded fastener136A achieves the dual purpose of attaching a resin tread part134to the body part130and providing a spike to grip the rider's shoe. As seen inFIGS.26to28, the spike portion172A is configured to protrude outwardly with respect to the resin tread part134, such that the spike portion172A helps grip the rider's shoe when the rider uses the bicycle pedal112. The spike portion172A can include a circumferential surface178A and a top surface180A. The circumferential surface178A of the spike portion172A can include a plurality of circumferential grooves or at least one spiral groove (not shown inFIGS.29and30). In this way, the spike portion172A can protrude into the treads in the rider's shoe and grip the surfaces of the treads. Here, the top surface180A is shown as a flat surface, but the top surface180A can also include other surface or grooves to assist in gripping the rider's shoe. 
Additionally, although the circumferential surface178A is shown inFIGS.29and30as a forming a straight cylinder, the circumferential surface178A can also be angled as shown for example by the second threaded fastener136B shown inFIGS.31and32. The first threaded fastener136A can further include a head portion182A having an abutment surface184A that contacts the outer surfaces166c,168aof the body part130and the resin tread part134. The abutment surface184A contacts a bottom wall of the outer surface166c. The abutment surface184A contacts a top wall of the protruding outer surface168a. Similar to the first embodiment, the abutment surface184A first contacts the outer surface166c. The abutment surface184A then deforms the outer surface166c. Thereafter, abutment surface184A contacts protruding outer surface168a. Here, the head portion182A includes a plurality of indentations185A, such that the head portion182A has a smaller inner radius R1which extends from the fastener axis AF4to the center of the indentations185A, and a larger outer radius R2which extends from the fastener axis AF4to the perimeter of the head portion182A between the indentations185A. As seen inFIG.27, when attaching a resin tread part134to a body part130, this configuration allows the portion of the abutment surface184A within the smaller inner radius R1to contact the protruding outer surface168aof the body part130surrounding the fixing hole168, while the portion of the abutment surface184A between the smaller inner radius R1and the larger outer radius R2contacts the surface of the resin tread part134surrounding the through-hole166, thus pressing the resin tread part134into the body part130. The portion of the abutment surface184A within the smaller inner radius R1can generate an axial force that fixes the first threaded fastener136A to the body part130. The portion of the abutment surface184A between the smaller inner radius R1and the larger outer radius R2presses the resin tread part134into the body part130for secure attachment. The first threaded fastener136A can further include a tool-engagement portion188A located between the spike portion172A and the threaded portion170A. The tool-engagement portion188A can include a plurality of grooves190A which extend parallel to the fastener axis AF4of the first threaded fastener136A. Thus, a tool can be fitted over the spike portion172A and mated with the plurality of grooves190A, enabling attachment or detachment of the first threaded fastener136A by rotation of the tool. Here, the tool-engagement portion188A includes a first tool-engagement portion192A and a second tool-engagement portion194A arranged in a fastener axial direction with respect to a fastener axis AF4of the first threaded fastener136A. As shown inFIGS.29and30, the first tool-engagement portion192A and the second tool-engagement portion194A have a different profile with respect to each other. Here, the profiles are different in shape and size. For example, the plurality of grooves190A differ in shape and thickness within the first tool-engagement portion192A in comparison to the second tool-engagement portion194A, with the protrusions196A surrounding the grooves190A having an increased thickness and triangular shape at the second tool-engagement portion194A. 
By using a first tool-engagement portion192A and a second tool-engagement portion194A having different profiles in this manner, multiple different types of tools can be used to remove the first threaded fastener136A from the body part130, which can be advantageous if one of the sections of the groove190A or protrusions196A breaks during installation or use. The multiple different types of tools include both the same type of tools that differ in size and different types of tools. FIGS.30and31show the second threaded fastener136B in more detail. Here, the second threaded fastener136B includes a threaded portion170B and a spike portion172B. The threaded portion170B and the spike portion172B are located on opposite ends of a fastener axis AF5. The threaded portion170B is configured to screw into a fixing hole168in the body part130. The spike portion172B is configured to protrude in the opposite direction to grip the rider's shoe. This way, the second threaded fastener136B achieves the dual purpose of attaching a resin tread part134to the body part130and providing a spike to grip the rider's shoe. As seen inFIGS.26to28, the spike portion172B is configured to protrude outwardly with respect to the resin tread part134, such that the spike portion172B helps grip the rider's shoe when the rider uses the bicycle pedal112. The spike portion172B can include a circumferential surface178B and a top surface180B. Here, the circumferential surface178B includes angled sidewalls to form a conical shape. The circumferential surface178B of the spike portion172B can also include a plurality of circumferential grooves or at least one spiral groove (not shown inFIGS.31and32). In this way, the spike portion172B can protrude into the treads in the rider's shoe and grip the surfaces of the treads. Here, the top surface180B is shown as having rounded corners and a flat surface, but the top surface180B can also include other surfaces or grooves to assist in gripping the rider's shoe. The second threaded fastener136B can further include a head portion182B having an abutment surface184B that contacts the outer surfaces166c,168aof the body part130and the resin tread part134. The abutment surface184B contacts a bottom wall of the outer surface166c. The abutment surface184B contacts a top wall of the protruding outer surface168a. Similar to the first embodiment, the abutment surface184B first contacts the outer surface166c. The abutment surface184B then deforms the outer surface166c. Thereafter, the abutment surface184B contacts protruding outer surface168a. Here, the head portion182B includes a plurality of indentations185B, such that the head portion182B has a smaller inner radius R3which extends from the fastener axis AF5to the center of the indentations185B, and a larger outer radius R4which extends from the fastener axis AF5to the perimeter of the head portion182B between the indentations185B. When attaching a resin tread part134to a body part130, this configuration allows the portion of the abutment surface184B within the smaller inner radius R3to contact the protruding outer surface168aof the body part130surrounding the fixing hole168, while the portion of the abutment surface184B between the smaller inner radius R3and the larger outer radius R4contacts the surface of the resin tread part134surrounding the through-hole166, thus pressing the resin tread part134into the body part130. 
The portion of the abutment surface184B within the smaller inner radius R3can generate an axial force that fixes the second threaded fastener136B to the body part130. The portion of the abutment surface184B between the smaller inner radius R3and the larger outer radius R4presses the resin tread part134into the body part130for secure attachment. The second threaded fastener136B can further include a tool-engagement portion188B located between the spike portion172B and the threaded portion170B. The tool-engagement portion188B can include a plurality of grooves190B which extend parallel to the fastener axis AF5of the second threaded fastener136B. Thus, a tool can be fitted over the spike portion172B and mated with the plurality of grooves190B, enabling attachment or detachment of the second threaded fastener136B by rotation of the tool. Here, the tool-engagement portion188B includes a first tool-engagement portion192B and a second tool-engagement portion194B arranged in a fastener axial direction with respect to a fastener axis AF5of the second threaded fastener136B. As shown inFIGS.31and32, the first tool-engagement portion192B and the second tool-engagement portion194B have a different profile with respect to each other. Here, the profiles are different in shape and size. For example, the plurality of grooves190B differ in shape and thickness within the first tool-engagement portion192B in comparison to the second tool-engagement portion194B, with the protrusions196B surrounding the grooves190B having an increased thickness and triangular shape at the second tool-engagement portion194B. By using a first tool-engagement portion192B and a second tool-engagement portion194B having different profiles in this manner, multiple different types of tools can be used to remove the second threaded fastener136B from the body part130, which can be advantageous if one of the sections of the groove190B or protrusions196B break during installation or use. FIGS.33and34show the third threaded fastener136C in more detail. Here, the third threaded fastener136C includes a threaded portion170C and a tool-engagement portion188C. The threaded portion170C and the tool-engagement portion188C are located on opposite ends of a fastener axis AF6. The threaded portion170C is configured to screw into a fixing hole168in the body part130. The tool-engagement portion188C has a short height without a spike portion as included by the first threaded fastener136A and second threaded fastener136B. By using a tool-engagement portion188C without a spike portion, the third threaded fastener136C can be interchanged with the first threaded fastener136A or the second threaded fastener136B to enable a rider to remove a spike from a location on the bicycle pedal112while still keeping the resin tread part134attached to the body part130at that location. As seen inFIG.28, the height of the tool-engagement portion188C of the third threaded fastener136C allows most or all of the tool-engagement portion188C to be located within an indentation surrounding the through-hole166of the resin tread part134, thus restricting the third threaded fastener136C from interfering with the rider's shoe. Here, the tool-engagement portion188C includes a top hexagonal indentation to receive a corresponding tool, but other tool engagement surfaces are also possible. The third threaded fastener136C can further include a head portion182C having an abutment surface184C that contacts the outer surfaces166c,168aof the body part130and the resin tread part134. 
The abutment surface184C contacts a bottom wall of the outer surface166c. The abutment surface184C contacts a top wall of the protruding outer surface168a. Similar to the first embodiment, the abutment surface184C first contacts the outer surface166c. The abutment surface184C then deforms the outer surface166c. Thereafter, abutment surface184C contacts protruding outer surface168a. Here, the head portion182C includes a plurality of indentations185C, such that the head portion182C has a smaller inner radius R5which extends from the fastener axis AF6to the center of the indentations185C, and a larger outer radius R6which extends from the fastener axis AF6to the perimeter of the head portion182C between the indentations185C. When attaching a resin tread part134to a body part130, this configuration allows the portion of the abutment surface184C within the smaller inner radius R5to contact the protruding outer surface168aof the body part130surrounding the fixing hole168, while the portion of the abutment surface184C between the smaller inner radius R5and the larger outer radius R6contacts the surface of the resin tread part134surrounding the through-hole166, thus pressing the resin tread part134into the body part130. The portion of the abutment surface184C within the smaller inner radius R5can generate an axial force that fixes the first threaded fastener136C to the body part130. The portion of the abutment surface184C between the smaller inner radius R5and the larger outer radius R6presses the resin tread part134into the body part130for secure attachment. The first threaded fastener136A and second threaded fastener136B discussed herein can also be referred to as “spike pins” for a bicycle pedal12. Thus, for example, a spike pin for a bicycle pedal112can comprise a spike portion172A,172B, a threaded portion170A,170B, and a tool-engagement portion188A,188B. The spike portion172A,172B can be configured to protrude outwardly with respect to a tread part134of the bicycle pedal112. The threaded portion170A,170B can be configured to screw into the bicycle pedal112. The tool-engagement portion188A,188B can be located between the spike portion172A,172B and the threaded portion170A,170B. The tool-engagement portion188A,188B can include a first tool-engagement portion192A,192B and a second tool-engagement portion194A,194B arranged in a spike pin axial direction with respect to a spike pin axis of the spike pin. The first tool-engagement portion192A,192B and the second tool-engagement portion194A,194B have a different profile with respect to each other. Here, the different profile includes at least one of different size and different shape. By having two tool-engagement portions, even if one tool-engagement portion breaks, another tool-engagement portion can be used. The rest of the features discussed above can also be included in the spike pin and descriptions are omitted for brevity. The shape of screws that secure the pedal shaft14to the bicycle crank18are specified by ISO standards. In this embodiment, the screw that secure the pedal shaft14to the bicycle crank18is the outer thread20a. Also, the lip20cof the pedal shaft14generally has a same diameter for compatibility. For example, the thickness of the thin bicycle pedal12,112is smaller than the diameter of the lip20cof the pedal shaft14. In this embodiment, the diameter of the lip20ccan be 18 mm. The pedal shaft diameter of the bearing portion is, for example, equal to or larger than 6.5 mm. 
The pedal shaft diameter of the bearing portion is more preferably equal to or larger than 6.7 mm, for example. The pedal shaft diameter of the bearing portion is, for example, equal to or smaller than 13 mm. If the pedal shaft diameter of the bearing portion is smaller than 8 mm without the load receiving part32, the pedal shaft14may be broken. The stress concentration on the pedal shaft14is likely to occur at a portion where the shaft diameter changes between short axial lengths. In other words, the stress concentration is likely to occur where a step is formed on the pedal shaft14in an axial direction. Also, stress concentration can be reduced by making this step a curved surface in the axial direction. The stress concentration can be reduced by providing an R-shaped corner between the wall surface of the step and the outer peripheral surface of the pedal shaft14having a small diameter. In understanding the scope of the present invention, the term “comprising” and its derivatives, as used herein, are intended to be open ended terms that specify the presence of the stated features, elements, components, groups, integers, and/or steps, but do not exclude the presence of other unstated features, elements, components, groups, integers and/or steps. The foregoing also applies to words having similar meanings such as the terms, “including”, “having” and their derivatives. Also, the terms “part,” “section,” “portion,” “member” or “element” when used in the singular can have the dual meaning of a single part or a plurality of parts unless otherwise stated. As used herein, the following directional terms “frame facing side”, “non-frame facing side”, “forward”, “rearward”, “front”, “rear”, “up”, “down”, “above”, “below”, “upward”, “downward”, “top”, “bottom”, “side”, “vertical”, “horizontal”, “perpendicular” and “transverse” as well as any other similar directional terms refer to those directions of a human-powered vehicle field (e.g., bicycle) in an upright, riding position and equipped with the bicycle pedal. Accordingly, these directional terms, as utilized to describe the bicycle pedal should be interpreted relative to a human-powered vehicle field (e.g., bicycle) in an upright riding position on a horizontal surface and that is equipped with the bicycle pedal. The terms “left” and “right” are used to indicate the “right” when referencing from the right side as viewed from the rear of the human-powered vehicle field (e.g., bicycle), and the “left” when referencing from the left side as viewed from the rear of the human-powered vehicle field (e.g., bicycle). The phrase “at least one of” as used in this disclosure means “one or more” of a desired choice. For one example, the phrase “at least one of” as used in this disclosure means “only one single choice” or “both of two choices” if the number of its choices is two. For another example, the phrase “at least one of” as used in this disclosure means “only one single choice” or “any combination of equal to or more than two choices” if the number of its choices is equal to or more than three. Also, it will be understood that although the terms “first” and “second” may be used herein to describe various components, these components should not be limited by these terms. These terms are only used to distinguish one component from another. Thus, for example, a first component discussed above could be termed a second component and vice versa without departing from the teachings of the present invention. 
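Returning to the dimensional example given above for the bearing portion of the pedal shaft14and the lip20c, those example relationships can be checked with a short illustrative sketch. This is not part of the claimed subject matter; the limit values (6.5 mm example minimum, 6.7 mm preferred minimum, 13 mm example maximum, 18 mm lip diameter) are taken from the description above, while the sample input values are hypothetical.

```python
# Illustrative sketch only; the limits are example values from the description,
# and the sample inputs are hypothetical.

LIP_DIAMETER_MM = 18.0          # example diameter of the lip (20c)
BEARING_DIA_MIN_MM = 6.5        # example minimum bearing-portion shaft diameter
BEARING_DIA_PREF_MIN_MM = 6.7   # more preferable minimum
BEARING_DIA_MAX_MM = 13.0       # example maximum


def check_dimensions(bearing_shaft_dia_mm: float, pedal_thickness_mm: float) -> dict:
    """Check the example dimensional relationships of the thin bicycle pedal."""
    return {
        "bearing diameter in example range":
            BEARING_DIA_MIN_MM <= bearing_shaft_dia_mm <= BEARING_DIA_MAX_MM,
        "bearing diameter meets preferred minimum":
            bearing_shaft_dia_mm >= BEARING_DIA_PREF_MIN_MM,
        "pedal thinner than lip diameter":
            pedal_thickness_mm < LIP_DIAMETER_MM,
    }


if __name__ == "__main__":
    for name, ok in check_dimensions(7.0, 16.0).items():  # hypothetical values
        print(f"{name}: {ok}")
```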
The term “attached” or “attaching”, as used herein, encompasses configurations in which an element is directly secured to another element by affixing the element directly to the other element; configurations in which the element is indirectly secured to the other element by affixing the element to the intermediate member(s) which in turn are affixed to the other element; and configurations in which one element is integral with another element, i.e. one element is essentially part of the other element. This definition also applies to words of similar meaning, for example, “joined”, “connected”, “coupled”, “mounted”, “bonded”, “fixed” and their derivatives. Finally, terms of degree such as “substantially”, “about” and “approximately” as used herein mean an amount of deviation of the modified term such that the end result is not significantly changed. While only selected embodiments have been chosen to illustrate the present invention, it will be apparent to those skilled in the art from this disclosure that various changes and modifications can be made herein without departing from the scope of the invention as defined in the appended claims. For example, unless specifically stated otherwise, the size, shape, location or orientation of the various components can be changed as needed and/or desired so long as the changes do not substantially affect their intended function. Unless specifically stated otherwise, components that are shown directly connected or contacting each other can have intermediate structures disposed between them so long as the changes do not substantially affect their intended function. The functions of one element can be performed by two, and vice versa unless specifically stated otherwise. The structures and functions of one embodiment can be adopted in another embodiment. It is not necessary for all advantages to be present in a particular embodiment at the same time. Every feature which is unique from the prior art, alone or in combination with other features, also should be considered a separate description of further inventions by the applicant, including the structural and/or functional concepts embodied by such feature(s). Thus, the foregoing descriptions of the embodiments according to the present invention are provided for illustration only, and not for the purpose of limiting the invention as defined by the appended claims and their equivalents. | 78,657 |
11858588 | DESCRIPTION OF THE EMBODIMENTS The embodiment(s) will now be described with reference to the accompanying drawings, wherein like reference numerals designate corresponding or identical elements throughout the various drawings. As seen inFIG.1, a human-powered vehicle2includes a vehicle body4and a drive train6. The drive train6includes a rear sprocket assembly10and a rear hub assembly12. The rear hub assembly12is secured to the vehicle body4. The rear sprocket assembly10is configured to be mounted to the rear hub assembly12for the human-powered vehicle2. The rear sprocket assembly10is rotatably supported by the rear hub assembly12relative to the vehicle body4about a rotational center axis A1. The human-powered vehicle2has an axial center plane CP. The axial center plane CP is defined in a transverse center position of the vehicle body4of the human-powered vehicle2. The axial center plane CP is perpendicular to the rotational center axis A1. The drive train6includes a crank assembly6A, a front sprocket6B, and a chain C. The crank assembly6A is rotatably mounted to the vehicle body4. The front sprocket6B is secured to the crank assembly6A. The chain C is engaged with the front sprocket6B and the rear sprocket assembly10to transmit pedaling force from the front sprocket6B to the rear sprocket assembly10. The front sprocket6B includes a single sprocket wheel in the present embodiment. However, the front sprocket6B can include a plurality of sprocket wheels. In the present application, the following directional terms “front,” “rear,” “forward,” “rearward,” “left,” “right,” “transverse,” “upward” and “downward” as well as any other similar directional terms refer to those directions which are determined on the basis of a user (e.g., a rider) who is in the user's standard position (e.g., on a saddle or a seat) in the human-powered vehicle2facing a handlebar or steering. Accordingly, these terms, as utilized to describe the rear sprocket assembly10, the rear hub assembly12, or other components, should be interpreted relative to the human-powered vehicle2equipped with the rear sprocket assembly10, the rear hub assembly12, or other components as used in an upright riding position on a horizontal surface. In the present application, a human-powered vehicle includes various kinds of bicycles such as a mountain bike, a road bike, a city bike, a cargo bike, a hand bike, and a recumbent bike. Furthermore, the human-powered vehicle includes an electric bike (E-bike). The electric bike includes an electrically assisted bicycle configured to assist propulsion of a vehicle with an electric motor. However, a total number of wheels of the human-powered vehicle is not limited to two. For example, the human-powered vehicle includes a vehicle having one wheel or three or more wheels. Especially, the human-powered vehicle does not include a vehicle that uses only an internal-combustion engine as motive power. Generally, a light road vehicle, which includes a vehicle that does not require a driver's license for a public road, is assumed as the human-powered vehicle. As seen inFIG.2, the rear sprocket assembly10includes a plurality of rear sprockets SP. The plurality of rear sprockets SP is configured to engage with the chain C. The plurality of rear sprockets SP includes first to twelfth sprockets SP1to SP12. Namely, the rear sprocket assembly10comprises the first sprocket SP1and the second sprocket SP2. However, the total number of the plurality of rear sprockets SP is not limited to twelve.
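As an illustrative aside (not part of the original disclosure), the drive train described above transmits pedaling force through whichever rear sprocket the chain C engages, and the resulting gear ratio is simply the front tooth count divided by the rear tooth count. The short sketch below assumes a hypothetical 40-tooth front sprocket and hypothetical tooth counts for most of the twelve rear sprockets; only the nine teeth of the first sprocket SP1and the ten teeth of the second sprocket SP2are taken from the embodiment described later.

# Minimal sketch (illustrative assumptions, not from the disclosure):
# wheel revolutions per crank revolution for each rear sprocket of a
# twelve-sprocket assembly driven by a single front sprocket.
front_teeth = 40  # hypothetical tooth count of the front sprocket 6B
rear_teeth = [9, 10, 12, 14, 16, 18, 21, 24, 28, 33, 39, 45]  # SP1..SP12, mostly hypothetical

def gear_ratio(front: int, rear: int) -> float:
    """Wheel revolutions per crank revolution for a chain drive."""
    return front / rear

for index, teeth in enumerate(rear_teeth, start=1):
    print(f"SP{index}: {gear_ratio(front_teeth, teeth):.2f}")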
The rear hub assembly12includes a hub axle14, a hub body16, and a sprocket support body18. The hub axle14is configured to be secured to the vehicle body4(see e.g.,FIG.1) of the human-powered vehicle2. The hub body16is rotatably mounted on the hub axle14about the rotational center axis A1. The sprocket support body18is rotatably mounted on the hub axle14about the rotational center axis A1. The rear sprocket assembly10is configured to be mounted to the sprocket support body18. The sprocket support body18includes a plurality of external spline teeth18A. The rear sprocket assembly10is configured to engage with the plurality of external spline teeth18A of the sprocket support body18. As seen inFIG.3, the rear hub assembly12includes a ratchet structure20. The ratchet structure20is configured to allow the sprocket support body18to rotate relative to the hub body16about the rotational center axis A1in only one rotational direction. The ratchet structure20is configured to restrict the sprocket support body18from rotating relative to the hub body16about the rotational center axis A1in the other rotational direction. The first sprocket SP1has a first sprocket outer diameter DM1. The second sprocket SP2has a second sprocket outer diameter DM2larger than the first sprocket outer diameter DM1. The second sprocket SP2is adjacent to the first sprocket SP1without another sprocket between the first sprocket SP1and the second sprocket SP2in an axial direction D1with respect to the rotational center axis A1. The second sprocket SP2can also be referred to as an adjacent sprocket SP2. Thus, the plurality of rear sprockets SP includes the adjacent sprocket SP2. The first sprocket outer diameter DM1is the smallest among outer diameters of the first to twelfth sprockets SP1to SP12in the present embodiment. Thus, the first sprocket SP1can also be referred to as a smallest sprocket SP1. Thus, the first sprocket SP1is a smallest sprocket SP1in the rear sprocket assembly10. The first sprocket SP1can also be referred to as a top-gear sprocket SP1. The adjacent sprocket SP2is adjacent to the smallest sprocket SP1without another sprocket between the adjacent sprocket SP2and the smallest sprocket SP1in the axial direction D1. The third sprocket SP3has a third sprocket outer diameter DM3which is larger than the second sprocket outer diameter DM2. The third sprocket SP3is adjacent to the second sprocket SP2without another sprocket between the second sprocket SP2and the third sprocket SP3in the axial direction D1. The rear sprocket assembly10includes a sprocket carrier22. The sixth to twelfth sprockets SP6to SP12are mounted on the sprocket carrier22. The sixth to twelfth sprockets SP6to SP12are secured to the sprocket carrier22with fasteners24such as rivets in the present embodiment. However, a total number of sprockets secured to the sprocket carrier22is not limited to the embodiment illustrated inFIG.3. The sprocket carrier22is configured to be in contact with a positioning surface18C of the sprocket support body18. However, the structure of the sprocket carrier22is not limited to the structure illustrated inFIG.3. The sprocket carrier22can be omitted from the rear sprocket assembly10if needed and/or desired. In such a case, all of the sprockets directly engage with the sprocket support body18. As seen inFIG.4, the first sprocket SP1includes a first sprocket body SP11, a plurality of first sprocket teeth SP12, and a first sprocket opening SP13.
The plurality of first sprocket teeth SP12extends radially outwardly from the first sprocket body SP11in a radial direction with respect to the rotational center axis A1of the rear sprocket assembly10. The plurality of first sprocket teeth SP12define the first sprocket outer diameter DM1. The first sprocket opening SP13of the first sprocket SP1has a first diameter DM11. The first sprocket SP1has a first radially minimum portion SP19defining the first radially minimum diameter DM11of the first sprocket opening SP13. In the present embodiment, a total number of the first sprocket teeth SP12is nine. However, the total number of the first sprocket teeth SP12is not limited to nine. As seen inFIG.5, the second sprocket SP2includes a second sprocket body SP21, a plurality of second sprocket teeth SP22, and a second sprocket opening SP23. The plurality of second sprocket teeth SP22extends radially outwardly from the second sprocket body SP21in the radial direction. The plurality of second sprocket teeth SP22defines the second sprocket outer diameter DM2. The second sprocket opening SP23of the second sprocket SP2has a second diameter DM21. In the present embodiment, a total number of the second sprocket teeth SP22is ten. However, the total number of the second sprocket teeth SP22is not limited to ten. As seen inFIG.6, the first sprocket opening SP13is configured to receive the hub axle14of the rear hub assembly12in a mounting state where the rear sprocket assembly10is mounted to the rear hub assembly12. The first diameter DM11is smaller than an outermost diameter DM6of the sprocket support body18of the rear hub assembly12. The first diameter DM11can also be referred to as a first radially minimum diameter DM11. Thus, the first sprocket opening SP13has the first radially minimum diameter DM11that is smaller than the outermost diameter DM6of the sprocket support body18of the rear hub assembly12. The plurality of spline teeth18A define the outermost diameter DM6. However, the first diameter DM11can be larger than or equal to the outermost diameter DM6of the sprocket support body18if needed and/or desired. The second sprocket opening SP23is configured to receive the hub axle14of the rear hub assembly12in the mounting state. The second diameter DM21is smaller than the outermost diameter DM6of the sprocket support body18of the rear hub assembly12. The second diameter DM21is larger than the first diameter DM11. However, the second diameter DM21can be smaller than or equal to the first diameter DM11if needed and/or desired. The second diameter DM21can be larger than or equal to the outermost diameter DM6of the sprocket support body18if needed and/or desired. The sprocket support body18includes an axial end18B provided on an axial outermost end of the sprocket support body18in the axial direction D1. The hub axle14includes an axial end14B provided on an axial outermost end of the hub axle14in the axial direction D1. The first sprocket SP1is configured to be provided between the axial ends14B and18B in the axial direction D1. The second sprocket SP2is configured to be provided between the axial ends14B and18B in the axial direction D1. The rear sprocket assembly10comprises a lock device26. The lock device26is configured to fix the rear sprocket assembly10to the sprocket support body18of the rear hub assembly12in the mounting state. The lock device26is configured to mount the first sprocket SP1and the second sprocket SP2to the rear hub assembly12. 
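The diameter relations just stated (the first radially minimum diameter DM11and the second diameter DM21both smaller than the outermost diameter DM6of the sprocket support body18) are why the two smallest sprockets cannot simply be slid over the splined support body and are instead retained by the lock device26introduced below. The following minimal sketch encodes those inequalities; the millimetre values are hypothetical, since the disclosure gives only the relations.

# Minimal sketch (hypothetical values; the disclosure states only inequalities).
dm11 = 30.0  # first (smallest) sprocket opening SP13
dm21 = 33.0  # second sprocket opening SP23
dm6 = 35.0   # outermost diameter of the sprocket support body 18

assert dm11 < dm6, "first sprocket opening is smaller than the support body"
assert dm21 < dm6, "second sprocket opening is smaller than the support body"
assert dm11 < dm21, "first opening is smaller than the second opening"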
As seen inFIG.3, the lock device26is configured to be attached to the sprocket support body18to hold the sprocket carrier22and the first to fifth sprockets SP1to SP5between the lock device26and the positioning surface18C of the sprocket support body18in the axial direction D1. As seen inFIG.6, the lock device26includes an axially inward end26A and an axially outward end26B. The axially outward end26B is opposite to the axially inward end26A in the axial direction D1. The lock device26for mounting the plurality of rear sprockets SP to the rear hub assembly12for the human-powered vehicle2comprises a first lock member28and a second lock member30. The first lock member28includes the axially inward end26A. The second lock member30includes the axially outward end26B. The first lock member28is configured to detachably engage with the sprocket support body18of the rear hub assembly12in the mounting state. The second lock member30is configured to detachably engage with the first lock member28so as to abut against the first sprocket SP1in the axial direction D1in the mounting state. In the present embodiment, the first lock member28is a separate member from the second lock member30. However, the first lock member28can be integrally provided with the second lock member30as a one-piece unitary member if needed and/or desired. The first lock member28is configured to detachably engage with the axial end18B of the sprocket support body18in the mounting state. The first lock member28is configured to be at least partly provided in the second sprocket opening SP23in the mounting state. The second lock member30is configured to be at least partly provided in the first sprocket opening SP13and the second sprocket opening SP23in the mounting state. The term “detachable” or “detachably” as used herein, encompasses a configuration in which an element is repeatedly detachable from and attachable to another element without substantial damage. As seen inFIGS.7and8, the rear sprocket assembly10comprises at least one tooth-position maintaining member32. The at least one tooth-position maintaining member32is configured to maintain a relative position between the plurality of first sprocket teeth SP12and the plurality of second sprocket teeth SP22in a circumferential direction D2with respect to the rotational center axis A1. In the present embodiment, the rear sprocket assembly10comprises the tooth-position maintaining member32. However, the rear sprocket assembly10can comprise a plurality of tooth-position maintaining member32if needed and/or desired. The at least one tooth-position maintaining member32can be omitted from the rear sprocket assembly10if needed and/or desired. The at least one tooth-position maintaining member32includes a fixed portion34and at least one guide portion36. The fixed portion34is configured to be fixed to one of the first sprocket SP1and the second sprocket SP2. The at least one guide portion36is configured to engage with the other of the first sprocket SP1and the second sprocket SP2such that the other of the first sprocket SP1and the second sprocket SP2is slidable relative to the one of the first sprocket SP1and the second sprocket SP2in the axial direction D1. In the present embodiment, the tooth-position maintaining member32includes the fixed portion34and the at least one guide portion36. The fixed portion34is fixed to the first sprocket SP1. The fixed portion34is fixed to the first sprocket SP1in a press-fit manner. The fixed portion34has an annular shape. 
The fixed portion34includes an opening34A. However, the shape of the fixed portion is not limited to the annular shape. The at least one guide portion36is configured to engage with the second sprocket SP2such that the second sprocket SP2is slidable relative to the first sprocket SP1in the axial direction D1. However, the fixed portion34can be configured to be fixed to the first sprocket SP1if needed and/or desired. The fixed portion34can be fixed to the second sprocket SP2if needed and/or desired. The fixed portion34can be fixed to the one of the first sprocket SP1and the second sprocket SP2in a manner other than the press-fit manner. The at least one guide portion36can be configured to engage with the first sprocket SP1such that the first sprocket SP1is slidable relative to the second sprocket SP2in the axial direction D1if needed and/or desired. The at least one guide portion36includes a plurality of guide portions36. The at least one guide portion36includes a first guide portion36A, a second guide portion36B, and a third guide portion36C. The at least one guide portion36extends from the fixed portion34in the axial direction D1. The first guide portion36A, the second guide portion36B, and the third guide portion36C extend from the fixed portion34in the axial direction D1. The first guide portion36A, the second guide portion36B, and the third guide portion36C are spaced apart from each other in the circumferential direction D2. In the present embodiment, the at least one guide portion36includes the first guide portion36A, the second guide portion36B, the third guide portion36C, and no other guide portion which is configured to engage with the second sprocket SP2such that the second sprocket SP2is slidable relative to the first sprocket SP1in the axial direction D1. However, a total number of the at least one guide portion36is not limited to three. The first guide portion36A, the second guide portion36B, and the third guide portion36C are configured to engage with the second sprocket SP2such that the second sprocket SP2is slidable relative to the first sprocket SP1in the axial direction D1. However, the at least one guide portion36can be configured to engage with the first sprocket SP1such that the first sprocket SP1is slidable relative to the second sprocket SP2in the axial direction D1if needed and/or desired. As seen inFIGS.9and10, the second sprocket SP2includes at least one guide groove37. The at least one guide portion36is configured to be movably provided in the at least one guide groove37in the axial direction D1. In the present embodiment, the at least one guide groove37includes a first guide groove37A, a second guide groove37B, and a third guide groove37C. The first guide groove37A, the second guide groove37B, and the third guide groove37C are spaced apart from each other in the circumferential direction D2. The first guide portion36A is configured to be movably provided in the first guide groove37A in the axial direction D1. The second guide portion36B is configured to be movably provided in the second guide groove37B in the axial direction D1. The third guide portion36C is configured to be movably provided in the third guide groove37C in the axial direction D1. The at least one tooth-position maintaining member32may include a plurality of tooth-position maintaining members if needed and/or desired. In such embodiments, the tooth-position maintaining members are separate members from each other. 
Each of the tooth-position maintaining members includes the fixed portion34and the at least one guide portion36. Furthermore, the at least one tooth-position maintaining member32and the one of the first sprocket SP1and the second sprocket SP2may be integrally provided with each other as a one-piece unitary member if needed and/or desired. As seen inFIGS.11and12, the at least one guide portion36is disposed radially outwardly from the fixed portion34in the radial direction. The first guide portion36A is disposed radially outwardly from the fixed portion34in the radial direction. The second guide portion36B is disposed radially outwardly from the fixed portion34in the radial direction. The third guide portion36C is disposed radially outwardly from the fixed portion34in the radial direction. The at least one guide portion36extends in the axial direction D1. The first guide portion36A extends in the axial direction D1. The second guide portion36B extends in the axial direction D1. The third guide portion36C extends in the axial direction D1. The at least one guide portion36extends in the circumferential direction D2. The first guide portion36A extends in the circumferential direction D2. The second guide portion36B extends in the circumferential direction D2. The third guide portion36C extends in the circumferential direction D2. The at least one tooth-position maintaining member32includes at least one connecting portion38. The at least one connecting portion38connects the at least one guide portion36to the fixed portion34. The at least one connecting portion38extends in a direction that intersects with the rotational center axis A1. The at least one connecting portion38includes a plurality of connecting portions38. The plurality of connecting portions38includes a first connecting portion38A, a second connecting portion38B, and a third connecting portion38C. The first connecting portion38A connects the first guide portion36A to the fixed portion34. The second connecting portion38B connects the second guide portion36B to the fixed portion34. The third connecting portion38C connects the third guide portion36C to the fixed portion34. The first connecting portion38A extends from the fixed portion34to the first guide portion36A in the axial direction D1. The first connecting portion38A extends from the fixed portion34to the first guide portion36A in a first axial direction D11. The first axial direction D11is parallel to the axial direction D1. The first connecting portion38A extends radially outwardly from the fixed portion34to the first guide portion36A. The first guide portion36A extends from the first connecting portion38A in the first axial direction D11. The second connecting portion38B extends from the fixed portion34to the second guide portion36B in the axial direction D1. The second connecting portion38B extends from the fixed portion34to the second guide portion36B in the first axial direction D11. The second connecting portion38B extends radially outwardly from the fixed portion34to the second guide portion36B. The second guide portion36B extends from the second connecting portion38B in the first axial direction D11. The third connecting portion38C extends from the fixed portion34to the third guide portion36C in the axial direction D1. The third connecting portion38C extends from the fixed portion34to the third guide portion36C in the first axial direction D11. The third connecting portion38C extends radially outwardly from the fixed portion34to the third guide portion36C. 
The third guide portion36C extends from the third connecting portion38C in the first axial direction D11. The fixed portion34has a first axial length L11, a first radial length L12and a first circumferential length L13with respect to the rotational center axis A1. The first axial length L11is defined in the axial direction D1. The first radial length L12is defined in the radial direction. The first circumferential length L13is defined in the circumferential direction D2. In the present embodiment, the first circumferential length L13is larger than the first axial length L11and the first radial length L12. The first axial length L11is larger than the first radial length L12. However, the first circumferential length L13can be smaller than or equal to at least one of the first axial length L11and the first radial length L12if needed and/or desired. The first axial length L11can be smaller than or equal to the first radial length L12if needed and/or desired. The at least one guide portion36has a second axial length L21, a second radial length L22and a second circumferential length L23with respect to the rotational center axis A1. The second axial length L21is defined in the axial direction D1. The second radial length L22is defined in the radial direction. The second circumferential length L23is defined in the circumferential direction D2. In the present embodiment, the second circumferential length L23is larger than the second axial length L21and the second radial length L22. The second axial length L21is larger than the second radial length L22. The second axial length L21is equal to or larger than 2 mm. In the present embodiment, the second axial length L21is 3 mm. However, the second axial length L21is not limited to the above range and length. The second circumferential length L23can be smaller than or equal to at least one of the second axial length L21and the second radial length L22if needed and/or desired. The second axial length L21can be smaller than or equal to the second radial length L22if needed and/or desired. The tooth-position maintaining member32includes at least one protrusion40. The at least one protrusion40is configured to position the tooth-position maintaining member32relative to the first sprocket SP1when the tooth-position maintaining member32is attached to the first sprocket SP1. The at least one protrusion40is configured to restrict a relative rotation between the tooth-position maintaining member32and the first sprocket SP1in the circumferential direction D2in a state where the tooth-position maintaining member32is fixed to the first sprocket SP1. The at least one protrusion40includes a plurality of protrusions40. The plurality of protrusions40includes a first protrusion40A, a second protrusion40B, and a third protrusion40C. The first protrusion40A is provided in a circumferential position corresponding to a circumferential position of the first guide portion36A. The first protrusion40A protrudes from the fixed portion34in the axial direction D1. The first protrusion40A protrudes from the fixed portion34in a second axial direction D12which is an opposite direction of the first axial direction D11. The second axial direction D12is parallel to the first axial direction D11. The first protrusion40A can be offset from the first guide portion36A in the circumferential direction D2if needed and/or desired. The first protrusion40A can be omitted from the tooth-position maintaining member32if needed and/or desired. 
The second protrusion40B is provided in a circumferential position corresponding to a circumferential position of the second guide portion36B. The second protrusion40B protrudes from the fixed portion34in the axial direction D1. The second protrusion40B protrudes from the fixed portion34in the second axial direction D12. The second protrusion40B can be offset from the second guide portion36B in the circumferential direction D2if needed and/or desired. The second protrusion40B can be omitted from the tooth-position maintaining member32if needed and/or desired. The third protrusion40C is provided in a circumferential position corresponding to a circumferential position of the third guide portion36C. The third protrusion40C protrudes from the fixed portion34in the axial direction D1. The third protrusion40C protrudes from the fixed portion34in the second axial direction D12. The third protrusion40C can be offset from the third guide portion36C in the circumferential direction D2if needed and/or desired. The third protrusion40C can be omitted from the tooth-position maintaining member32if needed and/or desired. As seen inFIGS.9and10, the first sprocket SP1includes at least one positioning recess41. The at least one positioning recess41includes a plurality of positioning recesses41. The plurality of positioning recesses41includes a first positioning recess41A, a second positioning recess41B, and a third positioning recess41C. The first protrusion40A is configured to be provided in the first positioning recess41A in the state where the tooth-position maintaining member32is fixed to the first sprocket SP1. The second protrusion40B is configured to be provided in the second positioning recess41B in the state where the tooth-position maintaining member32is fixed to the first sprocket SP1. The third protrusion40C is configured to be provided in the third positioning recess41C in the state where the tooth-position maintaining member32is fixed to the first sprocket SP1. As seen inFIG.13, the first guide portion36A, the second guide portion36B and the third guide portion36C form an isosceles triangle when viewed from the axial direction D1. The first guide portion36A, the second guide portion36B and the third guide portion36C can be circumferentially arranged at constant or different intervals if needed and/or desired. A first circumferential center plane36A1is defined to bisect the first circumferential length L12of the first guide portion36A as viewed along the rotational center axis A1. The first circumferential center plane36A1radially outwardly extends from the rotational center axis A1to bisect the first circumferential length L12as viewed along the rotational center axis A1. A second circumferential center plane36B1is defined to bisect the second circumferential length L22of the second guide portion36B as viewed along the rotational center axis A1. The second circumferential center plane36B1radially outwardly extends from the rotational center axis A1to bisect the second circumferential length L22as viewed along the rotational center axis A1. A third circumferential center plane36C1is defined to bisect the third circumferential length L32of the third guide portion36C as viewed along the rotational center axis A1. The third circumferential center plane36C1radially outwardly extends from the rotational center axis A1to bisect the third circumferential length L32as viewed along the rotational center axis A1.
A first circumferential angle AG1is defined between the first circumferential center plane36A1and the second circumferential center plane36B1in the circumferential direction D2. A second circumferential angle AG2is defined between the second circumferential center plane36B1and the third circumferential center plane36C1in the circumferential direction D2. A third circumferential angle AG3is defined between the first circumferential center plane36A1and the third circumferential center plane36C1in the circumferential direction D2. The first circumferential angle AG1is equal to the third circumferential angle AG3. The second circumferential angle AG2is different from the first circumferential angle AG1and the third circumferential angle AG3. The second circumferential angle AG2is smaller than the first circumferential angle AG1and the third circumferential angle AG3. However, the second circumferential angle AG2can be larger than or equal to at least one of the first circumferential angle AG1and the third circumferential angle AG3if needed and/or desired. The first circumferential angle AG1can be different from the third circumferential angle AG3if needed and/or desired. The second circumferential angle AG2is different from the first circumferential angle AG1and the third circumferential angle AG3. Thus, the first guide portion36A, the second guide portion36B, and the third guide portion36C define a single circumferential position of the second sprocket SP2relative to the first sprocket SP1in a state where the tooth-position maintaining member32is fixed to the first sprocket SP1and a state where the second sprocket SP2is engaged with the first guide portion36A, the second guide portion36B, and the third guide portion36C. The second sprocket body SP21of the second sprocket SP2has at least one circumferential abutment surface42. The at least one circumferential abutment surface42is configured to abut against the at least one guide portion36to maintain the relative position between the plurality of first sprocket teeth SP12and the plurality of second sprocket teeth SP22in the circumferential direction D2. The second sprocket body SP21has a plurality of first circumferential abutment surfaces42A configured to abut against the first guide portion36A to maintain the relative position between the plurality of first sprocket teeth SP12and the plurality of second sprocket teeth SP22in the circumferential direction D2. The first guide portion36A is provided between the first circumferential abutment surfaces42A in the circumferential direction D2in the mounting state. The first circumferential abutment surfaces42A define the first guide groove37A. The second sprocket body SP21has a plurality of second circumferential abutment surfaces42B configured to abut against the second guide portion36B to maintain the relative position between the plurality of first sprocket teeth SP12and the plurality of second sprocket teeth SP22in the circumferential direction D2. The second guide portion36B is provided between the second circumferential abutment surfaces42B in the circumferential direction D2in the mounting state. The second circumferential abutment surfaces42B define the second guide groove37B. The second sprocket body SP21has a plurality of third circumferential abutment surfaces42C configured to abut against the third guide portion36C to maintain the relative position between the plurality of first sprocket teeth SP12and the plurality of second sprocket teeth SP22in the circumferential direction D2. 
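Because the circumferential angles between the guide portions described above are not all equal (AG2differs from AG1and AG3), the guide portions can register with the guide grooves37of the second sprocket SP2in only one relative orientation. The following minimal sketch illustrates that point with hypothetical angles consistent with the stated relations (AG1equal to AG3, AG2smaller, the three angles summing to 360 degrees); it is an illustration only, not part of the disclosure.

# Minimal sketch (hypothetical angles): unequal spacing of three guide portions
# leaves only one rotation that maps the pattern onto matching grooves.
def mounting_positions(gaps_deg):
    """Return the rotations (deg) that map the guide-portion pattern onto itself."""
    positions = []
    angle = 0.0
    for gap in gaps_deg:
        positions.append(angle % 360.0)
        angle += gap
    pattern = sorted(positions)
    hits = []
    for rotation in range(360):
        rotated = sorted((p + rotation) % 360.0 for p in pattern)
        if all(abs(a - b) < 1e-9 for a, b in zip(rotated, pattern)):
            hits.append(rotation)
    return hits

print(mounting_positions([140.0, 80.0, 140.0]))   # [0] -> a single mounting position
print(mounting_positions([120.0, 120.0, 120.0]))  # [0, 120, 240] -> ambiguous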
The third guide portion36C is provided between the third circumferential abutment surfaces42C in the circumferential direction D2in the mounting state. The third circumferential abutment surfaces42C define the third guide groove37C. As seen inFIG.14, the first lock member28includes a first axial end28A and a second axial end28B. The second axial end28B is opposite to the first axial end28A in the axial direction D1with respect to the rotational center axis A1of the plurality of rear sprockets SP. The first axial end28A is configured to be detachably attached to the sprocket support body18of the rear hub assembly12in a mounting state where the plurality of rear sprockets SP is mounted to the rear hub assembly12. The first axial end28A has first external threads28D. The second axial end28B has first internal threads28E. The axially inward end26A has the first external threads28D. The first external threads28D can also be referred to as first threads28D. The first internal threads28E can also be referred to as second threads28E. Thus, the first axial end28A has the first threads28D. The second axial end28B has the second threads28E. The axially inward end26A has the first threads28D. The first lock member28includes a first surface28C. The first surface28C radially outwardly faces in the radial direction with respect to the rotational center axis A1. The first surface28C is adjacent to the first external threads28D. The first surface28C extends from the first external threads28D in the axial direction D1. The first internal threads28E are provided radially inwardly of the first surface28C. The first surface28C is adjacent to the first threads28D. The first threads28D of the first lock member28extend radially outwardly from the first surface28C in the radial direction. The first external threads28D of the first lock member28extend radially outwardly from the first surface28C in the radial direction. The first external threads28D of the first lock member28are configured to engage with internal threads18D provided to the sprocket support body18of the rear hub assembly12in the mounting state. The internal threads18D is provided to the axial end18B of the sprocket support body18. The internal threads18D can also be referred to as threads18D. Thus, the first threads28D are configured to threadedly engage with the threads18D provided to the sprocket support body18of the rear hub assembly12in the mounting state where the rear sprocket assembly10is mounted to the rear hub assembly12. The second lock member30includes a third axial end30A and a fourth axial end30B. The fourth axial end30B is opposite to the third axial end30A in the axial direction D1. The third axial end30A is configured to be attached to the second axial end28B of the first lock member28in an assembled state where the first sprocket SP1and the lock device26are assembled as one unit. The third axial end30A is configured to be attached to the second axial end28B of the first lock member28in the assembled state where the smallest sprocket SP1and the lock device26are assembled as one unit. The third axial end30A is configured to be attached to the second axial end28B of the first lock member28in an assembled state where the lock device26, the first sprocket SP1, and the second sprocket SP2are assembled as one unit. 
The third axial end30A of the second lock member30is configured to be detachably attached to the second axial end28B of the first lock member28in the assembled state where the first or smallest sprocket SP1and the lock device26are assembled as one unit. The third axial end30A of the second lock member30is configured to be detachably attached to the second axial end28B of the first lock member28in the assembled state where the lock device26, the first sprocket SP1, and the second sprocket SP2are assembled as one unit. The third axial end30A has second external threads30D. The fourth axial end30B has at least one radial projection30F. Namely, the axially outward end26B has the at least one radial projection30F. The second external threads30D can also be referred to as third threads30D. Thus, the third axial end30A has the third threads30D. The second lock member30includes a second surface30C. The second surface30C radially outwardly faces in the radial direction. The second surface30C is adjacent to the second external threads30D and the at least one radial projection30F. The second surface30C is adjacent to the third threads30D. The second surface30C is adjacent to the at least one radial projection30F in the axial direction D1. The second surface30C is disposed between the third threads30D and the at least one radial projection30F. The first surface28C of the first lock member28is disposed radially outwardly from the second surface30C of the second lock member30in the radial direction with respect to the rotational center axis A1in the assembled state where the first sprocket SP1and the lock device26are assembled as one unit. The first radially minimum portion SP19of the first sprocket SP1is disposed radially outwardly of the second surface30C in the assembled state where the first sprocket SP1and the lock device26are assembled as one unit. The first internal threads28E of the first lock member28are configured to engage with the second external threads30D of the second lock member30. In other words, the third threads30D are configured to threadedly engage with the second threads28E of the first lock member28in the assembled state where the first sprocket SP1and the lock device26are assembled as one unit. The at least one radial projection30F of the second lock member30extends radially outwardly from the second surface30C in the radial direction. The at least one radial projection30F of the second lock member30is configured to abut against the smallest sprocket SP1of the plurality of rear sprockets SP in the axial direction D1in the mounting state where the plurality of rear sprockets SP is mounted to the rear hub assembly12. The at least one radial projection30F of the second lock member30is configured to abut against the smallest sprocket SP1of the plurality of rear sprockets SP in the axial direction D1in the mounting state where the plurality of rear sprockets SP and the lock device26are mounted to the rear hub assembly12. Namely, the at least one radial projection30F of the second lock member30is configured to abut against the first sprocket SP1in the axial direction D1in the mounting state where the rear sprocket assembly10is mounted to the rear hub assembly12. The at least one radial projection30F has a flange shape. However, the at least one radial projection30F may include a plurality of radial projections if needed and/or desired. The at least one radial projection30F may have shapes other than the flange shape if needed and/or desired. 
As seen inFIG.8, the first axial end28A of the first lock member28includes a first tool engagement profile28G. In the present embodiment, the first tool engagement profile28G includes a plurality of first tool engagement recesses28G1. The first tool engagement recesses28G1are circumferentially arranged at constant intervals. However, the structure of the first tool engagement profile28G is not limited to the first tool engagement recesses28G1. As seen inFIG.7, the fourth axial end30B of the second lock member30includes a second tool engagement profile30G. In the present embodiment, the at least one radial projection30F includes the second tool engagement profile30G. The second tool engagement profile30G includes a plurality of second tool engagement recesses30G1. The second tool engagement recesses30G1are circumferential arranged at constant intervals. However, the structure of the second tool engagement profile30G is not limited to the second tool engagement recesses30G1. The first tool engagement profile28G is configured to be engaged with a first tool. The second tool engagement profile30G is configured to be engaged with a second tool. The first lock member28and the second lock member30are rotated relative to each other using the first tool and the second tool in a state where the first tool is engaged with the first tool engagement profile28G and the second tool is engaged with the second tool engagement profile30G. Thus, the second external threads30D of the second lock member30is screwed into the first internal threads28E of the first lock member28. As seen inFIG.14, the lock device26is configured to dispose the first sprocket SP1between the first threads28D of the first lock member28and the at least one radial projection30F of the second lock member30in the axial direction D1in the assembled state where the first sprocket SP1and the lock device26are assembled as one unit. The lock device26is configured to dispose the first sprocket SP1and the second sprocket SP2between the first external threads28D of the first lock member28and the at least one radial projection30F of the second lock member30in the axial direction D1in the assembled state where the lock device26, the first sprocket SP1, and the second sprocket SP2are assembled as one unit. The first sprocket SP1and the second sprocket SP2are configured to be disposed between the at least one radial projection30F of the second lock member30and the sprocket support body18of the rear hub assembly12in the axial direction D1in the mounting state. The first lock member28and the second lock member30are configured to dispose at least two sprockets of the plurality of rear sprockets SP between the first external threads28D of the first lock member28and the at least one radial projection30F of the second lock member30in the axial direction D1in the assembled state where the first lock member28, the second lock member30, and the at least two sprockets of the plurality of rear sprockets SP are assembled as one unit. The at least two sprockets of the plurality of rear sprockets SP include the smallest sprocket SP1and a largest sprocket among the at least two sprockets. 
In the present embodiment, the first lock member28and the second lock member30are configured to dispose the first sprocket SP1and the second sprocket SP2between the first external threads28D of the first lock member28and the at least one radial projection30F of the second lock member30in the axial direction D1in the assembled state where the first lock member28, the second lock member30, the first sprocket SP1, and the second sprocket SP2are assembled as one unit. Thus, the at least two sprockets include the first sprocket SP1and the second sprocket SP2. The first sprocket SP1can also be referred to as the smallest sprocket SP1among the at least two sprockets. The second sprocket SP2can also be referred to as the largest sprocket SP2among the at least two sprockets. However, the at least two sprockets of the plurality of rear sprockets SP can include other sprockets of the plurality of rear sprockets SP if needed and/or desired. The first sprocket opening SP13can also be referred to as a smallest-sprocket opening SP13. The first diameter DM11of the first sprocket opening SP13can also be referred to as a smallest-sprocket diameter DM11. Thus, the smallest sprocket SP1includes the smallest-sprocket opening SP13having the smallest-sprocket diameter DM11. The second sprocket opening SP23can also be referred to as a largest-sprocket opening SP23. The second diameter DM21of the second sprocket opening SP23can also be referred to as a largest-sprocket diameter DM21. The largest sprocket SP2includes the largest-sprocket opening SP23having the largest-sprocket diameter DM21. The at least one radial projection30F has a radially outer diameter DM4. The first external threads28D has a major diameter DM5. The radially outer diameter DM4of the at least one radial projection30F can also be referred to as a radially maximum projection diameter DM4. The major diameter DM5of the first external threads28D can also be referred to as a first radially maximum thread diameter DM5. Thus, the at least one radial projection30F has the radially maximum projection diameter DM4. The first threads28D have the first radially maximum thread diameter DM5. The radially outer diameter DM4of the at least one radial projection30F is larger than the first diameter DM11of the first sprocket opening SP13. The major diameter DM5of the first external threads28D is larger than the second diameter DM21of the second sprocket opening SP23. Namely, the radially outer diameter DM4of the at least one radial projection30F is larger than the smallest-sprocket diameter DM11. The major diameter DM5of the first external threads28D is larger than the largest-sprocket diameter DM21. The first radially minimum diameter DM11of the first sprocket opening SP13is smaller than each of the first radially maximum thread diameter DM5of the first threads28D and the radially maximum projection diameter DM4of the at least one radial projection30F. The first lock member28has an axial contact surface28F disposed radially inwardly from the first surface28C. The axial contact surface28F is configured to contact the third axial end30A of the second lock member30in the assembled state where the first sprocket SP1and the lock device26are assembled as one unit. The axial contact surface28F is configured to contact the third axial end30A of the second lock member30in the assembled state where the lock device26, the first sprocket SP1, and the second sprocket SP2are assembled as one unit. 
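The diameter relations stated above are what retain the two sprockets on the lock device26: the radially maximum projection diameter DM4exceeds the smallest-sprocket opening DM11on one axial side, and the first radially maximum thread diameter DM5exceeds the largest-sprocket opening DM21on the other. The following minimal sketch restates that capture condition with hypothetical values; the disclosure itself gives only the inequalities.

# Minimal sketch (hypothetical values; only the inequalities come from the text).
dm11 = 30.0  # smallest-sprocket opening SP13
dm21 = 33.0  # largest-sprocket opening SP23 of the two captured sprockets
dm4 = 32.0   # radially maximum projection diameter of the radial projection 30F
dm5 = 34.5   # first radially maximum thread diameter of the first threads 28D

captured = dm4 > dm11 and dm5 > dm21
print("sprockets axially captured between 30F and 28D:", captured)  # True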
The axial contact surface28F is configured to contact the third axial end30A of the second lock member30in the assembled state where the first lock member28, the second lock member30, and the at least two sprockets of the plurality of rear sprockets SP are assembled as one unit. The axial contact surface28F is configured to contact the third axial end30A of the second lock member30in the assembled state where the first lock member28, the second lock member30, the first sprocket SP1, and the second sprocket SP2are assembled as one unit. The first sprocket SP1has a first axially outward surface SP14and a first axially inward surface SP15. The first axially outward surface SP14and the first axially inward surface SP15face toward opposite directions to each other in the axial direction D1. The first axially inward surface SP15is configured to face toward an axial center plane CP of the human-powered vehicle2in the mounting state where the rear sprocket assembly10is mounted to the rear hub assembly12. The second sprocket SP2has a second axially outward surface SP24and a second axially inward surface SP25. The second axially outward surface SP24and the second axially inward surface SP25face toward opposite directions to each other in the axial direction D1. The second axially inward surface SP25is configured to face toward the axial center plane CP of the human-powered vehicle2in the mounting state. As seen inFIGS.9and10, the first sprocket SP1includes a first axially inwardly torque transmitting profile SP16provided to the first axially inward surface SP15. The second sprocket SP2includes a second axially outwardly torque transmitting profile SP26provided to the second axially outward surface SP24. The first axially inwardly torque transmitting profile SP16is configured to engage with the second axially outwardly torque transmitting profile SP26in a torque-transmitting manner. The first axially inwardly torque transmitting profile SP16is configured to, in a torque-transmitting manner, engage with the second axially outwardly torque transmitting profile SP26of the second sprocket SP2adjacent to the first sprocket SP1without another sprocket between the first sprocket SP1and the second sprocket SP2in the axial direction D1in the mounting state where the rear sprocket assembly10is mounted to the rear hub assembly12. As seen inFIG.9, the first axially inwardly torque transmitting profile SP16includes a plurality of first teeth SP16A. The plurality of first teeth SP16A includes a plurality of first teeth SP16A1and a first tooth SP16A2. The first tooth SP16A2has a shape and/or size which is different from a shape and/or size of the plurality of first teeth SP16A1. In the present embodiment, the first tooth SP16A2has a circumferential width which is larger than a circumferential width of the first tooth SP16A1. As seen inFIG.10, the second axially outwardly torque transmitting profile SP26includes a plurality of second recesses SP26A. The plurality of second recesses SP26A includes a plurality of second recesses SP26A1and a second recess SP26A2. The second recess SP26A2has a shape and/or size different from a shape and/or size of the plurality of second recesses SP26A1. In the present embodiment, the second recess SP26A2has a circumferential width which is larger than a circumferential width of the second recess SP26A1. 
As seen inFIGS.9and10, the first teeth SP16A of the first sprocket SP1are configured to respectively engage with the second recesses SP26A of the second sprocket SP2in a torque transmitting manner. In the present embodiment, the first teeth SP16A1of the first sprocket SP1are configured to respectively engage with the second recesses SP26A1of the second sprocket SP2. The first tooth SP16A2of the first sprocket SP1is configured to engage with the second recess SP26A2of the second sprocket SP2. The first tooth SP16A2is configured not to engage with the second recess SP26A1since the circumferential width of the first tooth SP16A2is larger than the circumferential width of the second recess SP26A1. Thus, the first tooth SP16A2and the second recess SP26A2define a single circumferential position of the first sprocket SP1relative to the second sprocket SP2. As seen inFIGS.9and15, the second sprocket SP2includes a second axially inwardly torque transmitting profile SP27provided to the second axially inward surface SP25. The second axially inwardly torque transmitting profile SP27is configured to engage with one of a torque transmitting profile provided to the third sprocket SP3and a torque transmitting profile provided to the sprocket support body18of the rear hub assembly12in a torque-transmitting manner. In the present embodiment, as seen inFIG.15, the second axially inwardly torque transmitting profile SP27is configured to engage with a torque transmitting profile SP37provided to the third sprocket SP3in a torque-transmitting manner. However, the second axially inwardly torque transmitting profile SP27can be configured to engage with a torque transmitting profile provided to the sprocket support body18in a torque-transmitting manner if needed and/or desired. As seen inFIG.9, the second axially inwardly torque transmitting profile SP27includes a plurality of second additional teeth SP27A. The plurality of second additional teeth SP27A includes a plurality of second additional teeth SP27A1and a second additional tooth SP27A2. The second additional tooth SP27A2has a shape and/or size which is different from a shape and/or size of the plurality of second additional teeth SP27A1. In the present embodiment, the second additional tooth SP27A2has a circumferential width which is larger than a circumferential width of the second additional tooth SP27A1. As seen inFIG.15, the torque transmitting profile SP37includes a plurality of third recesses SP37A. The plurality of third recesses SP37A includes a plurality of third recesses SP37A1and a third recess SP37A2. The third recess SP37A2has a shape and/or size different from a shape and/or size of the plurality of third recesses SP37A1. In the present embodiment, the third recess SP37A2has a circumferential width which is larger than a circumferential width of the third recess SP37A1. As seen inFIGS.9and15, the second additional teeth SP27A of the second sprocket SP2are configured to respectively engage with the third recesses SP37A of the third sprocket SP3. In the present embodiment, the second additional teeth SP27A1of the second sprocket SP2are configured to respectively engage with the third recesses SP37A1of the third sprocket SP3. The second additional tooth SP27A2of the second sprocket SP2is configured to engage with the third recess SP37A2of the third sprocket SP3.
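The wider tooth and wider recess described above act as an indexing feature: a tooth can enter a recess only if it is no wider than that recess, so the single wide tooth admits exactly one relative position. The following minimal sketch illustrates this with hypothetical widths and an assumed count of eight teeth; it is an illustration only, not taken from the disclosure.

# Minimal sketch (hypothetical widths and tooth count): one oversized tooth and
# one oversized recess force a single relative mounting position.
tooth_widths  = [2.0] * 7 + [3.0]   # one wider tooth (like SP16A2 or SP27A2)
recess_widths = [2.2] * 7 + [3.2]   # one wider recess (like SP26A2 or SP37A2)

def valid_shifts(teeth, recesses):
    """Return the circumferential shifts at which every tooth fits its recess."""
    n = len(teeth)
    return [s for s in range(n)
            if all(teeth[i] <= recesses[(i + s) % n] for i in range(n))]

print(valid_shifts(tooth_widths, recess_widths))  # -> [0]: a single position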
The second additional tooth SP27A2is configured not to engage with the third recess SP37A1since the circumferential width of the second additional tooth SP27A2is larger than the circumferential width of the third recess SP37A1. Thus, the second additional tooth SP27A2and the third recess SP37A2define the rotational position of the second sprocket SP2relative to the third sprocket SP3. As seen inFIGS.6and15, the third sprocket SP3includes an additional torque transmitting profile SP38. The additional torque transmitting profile SP38is configured to engage with the plurality of external spline teeth18A of the sprocket support body18in a torque transmitting manner in the present embodiment. The additional torque transmitting profile SP38and the plurality of external spline teeth18A define a single circumferential position of the third sprocket SP3relative to the sprocket support body18. Rotational force is transmitted from the first sprocket SP1to the sprocket support body18via the second sprocket SP2and the third sprocket SP3. Rotational force is transmitted from the second sprocket SP2to the sprocket support body18via the third sprocket SP3. The assembly procedure in which the first sprocket SP1, the second sprocket SP2, the lock device26, and the tooth-position maintaining member32are assembled to the rear hub assembly12will be described below referringFIGS.7to10and16to18. As seen inFIG.16, the first sprocket SP1, the second sprocket SP2, the lock device26, and the tooth-position maintaining member32are assembled as a lock device assembly50before the first sprocket SP1and the second sprocket SP2are assembled to the rear hub assembly12. The lock device assembly50includes the first sprocket SP1, the second sprocket SP2, the lock device26, and the tooth-position maintaining member32. The lock device26is configured so that the first sprocket SP1is slidable relative to the lock device26in the axial direction D1in the assembled state and before the mounting state where the rear sprocket assembly10is mounted to the rear hub assembly12. The lock device26is configured so that the adjacent sprocket SP2is slidable in the axial direction D1relative to the lock device26in a space provided radially outwardly of the first surface28C and the second surface30C in the assembled state and before the mounting state where the plurality of rear sprockets SP is mounted to the rear hub assembly12. In the present embodiment, the first sprocket SP1is slidable between the first threads28D of the first lock member28and the at least one radial projection30F of the second lock member30in the axial direction D1in the assembled state where the rear sprocket assembly10is mounted to the rear hub assembly12. The first sprocket SP1is slidable between the first threads28D of the first lock member28and the at least one radial projection30F of the second lock member30in the axial direction D1in the assembled state before the mounting state where the rear sprocket assembly10is mounted to the rear hub assembly12. However, the lock device26can be configured so that the smallest sprocket SP1is static relative to the lock device26in the axial direction D1in the assembled state where the plurality of rear sprockets SP is mounted to the rear hub assembly12if needed and/or desired. 
The lock device26can be configured so that the smallest sprocket SP1is static relative to the lock device26in the axial direction D1in the assembled state before the mounting state where the plurality of rear sprockets SP is mounted to the rear hub assembly12if needed and/or desired. In such embodiments, the lock device26is configured to restrict an axial movement of the smallest sprocket SP1relative to the lock device26in both the first axial direction D11and the second axial direction D12in the assembled state before the mounting state. The at least one radial projection30F of the second lock member30is in contact with the first sprocket SP1to restrict an axial movement of the first sprocket SP1and the tooth-position maintaining member32relative to the lock device26in the second axial direction D12. The second axial end28B of the first lock member28is in contact with the tooth-position maintaining member32to restrict an axial movement of the first sprocket SP1and the tooth-position maintaining member32relative to the lock device26in the first axial direction D11. As seen inFIGS.9and10, for example, the fixed portion34of the tooth-position maintaining member32is inserted into the first sprocket opening SP13of the first sprocket SP1. At this time, the first protrusion40A, the second protrusion40B, and the third protrusion40C are inserted into the first positioning recess41A, the second positioning recess41B, and the third positioning recess41C. Thus, the tooth-position maintaining member32is fixed to the first sprocket SP1in the single circumferential position. The first guide portion36A, the second guide portion36B, and the third guide portion36C are inserted into the first guide groove37A, the second guide groove37B, and the third guide groove37C of the second sprocket SP2. Thus, the second sprocket SP2is assembled to the first sprocket SP1via the tooth-position maintaining member32in the single circumferential position relative to the first sprocket SP1. As seen inFIG.16, the third axial end30A of the second lock member30is inserted into the first sprocket opening SP13, the opening34A of the tooth-position maintaining member32, and the second sprocket opening SP23. The second external threads30D of the second lock member30is screwed into the first internal threads28E of the first lock member28. The first lock member28and the second lock member30are rotated relative to each other using the first tool and the second tool until the third axial end30A comes into contact with the axial contact surface28F of the first lock member28. Thus, the first sprocket SP1, the second sprocket SP2, the lock device26, and the tooth-position maintaining member32are assembled as the lock device assembly50. As seen inFIG.17, the first axial end28A of the first lock member28comes into contact with the axial end18B of the sprocket support body18when the lock device assembly50is assembled to the sprocket support body18. As seen inFIG.18, the second sprocket SP2is moved toward the third sprocket SP3to bring the second axially inwardly torque transmitting profile SP27into engagement with the torque transmitting profile SP37of the third sprocket SP3. 
The second sprocket SP2is rotated relative to the sprocket support body18about the rotational center axis A1to adjust the rotational position of the second axially inwardly torque transmitting profile SP27relative to the torque transmitting profile SP37of the third sprocket SP3, specifically such that the second additional tooth SP27A2of the second sprocket SP2engages with the third recess SP37A2of the third sprocket SP3. Since the tooth-position maintaining member32is configured to couple the first sprocket SP1and the second sprocket SP2such that the second sprocket SP2is slidable relative to the first sprocket SP1in the axial direction D1, the first sprocket SP1and the tooth-position maintaining member32are rotated relative to the sprocket support body18about the rotational center axis A1along with the second sprocket SP2in response to the rotation of the second sprocket SP2. Thus, the first sprocket SP1, the second sprocket SP2, and the third sprocket SP3are positioned relative to each other in the predetermined rotational positions in a state where the second axially inwardly torque transmitting profile SP27is engaged with the torque transmitting profile SP37of the third sprocket SP3. As seen inFIGS.14and17, the lock device26is rotated relative to the sprocket support body18about the rotational center axis A1using the second tool such that the first external threads28D of the first lock member28is screwed into the internal threads18D of the sprocket support body18. The rotational position between the second sprocket SP2and the third sprocket SP3is maintained relative to the sprocket support body18while the lock device26is rotated relative to the sprocket support body18since the second axially inwardly torque transmitting profile SP27of the second sprocket SP2is engaged with the torque transmitting profile SP37of the third sprocket SP3. The rotational position between the first sprocket SP1and the second sprocket SP2is maintained relative to the sprocket support body18while the lock device26is rotated relative to the sprocket support body18since the guide portions36of the tooth-position maintaining member32are engaged with the guide grooves37of the second sprocket SP2. Thus, the first axially inwardly torque transmitting profile SP16smoothly comes into engagement with the second axially outwardly torque transmitting profile SP26when the lock device26is tightened using the second tool. The first sprocket SP1and the second sprocket SP2are held between the radial projection30F and the third sprocket SP3in the axial direction D1when the lock device26is tightened using the tool. Thus, the first sprocket SP1and the second sprocket SP2are mounted to the sprocket support body18of the rear hub assembly12using the lock device26and the tooth-position maintaining member32. The structures of the first sprocket SP1, the second sprocket SP2, and the tooth-position maintaining member32can be applied to other rear sprocket assemblies. For example, the structures of the first sprocket SP1, the second sprocket SP2, and the tooth-position maintaining member32can be applied to rear sprocket assemblies210,310, and410illustrated inFIGS.19to21. The rear sprocket assemblies210,310, and410illustrated inFIGS.19to21have substantially the same structure as the structure of the rear sprocket assembly10. The sprocket carrier22of the embodiment is omitted from the rear sprocket assemblies210,310, and410. 
As seen inFIG.19, the sprockets SP of the rear sprocket assembly210includes first to eleventh sprockets SP201to SP211. The seventh to ninth sprockets SP207to SP209are secured to each other with fasteners225. The rear sprocket assembly210includes spacers SS21and SS22. The spacer SS21is provided between the seventh sprocket SP207and the eighth sprocket SP208. The spacer SS22is provided between the eighth sprocket SP208and the ninth sprocket SP209. The seventh to ninth sprockets SP207to SP209and the spacers SS21and SS22are secured to each other with the fasteners225. The spacer SS21includes a ring SS21A and a plurality of arms SS21B extending radially outwardly from the ring SS21A. The arms SS21B are circumferentially arranged. The spacer SS22includes a ring SS22A and a plurality of arms SS22B extending radially outwardly from the ring SS22A. The arms SS22B are circumferentially arranged. The seventh to ninth sprockets SP207to SP209and the arms SS21B and SS22B are secured to each other with the fasteners225. The eighth to tenth sprockets SP208to SP210are secured to each other with fasteners227. The spacer SS22is provided between the eighth sprocket SP208and the ninth sprocket SP209. The eighth to tenth sprockets SP208to SP210and the arms SS22B are secured to each other with the fasteners227. Each of the fasteners227includes a spacer227A. The spacers227A of the fasteners227are provided between the ninth sprocket SP209and the tenth sprocket SP210. The ninth and tenth sprockets SP209and SP210are secured to each other with fasteners229. Each of the fasteners229includes a spacer229A. The spacers229A of the fasteners229are provided between the ninth sprocket SP209and the tenth sprocket SP210. The tenth and eleventh sprockets SP210and SP211are secured to each other with fasteners231. Each of the fasteners231includes a spacer231A. The spacers231A of the fasteners231are provided between the tenth sprocket SP210and the eleventh sprocket SP211. Thus, the seventh to eleventh sprockets SP207to211are integrally coupled with the fasteners225,227,229, and231. As seen inFIG.20, the rear sprocket assembly310has substantially the same structure as the structure of the rear sprocket assembly210. The sprockets SP of the rear sprocket assembly310includes the first to eighth sprockets SP201to SP208and ninth to eleventh sprockets SP309to SP311. The sprockets SP309to SP311has substantially the same structure as the structure of the sprockets SP209to SP211. The seventh, eighth, and ninth sprockets SP207, SP208, and SP309are secured to each other with the fasteners225. The spacer SS21is provided between the seventh sprocket SP207and the eighth sprocket SP208. The spacer SS22is provided between the eighth sprocket SP208and the ninth sprocket SP309. The seventh, eighth, and ninth sprockets SP207, SP208, and SP309and the spacers SS21and SS22are secured to each other with the fasteners225. The seventh, eighth, and ninth sprockets SP207, SP208, and SP309and the arms SS21B and SS22B are secured to each other with the fasteners225. The eighth and ninth sprockets SP208and SP309are secured to each other with fasteners327. The spacer SS22is provided between the eighth sprocket SP208and the ninth sprocket SP309. The eighth and ninth sprockets SP208and SP309and the arms SS22B are secured to each other with the fasteners327. The ninth and tenth sprockets SP309and SP310are secured to each other with the fasteners229. The spacers229A of the fasteners229are provided between the ninth sprocket SP309and the tenth sprocket SP310. 
The tenth and eleventh sprockets SP310and SP311are secured to each other with the fasteners231. The spacers231A of the fasteners231are provided between the tenth sprocket SP310and the eleventh sprocket SP311. Thus, the seventh to eleventh sprockets SP207to311are integrally coupled with the fasteners225,327,229, and231. As seen inFIG.21, the sprocket SP311and the fasteners231of the rear sprocket assembly310are omitted from the rear sprocket assembly410. The seventh to tenth sprockets SP207to310are integrally coupled with the fasteners225,327, and229. In the above embodiment depicted inFIGS.6to8, the rear sprocket assembly10includes the tooth-position maintaining member32. As seen inFIGS.22to25, however, the tooth-position maintaining member32can be omitted from the rear sprocket assembly10if needed and/or desired. As seen inFIG.22, a rear sprocket assembly510in accordance with a modification of the present embodiment is configured to be mounted to the rear hub assembly12for the human-powered vehicle2. In the rear sprocket assembly510, the plurality of rear sprockets SP includes the first, second, and fourth to thirteenth sprockets SP1, SP502, and SP4to SP13. Namely, the rear sprocket assembly510comprises the first sprocket SP1. The rear sprocket assembly510further comprises the second sprocket SP502. The third sprocket SP3is omitted from the plurality of sprockets SP. The thirteenth sprocket SP13is added to the plurality of sprockets SP. However, the total number of the plurality of rear sprockets SP is not limited to twelve. The second sprocket SP502has substantially the same structure as the structure of the third sprocket SP3of the rear sprocket assembly10. The second sprocket SP502has a second sprocket outer diameter DM502larger than the first sprocket outer diameter DM1of the first sprocket SP1. The second sprocket SP502is adjacent to the first or smallest sprocket SP1without another sprocket between the adjacent sprocket and the first or smallest sprocket SP1in the axial direction D1. The second sprocket SP502can also be referred to as an adjacent sprocket SP502. Namely, in the rear sprocket assembly510, the plurality of rear sprockets SP includes the adjacent sprocket SP502. The adjacent sprocket SP502is adjacent to the smallest sprocket SP1without another sprocket between the adjacent sprocket SP502and the smallest sprocket SP1in the axial direction D1. As seen inFIG.23, the rear sprocket assembly510comprises a lock device526. The lock device526is configured to fix the rear sprocket assembly510to the sprocket support body18of the rear hub assembly12in a mounting state where the rear sprocket assembly510is mounted to the rear hub assembly12. The lock device526is configured to mount the first sprocket SP1to the rear hub assembly12. The lock device526has substantially the same structure as the structure of the lock device26of the rear sprocket assembly10. The lock device526includes the axially inward end26A and the axially outward end26B. The axially outward end26B is opposite to the axially inward end26A in the axial direction D1. As seen inFIG.22, the lock device526is configured to be attached to the sprocket support body18to hold the sprocket carrier22and the first, second, fourth, and fifth sprockets SP1, SP502, SP4, and SP5between the lock device526and the positioning surface18C of the sprocket support body18in the axial direction D1. 
As seen inFIG.24, the second sprocket SP502includes the second sprocket body SP21, the plurality of second sprocket teeth SP22, and the second sprocket opening SP23. The second sprocket SP502has the second axially outward surface SP24and the second axially inward surface SP25. The second axially outward surface SP24and the second axially inward surface SP25face toward opposite directions to each other in the axial direction D1. The second axially inward surface SP25is configured to face toward the axial center plane CP of the human-powered vehicle2in the mounting state. The second sprocket SP502includes the second axially outwardly torque transmitting profile SP26. The first axially inwardly torque transmitting profile SP16is configured to, in a torque-transmitting manner, engage with the second axially outwardly torque transmitting profile SP26of the second sprocket SP502adjacent to the first sprocket SP1without another sprocket between the first sprocket SP1and the second sprocket SP502in the axial direction D1in the mounting state where the rear sprocket assembly510is mounted to the rear hub assembly12. The second sprocket SP502includes an additional torque transmitting profile SP527. The additional torque transmitting profile SP527has substantially the same structure as the structure of the additional torque transmitting profile SP38of the third sprocket SP3of the first embodiment. The additional torque transmitting profile SP527is configured to engage with the plurality of external spline teeth18A of the sprocket support body18in a torque transmitting manner. The additional torque transmitting profile SP527and the plurality of external spline teeth18A define a single circumferential position of the second sprocket SP502relative to the sprocket support body18. Rotational force is transmitted from the first sprocket SP1to the sprocket support body18via the second sprocket SP502. As seen inFIG.23, the lock device526for mounting the plurality of rear sprockets SP to the rear hub assembly12for the human-powered vehicle2comprises a first lock member528and a second lock member530. The first lock member528includes the axially inward end26A. The second lock member530includes the axially outward end26B. The first lock member528has substantially the same structure as the structure of the first lock member28of the first embodiment. The second lock member530has substantially the same structure as the structure of the second lock member30of the first embodiment. The first lock member528is configured to detachably engage with the sprocket support body18of the rear hub assembly12in the mounting state. The second lock member530is configured to detachably engage with the first lock member528so as to abut against the first sprocket SP1in the axial direction D1in the mounting state. In this modification, the first lock member528is a separate member from the second lock member530. However, the first lock member528can be integrally provided with the second lock member530as a one-piece unitary member if needed and/or desired. The first lock member528is configured to detachably engage with the axial end18B of the sprocket support body18in the mounting state. The first lock member528is configured to be at least partly provided in the second sprocket opening SP23in the mounting state. The second lock member530is configured to be at least partly provided in the first sprocket opening SP13and the second sprocket opening SP23in the mounting state. 
As seen inFIG.23, the first lock member528includes the first axial end28A and the second axial end28B. The first lock member528includes the first surface28C. The first axial end28A has the first threads28D. The second axial end28B has the second threads28E. The axially inward end26A has the first threads28D. The first lock member528has the axial contact surface28F. The second lock member530includes the third axial end30A and the fourth axial end30B. The second lock member530includes the second surface30C. The third axial end30A has the third threads30D. The fourth axial end30B has at least one radial projection30F. The axially outward end26B has the at least one radial projection30F. The lock device526is configured to dispose the first sprocket SP1between the first threads28D of the first lock member528and the at least one radial projection30F of the second lock member530in the axial direction D1in an assembled state where the first sprocket SP1and the lock device526are assembled as one unit. In the rear sprocket assembly10illustrated inFIG.6, the major diameter DM5of the first threads28D is larger than the second diameter DM21of the second sprocket opening SP23. In the rear sprocket assembly510, however, the major diameter DM5of the first threads28D is smaller than the second diameter DM21of the second sprocket opening SP23. Thus, the first lock member528can be inserted into the second sprocket opening SP23of the second sprocket SP502after the first lock member528and the second lock member530are assembled as one unit. As seen inFIG.25, the first sprocket SP1and the lock device526are assembled as a lock device assembly550before the first sprocket SP1is assembled to the rear hub assembly12. The lock device assembly550includes the first sprocket SP1and the lock device526. The lock device assembly550can include the second sprocket SP502if needed and/or desired. The lock device526is configured so that the first sprocket SP1is slidable relative to the lock device526in the axial direction D1in the assembled state and before the mounting state where the rear sprocket assembly510is mounted to the rear hub assembly12. The lock device526is configured so that the adjacent sprocket SP502is slidable relative to the lock device526above the first surface28C and the second surface30C in the axial direction D1in the assembled state and before a mounting state where the plurality of rear sprockets SP is mounted to the rear hub assembly12. In this modification, the first sprocket SP1is slidable between the first threads28D of the first lock member528and the at least one radial projection30F of the second lock member530in the axial direction D1in the assembled state where the first sprocket SP1and the lock device526are assembled as one unit. The first sprocket SP1is slidable between the first threads28D of the first lock member528and the at least one radial projection30F of the second lock member530in the axial direction D1in the assembled state before the mounting state where the rear sprocket assembly510is mounted to the rear hub assembly12. However, the lock device526can be configured so that the smallest sprocket SP1is static relative to the lock device526in the axial direction D1in the assembled state if needed and/or desired. The lock device526can be configured so that the smallest sprocket SP1is static relative to the lock device526in the axial direction D1in the assembled state before the mounting state if needed and/or desired. 
In such embodiments, the lock device526is configured to restrict an axial movement of the first sprocket SP1relative to the lock device526in both the first axial direction D11and the second axial direction D12in the assembled state before the mounting state. The at least one radial projection30F of the second lock member530is in contact with the first sprocket SP1to restrict an axial movement of the first sprocket SP1relative to the lock device526in the second axial direction D12. The second axial end28B of the first lock member528is in contact with the first sprocket SP1to restrict an axial movement of the first sprocket SP1relative to the lock device526in the first axial direction D11. As seen inFIG.24, to assemble the first sprocket SP1and the lock device526, the third axial end30A of the second lock member530is inserted into the first sprocket opening SP13. The second external threads30D of the second lock member530are screwed into the first internal threads28E of the first lock member528. The first lock member528and the second lock member530are rotated relative to each other using the first tool and the second tool until the third axial end30A comes into contact with the axial contact surface28F of the first lock member528. Thus, the first sprocket SP1and the lock device526are assembled as the lock device assembly550. As seen inFIG.25, the additional torque transmitting profile SP527of the second sprocket SP502comes into engagement with the plurality of external spline teeth18A of the sprocket support body18. The first axial end28A of the first lock member528is inserted into the second sprocket opening SP23of the second sprocket SP502in a state where the second sprocket SP502is attached to the sprocket support body18. The first axial end28A of the first lock member528comes into contact and threadedly engages with the axial end18B of the sprocket support body18in the assembled state where the first sprocket SP1and the lock device526are assembled as the lock device assembly550. The first sprocket SP1is moved toward the second sprocket SP502to bring the first axially inwardly torque transmitting profile SP16into engagement with the second axially outwardly torque transmitting profile SP26of the second sprocket SP502. The first sprocket SP1is rotated relative to the sprocket support body18about the rotational center axis A1to adjust the rotational position of the first axially inwardly torque transmitting profile SP16relative to the second axially outwardly torque transmitting profile SP26of the second sprocket SP502in a circumferential direction D2with respect to the rotational center axis A1. The lock device526is rotated relative to the sprocket support body18about the rotational center axis A1using the second tool such that the first external threads28D of the first lock member528are screwed into the internal threads18D of the sprocket support body18. The rotational position between the first sprocket SP1and the second sprocket SP502is maintained relative to the sprocket support body18while the lock device526is rotated relative to the sprocket support body18since the first axially inwardly torque transmitting profile SP16of the first sprocket SP1is engaged with the second axially outwardly torque transmitting profile SP26of the second sprocket SP502. The first sprocket SP1and the second sprocket SP502are held between the radial projection30F and the third sprocket SP3in the axial direction D1when the lock device526is tightened using the tool.
Thus, the first sprocket SP1and the second sprocket SP502are mounted to the sprocket support body18of the rear hub assembly12using the lock device526. The rear sprocket assembly510can include the tooth-position maintaining member32of the rear sprocket assembly10if needed and/or desired. In the rear sprocket assembly10or510, the third threads30D are configured to threadedly engage with the second threads28E in the assembled state. However, the third axial end30A of the second lock member30or530may be attached to the second axial end28B of the first lock member28or528via other structures such as spline engagement in a press-fitted manner. In the rear sprocket assembly10illustrated inFIG.6, the first lock member28is a separate member from the second lock member30. In the rear sprocket assembly510illustrated inFIG.23, the first lock member528is a separate member from the second lock member530. As seen inFIG.26or27, however, the lock device26or526can be a one-piece, unitary member. In the modification depicted inFIG.26, the first lock member28is integrally provided with the second lock member30as a one-piece unitary member. The at least one radial projection30F of the lock device26can be formed by material deformation. The at least one radial projection30F of the lock device26is formed by material deformation in a state where the first sprocket SP1, the second sprocket SP2, and the tooth-position maintaining member32are provided radially outwardly of the first surface28C and the second surface30C. For example, the at least one radial projection30F of the lock device26is formed by press working in the state where the first sprocket SP1, the second sprocket SP2, and the tooth-position maintaining member32are provided radially outwardly of the first surface28C and the second surface30C. In the modification depicted inFIG.27, the first lock member528is integrally provided with the second lock member530as a one-piece unitary member. The at least one radial projection30F of the lock device526can be formed by material deformation. The at least one radial projection30F of the lock device526is formed by material deformation in a state where the first sprocket SP1is provided radially outwardly of the first surface28C and the second surface30C. For example, the at least one radial projection30F of the lock device526is formed by press working in the state where the first sprocket SP1is provided radially outwardly of the first surface28C and the second surface30C. Each of the structures of the rear sprocket assemblies210,310, and410illustrated inFIGS.19to21can be applied to the rear sprocket assembly510, the modification of the rear sprocket assembly10depicted inFIG.26, and the modification of the rear sprocket assembly510depicted inFIG.27if needed and/or desired. In the present application. the term “comprising” and its derivatives, as used herein, are intended to be open ended terms that specify the presence of the stated features, elements, components, groups, integers, and/or steps, but do not exclude the presence of other unstated features, elements, components, groups, integers and/or steps. This concept also applies to words of similar meaning, for example, the terms “have,” “include” and their derivatives. The terms “member,” “section,” “portion,” “part,” “element,” “body” and “structure” when used in the singular can have the dual meaning of a single part or a plurality of parts. 
The ordinal numbers such as “first” and “second” recited in the present application are merely identifiers, but do not have any other meanings, for example, a particular order and the like. Moreover, for example, the term “first element” itself does not imply an existence of “second element,” and the term “second element” itself does not imply an existence of “first element.” The term “pair of,” as used herein, can encompass the configuration in which the pair of elements have different shapes or structures from each other in addition to the configuration in which the pair of elements have the same shapes or structures as each other. The terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. The phrase “at least one of” as used in this disclosure means “one or more” of a desired choice. For one example, the phrase “at least one of” as used in this disclosure means “only one single choice” or “both of two choices” if the number of its choices is two. For other example, the phrase “at least one of” as used in this disclosure means “only one single choice” or “any combination of equal to or more than two choices” if the number of its choices is equal to or more than three. For instance, the phrase “at least one of A and B” encompasses (1) A alone, (2), B alone, and (3) both A and B. The phrase “at least one of A, B, and C” encompasses (1) A alone, (2), B alone, (3) C alone, (4) both A and B, (5) both B and C, (6) both A and C, and (7) all A, B, and C. In other words, the phrase “at least one of A and B” does not mean “at least one of A and at least one of B” in this disclosure. Finally, terms of degree such as “substantially,” “about” and “approximately” as used herein mean a reasonable amount of deviation of the modified term such that the end result is not significantly changed. All of numerical values described in the present application can be construed as including the terms such as “substantially,” “about” and “approximately.” Obviously, numerous modifications and variations of the present invention are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein. | 84,919 |
11858589 | DESCRIPTION OF THE EMBODIMENTS The present invention will be described in detail below in conjunction with the drawings. Technical solutions in the embodiments of the present invention will be described clearly and completely in combination with figures in the embodiments of the invention. Obviously, the described embodiments are only part, but not all, of the embodiments of the invention. Based on the embodiments of the present invention, other embodiments acquired by those of ordinary skill in the art without creative work also belong to the protection scope of the present invention. It should be noted that the features in the embodiments and the embodiments of the present invention may be combined with each other in a non-conflicting situation. As shown inFIG.1, the integrated floating photovoltaic device of the present invention can be applied to severe sea conditions, comprising at least one floating photovoltaic unit. The floating photovoltaic units are connected to form a floating photovoltaic device through connecting pieces, which can avoid collision between floating photovoltaic units; as shown inFIGS.2and3, multiple floating photovoltaic units form a single row array or regional array, and the floating photovoltaic array is fixed through a mooring system, to ensure its safety under severe sea conditions; a protective zone with relatively moderate sea conditions is formed in the targeted sea area, for arranging other marine structures that cannot rely on themselves for wave dissipation and resistance; the floating photovoltaic unit comprises a floating system1, photovoltaic systems2and a walkway system3; the floating system1is used for supporting the photovoltaic systems2and bearing wave load impact; the photovoltaic systems2are photovoltaic power generation systems of the floating photovoltaic device, which are key systems of the photovoltaic power generation systems; the walkway system3is arranged between the photovoltaic systems2, and the walkway system3provides convenience for later maintenance of the floating photovoltaic device. As shown inFIG.4, the floating system1is a wave-dissipating floating body arranged along a square area by resonant wave dissipation; the cross-sectional shapes of the floating bodies around the area can be divided into two types: circular cross-section floating body5and square cross-section floating body6; two circular cross-section floating bodies5are parallel and respectively arranged on the wave facing side and the back wave side of the floating photovoltaic device, mainly carrying wave loads and achieving the purposes of wave dissipation and wave resistance. two square cross-section floating bodies6serve as supporting parts, mainly for connecting the circular cross-section floating bodies5on both sides; the square cross-section floating bodies6are vertically connected with the circular cross-section floating bodies5, and the wave-dissipating floating body4is provided with a cross shaped square cross-section floating body in the middle, which can effectively improve the wave dissipation effect of the wave-dissipating floating body4. Several integrally formed connecting columns7are provided outside the four sides of the wave-dissipating floating body4, and the photovoltaic system2is embedded on the circular cross-section floating body5through its own circular shape characteristics. 
As shown inFIGS.5and6, the connecting columns7consist of two protruding parts of the circular cross-section floating bodies5and cylindrical rods in the protruding parts, and there are multiple connecting columns7on the wave-dissipating floating body4, which are flexibly connected through rubber rings8. Its advantage is to ensure its own movement trend of each floating photovoltaic unit, and to disperse the external force of wave action on the entire floating photovoltaic power station as much as possible. The material properties of the rubber ring8can to some extent avoid collision between floating photovoltaic units. As shown inFIG.7, the photovoltaic system2comprises a support structure9and a photovoltaic module10; it is embedded with the groove on the circular cross-section floating body5, and then connected by bolts or welding, enhancing the overall structural strength of the floating system1and ensuring the working stability of the entire floating photovoltaic device. As shown inFIGS.8and9, the support structure9comprises several I-shaped steels11and several square-shaped steels12, arranged in an array along the length direction of the circular cross-section floating body5. The I-shaped steel11is the main strength component that directly overlaps with the wave-dissipating floating body4to carry the gravity load of the photovoltaic module10and the wave load transmitted through the wave-dissipating floating body4. The square-shaped steel12is an auxiliary component arranged according to the position of I-shaped steel11, and arranged in a “herringbone” shape on the I-shaped steel11to strengthen the relatively weak parts of the photovoltaic system2structure. As shown inFIGS.10,11and12, the photovoltaic module10comprises a reinforced crossbeam13, a roof support14, a thin crossbeam15, a thick crossbeam16, a photovoltaic panel17, and the reinforced crossbeam13is a square-shaped steel pipe, arranged on the I-shaped steel11and used to carry the load transmitted from the I-shaped steel11. Part of the reinforced crossbeams is arranged in an array along the length direction of the wave-dissipating floating body4to form a lower reinforced crossbeam layer, while the other part of reinforced crossbeams is arranged on the lower reinforced crossbeam layer to form an upper reinforced crossbeam layer; the upper reinforced crossbeam layer and the lower reinforced crossbeam layer are vertically arranged and connected through an automatic connecting device18; the roof support14is made of aluminium alloy, with a shape similar to the roof; the bottom of the roof support14is connected to the lower reinforced crossbeam layer through bolts and nuts; the upper reinforced crossbeam layer is located on both inner sides of the roof support14, and the thin crossbeam15and thick crossbeam16are square pipe fittings made of aluminium alloy; the thin crossbeam15is arranged near the top of the roof support14, and the thick crossbeam16is arranged near the bottom of the roof support14; the size difference between thin crossbeam15and thick crossbeam16provides component support for the layout of the photovoltaic panel17, and the photovoltaic panel17is arranged on the roof support14through the thin crossbeam15and the thick crossbeam16. The walkway system3is arranged between the roof supports14. 
As shown inFIGS.13,14,15and16, reinforced crossbeams are connected through an automatic connecting device18, and the automatic connecting device18comprises a snap-on gripper20, a base21, a button22, a ratchet23, a firing pin24and a spring25; the base21is arranged on the lower reinforced crossbeam layer, and both ends of the base21are provided with "L" shaped notches; the firing pin24is arranged in the "L" shaped notch; one end of the firing pin24is matched with the ratchet23, and the other end of the firing pin24is connected with the spring25; the spring25is arranged in the base21, and the button22is inverted-T shaped; both the lower ends of the button22are in contact with the firing pin24, and the ratchet23is connected to the snap-on gripper20. The snap-on gripper20grips the photovoltaic module10through its own structure. During installation, the upper reinforced crossbeam layer is placed on the base21. When the button22is pressed under the weight of the upper reinforced crossbeam layer, the lower structure of the button presses the firing pin24down from the longitudinal notch into the transverse notch. When the firing pin24reaches the transverse notch, it will be pushed towards the ratchet23under the action of the spring25. The spring25is installed in the groove between the firing pin24and the base21, and the snap-on gripper20and the ratchet23are fixedly connected to rotate together. At this point, the reed of the ratchet23rotates and clamps the upper reinforced crossbeam layer, and the firing pin24will prevent the ratchet23from rotating in the opposite direction. As shown inFIG.17, the joint between the lower reinforced crossbeam layer and the end of the upper reinforced crossbeam layer is provided with a stop device19, and the stop device19is provided with bolt holes; the stop device19is connected with the lower reinforced crossbeam layer through bolts and nuts, and the protruding part on the upper part of the stop device19is provided with a stop block; the height of the stop block is greater than the height of the two reinforced crossbeam layers, which can limit the movement of the photovoltaic module10connected to the snap-on gripper20of the automatic connecting device18along the length direction of the reinforced crossbeam. As shown inFIGS.18and19, the walkway system3comprises several aluminum alloy walkway panels26; both sides of the walkway panel26are provided with connecting structures27along the width direction, and the walkway panel is connected with the photovoltaic system2through the connecting structure27. The walkway panel26is arranged between the roof supports14for later maintenance and replacement of photovoltaic modules10. Finally, it should be noted that the above embodiments are only used to explain the technical solution of the present invention and shall not be construed as a limitation thereof; although the present invention is described in detail with reference to the embodiments, those of ordinary skill in the art shall understand that they may still modify the technical solution recorded in the embodiments or equivalently replace some or all of the technical features; these modifications or replacements do not cause the essence of the corresponding technical solution to depart from the scope of the technical solutions of the embodiments of the present invention. | 10,008 |
11858590 | Like reference numerals refer to like parts throughout the several views of the drawings. DETAILED DESCRIPTION OF THE PRESENT INVENTION Sailboats100are designed and built in all sizes, with the present invention being directed towards sailboats100that are transported on a trailer180, as illustrated inFIGS.1and2. The trailer180is towed by a vehicle300. The sailboat100includes a bulwark114defined by a structure extending upward from a deck112, the deck112and bulwark114being assembled to an upper region of a hull110. A bow pulpit130is preferably secured to a bow region of the sailboat100. Handrails supported by a series of pillars (illustrated but not identified) are provided around a perimeter of the sailboat100to enable safe passage of the crew from the stern to the bow and vice versa. A keel116extends downward from a lower portion of the hull110. The keel116is designed to stabilize the sailboat100against a force generated by wind against a sail while underway. Masts120of sailboats100sized to be transported on the trailer180are commonly designed to be raised (stepped) and lowered (unstepped) for transporting the sailboat100. The mast is typically hollow, having a mast interior wall122, to obtain a desired strength while minimizing weight. A mast pivot assembly140is provided to aid in stepping and unstepping the mast120of the sailboat100. The mast pivot assembly140can be of any known design that aids in rotating the mast120from an upright orientation to a lowered, generally horizontal orientation. An exemplary mast pivot assembly140is detailed in the illustrations presented inFIGS.11and12. The exemplary mast pivot assembly140includes a pivot to mast assembly feature146carried by a mast pivot hinge arm144. The mast pivot hinge arm144and a mast receiving base feature148are pivotally assembled to one another by a mast pivot hinge142. The pivot to mast assembly feature146is temporarily assembled to the mast120using any known temporary assembly interface. In the exemplary illustration, the pivot to mast assembly feature146is slideably inserted into a receiving slot located proximate a base of the mast120. The mast receiving base feature148extends proud of the upper surface of the bulwark114and is of a size and shape to be inserted into the interior of the mast120and seated against the mast interior wall122. The mast receiving base feature148restrains the base of the mast120against any longitudinal and/or lateral movement while the rigging retains the mast upright and properly seated about the mast receiving base feature148. The mast pivot assembly140is only exemplary and sailboats100have many different designs and mechanics that enable the same function thereof. The mast120is secured in a generally horizontal orientation enabling transportation of the sailboat100on a trailer180. A first end of the mast120is seated within a mast support crutch152of a mast support column150. A second, opposite end of the mast120is seated onto the bow pulpit130directly, within a crutch supported by or proximate to the bow pulpit130of the sailboat100, or within a forward crutch extending upward from a forward portion of the trailer frame182of the trailer180. The mast support column150can be secured to the sailboat100, as illustrated, or to the trailer180. The exemplary mast support column150is integrated into a rudder assembly, which includes a rudder118rotationally and pivotally assembled to the sailboat100by a rudder stowage pivot support119.
A pivot axle enables vertical rotation of the rudder118between a steering position and a stowage position (as illustrated). The mast support column150extends upward from the rudder assembly. The mast support column150can be removable for storage during sailing. The mast support crutch152is supported at an upper end of the mast support column150. The sailboat100can be transported upon a trailer180. The trailer180includes a trailer frame182comprising a pair of frame members, preferably formed in a "Y" shape. A trailer coupler188is assembled to a leading end of the trailer180. The trailer coupler188includes a ball receiver and a ball latching mechanism (including an underjaw, a handle assembly operationally engaging with the underjaw and a spring). At least one trailer wheel183is provided at each side of an axle (not illustrated) of the trailer180. Each trailer bed184of a pair of trailer beds184is assembled to a respective side of the trailer frame182of the trailer180by a series of supporting columns (illustrated but not identified). The hull110of the sailboat100is seated upon and supported by the pair of trailer beds184. The trailer beds184are of a height and position to provide adequate clearance for the keel116of the sailboat100. A bow support (not illustrated) and an associated winch or other latching or retention mechanism can be provided at a forward position on the trailer180to retain the sailboat100in position on the trailer180. A mast righting assistance assembly200, detailed in an exploded assembly view illustrated inFIG.3, is introduced to aid a sailing enthusiast in raising and lowering the mast120of the sailboat100. The mast righting assistance assembly200includes a mast righting assistance column subassembly204preferably designed to be detachably assembled to a mast righting assistance operational subassembly202. The mast righting assistance operational subassembly202is designed to be inserted between a vehicle hitch receiver assembly310(integral with the vehicle300) and a trailer hitch assembly330. A trailer hitch extension subassembly210includes a trailer hitch extension receiver tube212at a first end and a trailer hitch extension insert232at a second end. The trailer hitch extension insert232is of a size and shape to be inserted into and supported by a receiver defined by a vehicle hitch receiver tubular interior wall313of a vehicle hitch receiver tube312of the vehicle hitch receiver assembly310. A vehicle hitch receiver locking member318is inserted through a vehicle hitch receiver locking aperture314of the vehicle hitch receiver tube312, passing through a trailer hitch extension insert locking aperture234of the inserted trailer hitch extension insert232. A vehicle hitch receiver locking member retention aperture319formed through the vehicle hitch receiver locking member318is exposed on an opposing side of the vehicle hitch receiver tube312, where a retention member is inserted through the vehicle hitch receiver locking member retention aperture319, thus retaining the vehicle hitch receiver locking member318in position to ensure and maintain assembly of the trailer hitch extension insert232(thus the mast righting assistance operational subassembly202) to the vehicle hitch receiver assembly310. The retention member can be a locking pin, a hairpin, a lock, or any other suitable element.
Similarly, a trailer hitch insert332of the trailer hitch assembly330is of a size and shape to be inserted into and supported by a receiver defined by a trailer hitch extension receiver tubular interior wall213of a trailer hitch extension receiver tube212of the trailer hitch extension subassembly210. A trailer hitch extension receiver locking member218is inserted through a trailer hitch extension receiver locking aperture214of the trailer hitch extension receiver tube212, passing through the trailer hitch insert locking aperture334of the trailer hitch insert332. A trailer hitch extension receiver locking member retention aperture219formed through the trailer hitch extension receiver locking member218is exposed on an opposing side of the trailer hitch extension receiver tube212, where a retention member is inserted through the trailer hitch extension receiver locking member retention aperture219, thus retaining the trailer hitch extension receiver locking member218in position to ensure and maintain assembly of the trailer hitch insert332(thus the trailer hitch assembly330) to the trailer hitch extension subassembly210. The trailer hitch assembly330additionally includes a trailer hitch ball mount336provided at an exposed end of the trailer hitch insert332. A trailer hitch ball338is carried by the trailer hitch ball mount336. In a common arrangement, a threaded post of the trailer hitch ball338is inserted through an aperture and secured to the trailer hitch ball mount336by a nut threadably assembled to the threaded post of the trailer hitch ball338. A winch subassembly220is carried by the trailer hitch extension subassembly210. The winch subassembly220includes a winch motor222which controls rotation of a drum, the drum being designed to collect and dispense a length of a cable224. The cable224can be a rope, a braided rope, a wire cable, a braided cable, a steel core cable, or any other suitable, flexible tension applying member. A winch cable free end loop225is preferably provided at a free end of the cable224enabling connection of the cable224to other objects. The winch subassembly220can be assembled to the trailer hitch extension subassembly210in any suitable orientation. The winch subassembly220can be assembled to the trailer hitch extension subassembly210using any suitable assembly interface. In the exemplary illustration, the winch subassembly220includes a block226, which includes through holes for receiving threaded assembly members, such as threaded bolts (illustrated but not identified). The threaded members can be secured using nuts, washers, locking washers, and/or any other elements commonly used for mechanically fastening one element to a second element. In one alternative arrangement, the winch subassembly220can be secured to the trailer hitch extension subassembly210using one or more brackets forming a mechanical assembly. In another alternative arrangement, the winch subassembly220can be secured to the trailer hitch extension subassembly210using a welding process. In yet another alternative arrangement, the winch subassembly220can be secured to the trailer hitch extension subassembly210using one or more straps. In yet another alternative arrangement, the winch subassembly220can be secured to the trailer hitch extension subassembly210using any combination of the above described assembly methods or any other suitable assembly method.
A fairlead (not illustrated but well known by those skilled in the art) can be included in the mast righting assistance operational subassembly202, wherein the fairlead is located to aid in proper collection and dispensing of the cable224to and from the drum of the winch subassembly220. The winch subassembly220can be operated by a winch controller280. The winch controller280can be wired or wireless. The winch controller280preferably includes a winch controller collection button282to actuate a drum rotation to collect a length of the winch cable224and a winch controller dispense button284to actuate a drum rotation to unspool a length of the winch cable224. Other components included in the mast righting assistance assembly200include a block226and a shackle228. The block226is representative of any cable redirecting component. The shackle228is representative of any joining component. A column subassembly receiver240is assembled to the trailer hitch extension subassembly210. In the exemplary illustration, the column subassembly receiver240is welded to an upper surface of the trailer hitch extension subassembly210. The column subassembly receiver240can be secured to the trailer hitch extension subassembly210at any suitable location, including on the top surface (as illustrated), on one or both sides of the trailer hitch extension subassembly210, to a bracket carried by the trailer hitch extension subassembly210, or any other suitable arrangement. The column subassembly receiver240can be secured to the trailer hitch extension subassembly210using any suitable attachment interface, including welding, use of fasteners, use of mechanical fasteners, use of threaded fasteners, use of one or more brackets, a mechanical loop that extends partially or completely around the trailer hitch extension subassembly210, or any other suitable assembly method. The column subassembly receiver240is designed to receive and support the mast righting assistance column subassembly204. In the exemplary illustration, the column subassembly receiver240includes a cavity defined by a column subassembly receiver tubular interior wall243, wherein the cavity is of a size and shape to receive and adequately support a base end of a base column member250of the mast righting assistance column subassembly204. The exemplary mast righting assistance column subassembly204includes a base column member250, a central column member260and an upper column member270. The exemplary base column member250is a tubular member comprising a series of base column member adjustment apertures252spatially arranged through opposing walls of the base column member250. The exemplary central column member260is a tubular member comprising a series of central column member adjustment apertures262spatially arranged through opposing walls of the central column member260. An exterior of the central column member260is of a size and shape to be inserted into and adequately supported by a base column member tubular interior wall254of the base column member250. The exemplary upper column member270is a tubular member comprising a series of upper column member adjustment apertures272spatially arranged through opposing walls of the upper column member270. An exterior of the upper column member270is of a size and shape to be inserted into and adequately supported by a central column member tubular interior wall264of the central column member260. 
In the exemplary mast righting assistance column subassembly204, a threaded elongated member of a block attachment member278is inserted through an upper column member block attachment aperture276formed through opposite walls of the upper column member270and secured in position by threadably assembling a block attachment member nut279to the threaded elongated member of the block attachment member278. The upper column member270can be tubular, similar to the other column members250,260, the upper column member270having an upper column member tubular interior wall274. Details of a height adjustability of the mast righting assistance column subassembly204are demonstrated in the illustrations presented inFIGS.4and5. In the exemplary illustrations, a base of the base column member250is inserted into the interior (defined by the column subassembly receiver tubular interior wall243) of the column subassembly receiver240. A column subassembly receiver locking member248is inserted through a column subassembly receiver locking aperture244and an associated aperture252formed through the sidewall of the base column member250, supporting the base column member250at a desired height. The central column member260is slideably inserted into an interior of the base column member250defined by the base column member tubular interior wall254. The upper column member270is slideably inserted into an interior of the central column member260defined by the central column member tubular interior wall264. The height of each of the central column member260and the upper column member270is adjusted to position the block attachment member278at a desired height. Once each of the central column member260and the upper column member270is at its desired position (height), a column height locking member258,268is inserted through the respective adjustment apertures252,262to retain the column members250,260,270at the desired vertical positions. Although the exemplary illustrations and associated disclosure describe a mast righting assistance column subassembly204having a specific arrangement, the mast righting assistance column subassembly204can be any arrangement allowing vertical adjustment of the block attachment member278. This can include a telescoping design, a ratcheting assembly, a hydraulically height adjusting assembly, a pneumatically height adjusting assembly, a series of members that are assembled to one another to adjust an overall height, a substantially tall member that provides a maximum height to achieve a minimum desired angle of tension, a scissor styled assembly, or any other height adjustable design. The mast righting assistance column subassembly204would be designed to sufficiently support the forces required (including any factor of safety) to raise and lower the mast120of the sailboat100. At some point during the staging of the mast righting assistance assembly200, a block226is secured to the block attachment member278. The block226can be secured to the block attachment member278using any suitable attachment member or members. In one example, a carabiner is secured to each of the block226and the block attachment member278, supporting the block226at a desired height. In a second example, a cable is used to secure the block226and the block attachment member278to one another, supporting the block226at the desired height.
Although several examples of attachment configurations are described herein, any attachment configuration can be any suitable configuration capable of adequately supporting the block226by the mast righting assistance column subassembly204. Operation of the present invention is described in an exemplary mast raising flow diagram400detailed inFIG.13, with supporting illustrations presented inFIGS.6through12and an exemplary reverse process described in a mast lowering flow diagram500being detailed inFIG.14. Prior to raising the mast120, the user would remove any devices (such as straps) that are currently retaining the mast120in a stowed position. The exemplary mast raising flow diagram400initiates with a step of assembling the mast120to the mast pivot assembly140(block410), as illustrated inFIGS.11and12. In the exemplary illustration, the pivot to mast assembly feature146of the mast pivot assembly140is slideably inserted into a receiving formation provided along a respective sidewall at a base of the mast120. Each sailboat100designer may select a uniquely designed mast pivot assembly140. Each mast design may employ a distinct arrangement for joining the base of the mast120and the selected mast pivot assembly140to one another. The key feature of the selected mast pivot assembly140is an ability to pivot the mast120between adown position and an upright position. The mast righting assistance column subassembly204is assembled to the column subassembly receiver240of the trailer hitch extension subassembly210. A height of the mast righting assistance column subassembly204is adjusted to a desired height (block420). The block226is assembled to an upper end of the mast righting assistance column subassembly204using the provided attachment component(s) (block422). Although the assembly of the block226to the mast righting assistance column subassembly204is described following the step of adjusting a height of the mast righting assistance column subassembly204, the order of these steps is not defined. At any point during the staging portion of the process, the winch cable224is threaded through the block226(block424). At any point during the staging process, a power connector of the winch subassembly220may be connected to a mating power connector integrated into a vehicle power system which obtains power from a battery320within the vehicle300. Upon completion of the staging of the mast righting assistance assembly200, the winch cable224is connected to a mast control line126(block430). The mast control line126is preferably any fore mast rigging that can be disconnected without destabilizing the mast120. Alternatively, a specific line can be secured to an upper region of the mast120and used as the mast control line126. A shackle228(or any other suitable connecting component) can be employed to join the mast control line126and the winch cable224to one another (block430). While preparing the mast120for repositioning, a lateral control system128is preferably installed (block432). In the exemplary illustrations, a pair of lateral motion control members128are installed; one end of each lateral motion control member128is secured to an upper region of the mast120and a second, opposite end of the lateral motion control member128is secured to the sailboat100at a location that is in lateral alignment with the mast120(more specifically, the mast pivot hinge142of the mast pivot assembly140) and preferably in horizontal alignment with the mast pivot hinge142of the mast pivot assembly140(block432). 
The user would then operate the winch subassembly220, such as by actuating a winch controller collection button282on a winch controller280to draw in a length of the winch cable224or actuating a winch controller dispense button284to release a length of the winch cable224. In the exemplary mast raising flow diagram400, the user would actuate the winch controller collection button282to collect a length of the winch cable224, causing the mast120to rise (block440). The height of the mast righting assistance column subassembly204affects a force application angle respective to stowed mast A1, wherein the force application angle respective to stowed mast A1is an angle between the mast control line126and the mast120, as illustrated inFIG.6. The smaller the angle A1, the greater the force required to rotate the mast120into an upright orientation. The greater the height of the mast righting assistance column subassembly204, the greater the force application angle respective to stowed mast A1. The greater the force application angle respective to stowed mast A1, the lower the force required to raise the mast120(a simplified numerical illustration of this relationship is provided following this passage). Conversely, the greater the height of the mast righting assistance column subassembly204, the more difficult it is to transport the mast righting assistance assembly200. The height adjustment of the mast righting assistance column subassembly204allows the user to optimize the mast righting assistance column subassembly204, addressing both concerns. The user would initiate operation of the winch subassembly220, drawing the mast120from a stowed arrangement to an upright arrangement (block440). The mast120initiates in the stowed arrangement (FIG.6), is drawn to an intermediate orientation having a force application angle respective to partially raised mast A2(FIG.7), and upon reaching an upright orientation having a force application angle respective to raised mast A3(FIG.8), the operator would cease the collection of winch cable224by the winch subassembly220by releasing the winch controller collection button282. The winch cable224would remain taut while the operator secures a fore rigging member129between the mast120and a forestay fitting127using a suitable coupling member (block450), as illustrated inFIG.9. The fore rigging member129can be any rigging line other than the rigging line currently used as the mast control line126. Upon securing and tightening the fore rigging member129, tension can be released from the winch cable224by depressing the winch controller dispense button284on the winch controller280to unspool a portion of the winch cable224from the drum of the winch subassembly220. The introduced slack enables the operator to disconnect the mast control line126and the winch cable224from one another (block452). The mast control line126can be secured to the target attachment member, such as the forestay fitting127as illustrated inFIG.10, or any other target attachment member. The lateral motion control members128can remain in position or be removed. Any additional preparations can be completed and the sailboat100can be launched from the trailer180(block460). Examples of additional preparation can include removing and stowing the mast support column150, rotating and securing the rudder118into a sailing position, and the like. The mast lowering flow diagram500describes a process of lowering the mast120from an upright orientation (as illustrated inFIG.10) into a stowed orientation (as illustrated inFIG.6).
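The effect of the force application angle on the winch load can be illustrated with a simplified static model. The following Python sketch is illustrative only; the weight, dimensions, and uniform-load assumption are hypothetical and are not taken from this disclosure, and friction, rigging loads, and dynamic effects are ignored. Treating the mast as a rigid member pivoting about the mast pivot hinge, a moment balance about the hinge gives the cable tension needed to begin raising the mast for a given force application angle.

    import math

    # Hypothetical values for illustration only.
    W = 250.0      # weight of the mast, N
    L_cg = 3.0     # distance from the pivot hinge to the mast center of gravity, m
    L_att = 5.5    # distance from the pivot hinge to the mast control line attachment, m

    def cable_tension(mast_angle_deg, application_angle_deg):
        # Moment balance about the pivot hinge:
        # T * L_att * sin(A) = W * L_cg * cos(theta)
        theta = math.radians(mast_angle_deg)        # mast angle above horizontal
        a = math.radians(application_angle_deg)     # angle between control line and mast
        return W * L_cg * math.cos(theta) / (L_att * math.sin(a))

    # With the mast stowed (horizontal), a taller column gives a larger angle A1
    # and therefore a lower required tension.
    for a1 in (10, 20, 30, 45):
        print(f"A1 = {a1:2d} deg -> required tension = {cable_tension(0, a1):5.0f} N")

Under these assumed numbers, increasing the force application angle from 10 degrees to 45 degrees reduces the required static tension by roughly a factor of four, consistent with the qualitative relationship described above.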
Initially, the sailboat100would be placed upon the trailer180, seating the hull110upon the pair of trailer beds184(block510). The rudder118would be rotated into a trailed position (as illustrated). A mast support column150would be installed according to the designed installation process. A height of the mast righting assistance column subassembly204is adjusted to the desired height (block520), as described above. The block226is assembled to an upper end of the mast righting assistance column subassembly204using the provided attachment component(s) (block522), such as previously described. At any point during the staging portion of the process, the winch cable224is threaded through the block226(block524). At any point during the staging process, a power connector of the winch subassembly220may be connected to a mating power connector integrated into a vehicle power system. Tension of the mast control line126is decreased, enabling disconnection of the mast control line126from the forestay fitting127. The freed mast control line126is then connected to the winch cable224(block530). At any point during the preparation of the lowering of the mast120, the lateral motion control members128are installed, as described above (block532). Any slack is removed from the winch cable224by slowly collecting a small length of winch cable224upon the drum of the winch subassembly220using the winch controller280(block540). Once the mast120is supported by the winch cable224, the other retaining rigging (such as the fore rigging member129) is loosened and disconnected (block542). The operator would inspect the system and sailboat100one last time, then once comfortable that the system is properly prepared, the operator would activate the winch controller dispense button284of the winch controller280and slowly unspool a length of the winch cable224from the drum of the winch subassembly220(block550). The unspooling lengthens the winch cable224, lowering the mast120from the upright orientation (FIG.8) to a lowered orientation (FIG.6). The height of the mast righting assistance column subassembly204defines the force application angle respective to stowed mast A1, wherein the force application angle respective to stowed mast A1ensures that the winch cable224remains in control of the mast120throughout the entire lowering process. As the mast120is lowered, the mast120is guided into the mast support crutch152(block552). Once the mast120is seated and adequately supported within the mast support crutch152, the operator can then disconnect the mast control line126and the winch cable224from one another by disconnecting the shackle228(block554). The mast120is separated from the mast pivot assembly140by reversing the process described above (block556). The mast120is positioned and secured for transport (block560). Although the mast righting assistance assembly200is designed for use in stepping and unstepping a mast120, the mast righting assistance assembly200can be adapted for other applications. For example, the mast righting assistance assembly200can be used to raise and move logs, seat logs onto a log splitter, provide aid at an accident (such as by moving one or more vehicles), move or lift a motorcycle, lift objects off a person, and the like. Although the exemplary processes described above employ the mast control line126secured to the mast120, the winch cable224can be secured to a gin pole.
The mast righting assistance assembly200can be assembled to the trailer coupler188and supported solely by the ball, thus enabling rotation of the mast righting assistance assembly200to further aid in stepping or unstepping of the mast120. The above-described embodiments are merely exemplary illustrations of implementations set forth for a clear understanding of the principles of the invention. Many variations, combinations, modifications or equivalents may be substituted for elements thereof without departing from the scope of the invention. Therefore, it is intended that the invention not be limited to the particular embodiments disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all the embodiments falling within the scope of the appended claims.
ELEMENT DESCRIPTIONS
Ref No.  Description
100  sailboat
110  hull
112  deck
114  bulwark
116  keel
118  rudder
119  rudder stowage pivot support
120  mast
122  mast interior wall
126  mast control line
127  forestay fitting
128  lateral motion control member
129  fore rigging member
130  bow pulpit
140  mast pivot assembly
142  mast pivot hinge
144  mast pivot hinge arm
146  pivot to mast assembly feature
148  mast receiving base feature
150  mast support column
152  mast support crutch
180  trailer
182  trailer frame
183  trailer wheel
184  trailer bed
188  trailer coupler
200  mast righting assistance assembly
202  mast righting assistance operational subassembly
204  mast righting assistance column subassembly
210  trailer hitch extension subassembly
212  trailer hitch extension receiver tube
213  trailer hitch extension receiver tubular interior wall
214  trailer hitch extension receiver locking aperture
218  trailer hitch extension receiver locking member
219  trailer hitch extension receiver locking member retention aperture
220  winch subassembly
222  winch motor
224  winch cable
225  winch cable free end loop
226  block
228  shackle
232  trailer hitch extension insert
234  trailer hitch extension insert locking aperture
240  column subassembly receiver
243  column subassembly receiver tubular interior wall
244  column subassembly receiver locking aperture
248  column subassembly receiver locking member
250  base column member
252  base column member adjustment apertures
254  base column member tubular interior wall
258  first intermediate column height locking member
260  central column member
262  central column member adjustment apertures
264  central column member tubular interior wall
268  second intermediate column height locking member
270  upper column member
272  upper column member adjustment apertures
274  upper column member tubular interior wall
276  upper column member block attachment aperture
278  block attachment member
279  block attachment member nut
280  winch controller
282  winch controller collection button
284  winch controller dispense button
300  vehicle
310  vehicle hitch receiver assembly
312  vehicle hitch receiver tube
313  vehicle hitch receiver tubular interior wall
314  vehicle hitch receiver locking aperture
318  vehicle hitch receiver locking member
319  vehicle hitch receiver locking member retention aperture
320  vehicle battery
330  trailer hitch assembly
332  trailer hitch insert
334  trailer hitch insert locking aperture
336  trailer hitch ball mount
338  trailer hitch ball
400  mast raising flow diagram
410  assemble mast to hinge assembly step
420  establish desired height for column step
422  secure block to top of column step
424  thread winch cable through block step
430  connect winch cable and mast control line to one another step
432  secure lateral control members step
440  raise mast by retracting cable using winch step
450  secure forestay when mast is upright step
452  disconnect winch cable and mast control line from one another step
460  launch sailboat step
500  mast lowering flow diagram
510  assemble mast to hinge assembly step
520  establish desired height for column step
522  secure block to top of column step
524  thread winch cable through block step
530  connect winch cable and mast control line to one another step
532  secure lateral control members step
540  remove slack from winch cable step
542  release forestay step
550  lower mast by unspooling cable using winch step
552  seat mast within crutch step
554  disconnect winch cable and mast control line from one another step
556  disassembly mast base from mast pivot assembly step
560  position and secure mast for transport step
A1  force application angle respective to stowed mast
A2  force application angle respective to partially raised mast
A3  force application angle respective to raised mast
11858591 | DETAILED DESCRIPTION Various embodiments are discussed in detail below. While specific embodiments are discussed, this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the present disclosure. In this disclosure and claims, various ranges are identified. Unless context or language indicates otherwise, these ranges include the end points and all the sub-ranges contained therein. FIG.1is a perspective view of a boat100having a side shade assembly200. The boat100includes a bow106, a stern108, and a deck109between the bow106and the stern108. The boat100also has a longitudinal centerline110extending from the bow106to the stern108, which divides the boat100and the deck109into a port side111and a starboard side112. As used herein, directional terms forward (fore), aft, inboard, and outboard have their commonly understood meaning in the art. Relative to the boat100, forward is a direction toward the bow106, and aft is a direction toward the stern108. Likewise, inboard is a direction toward the longitudinal centerline110of the boat and outboard is a direction away from the longitudinal centerline110. Similarly, port is a direction towards the port side111and starboard is a direction towards the starboard side112. The side shade assembly200includes a side shade cover205attached to the port side111of the boat100. The side shade cover205provides cover to passengers from low angles of the sun. In this embodiment, there is another side shade assembly201shown inFIG.1with another side shade cover206on the starboard side112.FIGS.2and3show front and rear views, respectively, of the boat inFIG.1, andFIG.4shows a perspective view of the port side of the boat100inFIG.1. While the following discussion generally references the side shade assembly200and side shade cover205on the port side111, the discussion equally applies to the side shade assembly201and side shade cover206on the starboard side112. The boat100has a hull121, which includes the bow106, the stern108, a port hull side122, and a starboard hull side124. The port hull side122and the starboard hull side124of the hull121may have a port gunwale126and a starboard gunwale128, respectively, that rise above the level of the deck109along the edges of the boat100on the port hull side122and the starboard hull side124, respectively. Other boat types, such as pontoon boats, for example, have a fence, a railing, or another type of safety barrier along the edge of the deck109. The boat100may have one or more cleats442,443(seeFIG.15) attached to the boat100, and more specifically to the hull121, the port hull side122, the starboard hull side124, the port gunwale126, and/or the starboard gunwale128. Alternatively, the cleats442,443may be attached to the deck109or, for a pontoon boat, to a fence or a railing. The cleats442,443facilitate tying the boat100to a dock and securing other equipment or objects to the boat100. The boat100shown inFIG.1is a bow rider driven by a single inboard motor connected to a propeller by a drive shaft. Although the side shade assembly200is shown and described with reference to a bow rider, the side shade assembly200is applicable to any suitable type of boat, including cuddies, center consoles, pontoon boats, and cruisers, for example. Likewise, the boat100may use other propulsion systems, including but not limited to outboard motors, jet drives, stern drives, and the like.
The boat100includes one or more seating areas for passengers. Any suitable type of seating area may be used, including, for example, those described in U.S. Patent Application Publication Nos. 2020/0130786 and 2018/0314487, which are incorporated by reference herein in their entireties. In the embodiment shown inFIG.1, the boat100is a bowrider with a bow seating area114positioned in the bow106of the boat100. The boat100also has a primary seating area115(sometimes also referred to as the cockpit) positioned aft of a windshield116. In addition, the boat100includes a stern seating area117, which may be configured in a forward-facing configuration or an aft-facing seating configuration. Additionally, the primary seating area115includes a control console118for operating the boat100. The control console118can be positioned in the cockpit on either the port side111or the starboard side112of the boat100proximate to and aft of the windshield116. Other types of boats, including cuddies, center consoles, pontoon boats, paddle boats, and cruisers, for example, may have one or more of the seating areas114,115,117, as well as additional different seating areas. As shown inFIG.2, the boat100has a tower130, which may be used for towing a water sports participant, storing water sports equipment, and/or supporting other accessories. Any suitable type of tower may be used including, for example, those described in U.S. Pat. Nos. 9,580,155; 6,539,886; and 10,150,540, which are incorporated by reference herein in their entireties. The tower130may be supported by one or more vertical supports. For example, the tower130may be supported by one or both of a port leg131and a starboard leg132on the port side111and the starboard side112of the boat100, respectively. In some embodiments, a lower portion of the port leg131and the starboard leg132may be attached to the port gunwale126and the starboard gunwale128, respectively, using any suitable means including, for example, bolts, fasteners, welding, and the like. In some embodiments, such as the example of the boat100depicted inFIGS.1through4, the port leg131and the starboard leg132are mirror images of each other. In other embodiments, the port leg131and the starboard leg132may have an asymmetric construction. As shown inFIG.3, the tower130may include a header133, which is connected to upper portions of the port leg131and/or the starboard leg132and spans the deck109of the boat100at a height suitable for passengers to pass underneath while standing. The header133may be attached to the port leg131and/or the starboard leg132using any suitable means including, for example, bolts, fasteners, welding, and the like, or may be integrally formed with the upper portion of the port leg131and/or the upper portion of the starboard leg132. For example, when aluminum tubing is used for both the upper portion of the port leg131, the starboard leg132, and the header133, all three of these components may be formed by bending a single piece of aluminum tubing. The tower130provides a location on which to mount a top shade cover120to protect the occupants of the boat100from the elements (e.g., sun, rain, etc.). Any suitable type of top shade cover may be used including, for example, the bimini top described in U.S. Pat. No. 10,286,982, which is incorporated herein by reference in its entirety. The top shade cover120, which also may be referred to as a bimini top or a bimini, may be movable between a stowed position and a deployed position. 
For example, the top shade cover120may be a weather-proof or weather-resistant canvas, which may be rolled up or folded in the stowed position when not in use. In this example, the top shade cover120is supported by a bimini frame140, which may be pivotally attached to the port leg131and the starboard leg132. The bimini frame140pivots about this attachment to move between the stowed position and the deployed position, causing the top shade cover120to fold out of the way in the stowed position and to extend over the deck109in the deployed position. Alternatively, the top shade cover120may be a hard-top cover. The hard-top cover may be a plastic, metal, or other rigid material that is waterproof, at least partially opaque to light, and may be protective against ultraviolet radiation. In such embodiments, the hard-top cover may be a stand-alone cover with vertical supports for support above the deck109. These vertical supports may be mounted to one or more of the port gunwale126, the starboard gunwale128, the tower130(e.g., the port leg131and/or the starboard leg132), and the deck109. For the boat100showing inFIGS.1through4, the top shade cover120discussed above is mounted to and supported by a bimini frame140with vertical and transverse supports, which is attached to the tower130(e.g., as shown inFIG.5). The vertical supports of such a bimini frame140support the top shade cover120above the deck109, and the transverse supports support the top shade cover120between the vertical supports over the deck109. The top shade cover120alternatively may be mounted in other locations. For example, the top shade cover120may be used on boats without a tower130, as a stand-alone bimini. The vertical supports of the bimini frame140would then be mounted directly to the port gunwale126, the starboard gunwale128, and/or to the deck109. The top shade cover120covers at least a portion of the deck109. For example, the top shade cover120may be positioned directly over one or more of the seating areas of the boat100, such as the bow seating area114, the primary seating area115(including the control console118), and the stern seating area117. For types of boats other than the bowrider shown inFIGS.1through4, the top shade cover120may be positioned to cover, at least partially, other types of seating areas. The top shade cover120shown inFIG.1has a forward edge141, an aft edge142, a port edge143, and a starboard edge144, due to its rectangular shape, though other shapes with fewer or more edges can be used. The top shade cover120may extend over at least a majority (e.g., greater than 50%) of the primary seating area115. For example, in some embodiments, the top shade cover120extends over the entire extent of the primary seating area115forward of the tower and including the control console118. Although the aft edge142in this embodiment is positioned over the forward portion of the primary seating area115, it is not so limited. In other embodiments, the aft edge142may be positioned so as to cover the entirety of the primary seating area115, or may even be positioned over the stern seating area117, in such a manner that the top shade cover120also provides cover to at least a portion of the stern seating area117. Likewise, the forward edge141may be positioned over the bow seating area114, in such a manner that the top shade cover120also provides cover to at least a portion of the bow seating area114. 
The top shade cover120may extend over at least a majority of the width of the boat100over the seating areas, and more preferably over the entire width of the boat100from the port hull side122to the starboard hull side124. Based on the distance of the port edge143and the starboard edge144relative to the longitudinal centerline110, the top shade cover120may extend over the full beam width of the boat100, or over a portion of the full beam width, as measured at widest extent from the port side111to the starboard side112. For example, if the port edge143and the starboard edge144are positioned above the port gunwale126and the starboard gunwale128, respectively, then the top shade cover120provides cover to the full beam width of the boat100. In some embodiments where the top shade cover is not rectangular, the distance from the port edge143and the starboard edge144to the longitudinal centerline110may vary depending on position along the longitudinal centerline110, so that the top shade cover120does not provide equal cover to the boat100from the aft edge142to the forward edge141. As discussed, the top shade cover120is positioned in some embodiments directly above one or more of the seating areas114,115,117(including control console118) of the boat100. While the top shade cover120provides shade to passengers seated in the seating areas114,115,117when the sun has a high angle, there are hours (e.g., at some latitudes, from 9:00 to 11:00 in the morning, and from 6:00 to 8:00 in the evening) when the sun is low enough that the top shade cover120does not provide adequate shade to the passengers. During such hours, it still can be quite hot and there can still be substantial ultraviolet exposure, so shade from the sun is desirable. To provide coverage during these hours, the side shade cover205is positioned outboard beyond the deck109, to create shade for one or more of the seating areas114,115,117. After deployment of side shade cover205, one or more of the seating areas114,115,117are at least partially covered at an angle. The side shade cover205may have various geometries including those that have multiple edges. The side shade cover205may, for example, be generally triangular having three corners, such as the side shade cover shown inFIGS.1through4. The side shade cover alternatively may have a quadrilateral shape having four corners, such as a trapezoidal shape (seeFIG.10) or the shapes shown inFIGS.18A and18B. When extended in position to provide shade to the seating areas114,115,117, the side shade cover205may have a leading edge207positioned closer to the bow106and a trailing edge208positioned closer to the stern108. In addition, the side shade cover205may have an inboard edge209and an outboard edge210. The side shade cover205may be positioned to provide shade to any single one or any combination of the seating areas114,115,117. In the example ofFIG.1, the leading edge207of the side shade cover205is positioned closer to the bow106than the forward edge141of the top shade cover120. In other embodiments, such as the example ofFIG.10andFIG.15, the trailing edge208of the side shade cover is positioned closer to the stern108than the aft edge142of the top shade cover120. The positions of the leading edge207and the trailing edge208of the side shade cover205may be varied to provide different levels of cover to the bow seating area114, the primary seating area115, and the stern seating area117. In some embodiments, the side shade cover205is attached to the top shade cover120. 
The port edge143and the starboard edge144of the top shade cover120also may provide attachment points for the side shade cover205. For example, the side shade cover205, which is on the port side111of the boat100, may attach to at least the port edge143of the top shade cover120along at least the inboard edge209of the side shade cover205, and in some cases also at least partially along the leading edge207and the trailing edge208of the side shade cover205, to provide continuous shade and cover to portions of the seating areas114,115,117with the top shade cover120. Likewise, the side shade cover206, which is installed on the starboard side112of the boat100, may attach to at least the starboard edge144of the top shade cover120along at least the inboard edge209of the side shade cover206. The side shade cover205may be attached to the top shade cover120with a fastener, such as a zipper, along at least a portion of at least one edge of the top shade cover120(such as the port edge143). The use of a fastener allows the side shade cover205to be detachably connected to the top shade cover120. Alternatively, the side shade cover205and the top shade cover120may be a single piece of material, with the side shade cover205stowed by rolling or folding when not in use. In some embodiments, such as when the top shade cover120is a hard-top, the side shade cover205may be a retractable roller shade, which retracts into and extends out of the top shade cover120. In the example ofFIG.1, the side shade cover205and the top shade cover120are made of a weather-proof or weather-resistant canvas, which can be rolled or folded to fill a compact volume in a stowed position. Canvas is a suitable material, due to providing weather-proof protection from rain and water, and being opaque enough to provide sufficient shade and ultraviolet protection from the sun. Another suitable material is PhiferTex®, made by Phifer, Inc. of Tuscaloosa, Ala., or other similar materials with a mesh construction that block a percentage of light (e.g., 50% to 75%) and also provide ultraviolet protection. Such mesh-type materials permit airflow through the material, which is advantageous in preventing a parachute effect when the boat100is underway. Those skilled in the art, however, will recognize that any material suitable for use in an outdoor marine environment and having other suitable characteristics for performing some or all of the functions discussed, as well as other functions (for example, strength, wear resistance, etc.), may be used. Suitable materials include, but are not limited to, canvas, stainless steel, plastic, fiberglass, metal, PhiferTex®, and/or any combination of these and other suitable materials. When deployed outboard, the side shade cover205may incline upwards or downwards from its attachment point to the top shade cover120. In some cases, the height of the side shade cover205above the deck109may vary along the longitudinal centerline110. The height of the side shade cover205may be adjusted to provide shade at a variety of angles of the sun relative to the horizon, and may be adjusted to provide different amounts of shade to the seating areas114,115,117. The farthest outboard edge of the side shade cover205may have a vertical height above the deck in line with the eye level of a seated passenger, or higher, in order to provide protection along the waterline from the sun at very low angles. 
In embodiments where the side shade cover205has a triangular shape, at least two of the leading edge, trailing edge, inboard edge, and outboard edge may be the same edge. In embodiments where the side shade cover205has a quadrilateral shape, the leading edge, the trailing edge, the inboard edge, and the outboard edge may all be different edges. The edges of the generally triangular shape and the quadrilateral shape are not limited to rectilinear edges, but may instead have curved edges (e.g., the port and starboard side shade covers405,406shown inFIGS.18A and18B). In some embodiments, the side shade cover205has a custom shape adapted for the specific shape of the boat100, and in such embodiments may have different shapes for the port side111and the starboard side112. For example, a side shade cover205intended for installation on the port side111may be a mirror image of a side shade cover206on the starboard side112. Other embodiments may have shapes with more than four edges or four corners. The shape of the side shade cover205may be varied to avoid interfering with other equipment on the boat100, such as board racks, tow lines, and other accessories mounted to the tower130. The position of the side shade cover205can be adjusted at a downwards angle to increase shade coverage from the sun, but not so low that seated occupants in the boat cannot see below the side shade cover205and outside the boat100towards the horizon. Due to the outboard deployment and downward angle, the side shade cover205permits visibility towards the horizon with a far larger field of view than a curtain that is only vertical (e.g., a side curtain). In order to provide effective shade, the side shade cover205may be at least partially opaque. The downward angle allows for an unobstructed field of view towards the horizon and for ventilation, without sacrificing the ability to provide shade. As shown inFIG.4, the side shade assembly200includes a support frame211. The support frame211supports the side shade cover205for mounting and secure attachment to the boat100. The support frame211includes one or more support struts (e.g., support struts215and216) that engage and support the side shade cover205at one end and engage the boat100at the other end. For example, the support struts215,216may be engaged with corners or edges of the side shade cover205using hooks that engage stitched and reinforced holes in the side shade cover205, pockets or sleeves that receive the end of the support struts215,216, or any other suitable mechanism. While the side shade assembly200is shown attached to the port side111of the boat100, the side shade assembly200(or, depending on the configuration of the boat100, another side shade assembly201that is a mirror image) also could be attached to the starboard side112of the boat100. In some embodiments, the side shade assembly200can be configured to interchangeably attach to either side of the boat100or as noted above, multiple side shade assemblies200,201can be used, such as one on each of the port side111and the starboard side112of the boat100, as shown inFIGS.1through4. In some embodiments, the side shade assembly200,201includes side shade covers for both the port side111and the starboard side112. FIG.5provides a detail view of the side shade assembly200, here attached to the port side111of the boat100. However, this discussion applies equally when the side shade assembly200is attached to the starboard side112of the boat. 
In this example, the support frame211of the side shade assembly200is mounted to the port leg131of the tower130, although the support frame211alternatively could be attached to other portions of the boat100, such as the port gunwale126or the deck109. In this example, the support frame211has a bracket212that attaches directly to the port leg131of the tower130. Two support struts215,216extend from the bracket212to provide tension to the side shade cover205, so as to extend the side shade cover205taut in the outboard position beyond the deck109. In other words, the support struts215,216provide tension to the side shade cover205when extended outboard. The mounts for the support struts215,216on the bracket212may be pivotable and rotatable to allow the support struts215,216to have adjustable positions in some embodiments. The support struts215,216define the angle of the side shade cover205relative to the deck109(or alternatively, relative to the top shade cover120). In some embodiments, the angle is adjustable, for example, by pivoting the angle of the support struts215,216within the housing of the support frame211and/or extending or shortening the length of the support struts215,216. Various suitable mechanisms for changing the length of the support struts215,216can be used, such as those discussed with respect toFIGS.19A and14C. As discussed with reference toFIGS.1through4, the side shade cover205has multiple corners and edges, depending on its shape. The support struts215,216are anchored at one end to the support frame211, which is attached to the port leg131of the tower130, and are engaged with the corners or edges of the side shade cover205at their other ends to provide tension and keep the side shade cover205taut at the desired angle. The front and rear views of the boat100, shown inFIGS.2and3, respectively, illustrate how the side shade cover205extends outboard beyond the deck109in a direction away from the longitudinal centerline110at a downwards angle relative to the deck. The angle and width of the side shade cover determine the amount of shade provided to passengers in the boat100. FIG.6is a cross-sectional schematic that illustrates the geometry of the side shade cover206for a seated passenger155in the boat100, who is seated in the primary seating area115near the edge of the boat100on the starboard side112, with their eye level above the deck109and the starboard gunwale128. The top shade cover120, located at a height h above the deck109, provides shade to the passenger155from directly above, i.e., when the sun is at position A, at an elevation angle of 90° relative to the deck109. Since the top shade cover120extends out at maximum to the edge of the deck109and gunwale128, the top shade cover120is only able to provide shade for angles of the sun in the sky from position A (90°) to position B, represented by angle α (relative to the deck109). Angle α may range from 89° to 60° in some embodiments, though the actual maximum angle α for the top shade cover120is dependent on the boat geometry and the position of the passenger155. For another passenger156seated farther inboard than passenger155, e.g., seated in the stern seating area117, the angle α will decrease (and be different than for passenger155). If a passenger were seated directly on the starboard gunwale128, then the angle α would be 90°, i.e., the top shade cover120could not provide shade at any position other than position A.
For positions of the sun in the sky lower than position B with an angle relative to the deck109that is smaller than angle α, the top shade cover120is unable to provide shade to the passenger155seated in the position shown inFIG.6. As a result, to provide shade at these lower angles of the sun, side shade cover206may be deployed at an angle β relative to the deck109. Depending on the configuration of the side shade cover206, the angle β of the side shade cover206can vary from 0° to 90°. At large angles of β (e.g., from 75° up to 90°), the side shade cover206also may be used as a water intrusion inhibitor. In other words, the side shade cover206may protect the passenger155from water splashing into the boat from waves, wakes, wind, rain, etc. by lowering the side shade cover206further. This protection against water comes at the expense of visibility towards the horizon, though in such cases where water protection is desired, shade may not be the passengers' primary concern. Therefore, the side shade cover206functions not just as a protection from the sun but also as a protection from the water. Note that, in some embodiments, the side shade cover206may no longer extend outboard beyond the deck109where angle β is 90°. For example, if the support struts (e.g., support struts315,316,317) are removed, then the side shade cover206would no longer have support to extend beyond the deck109, and would instead hang downwards from the attachment point (e.g., fastener345) along the outboard edge (e.g., starboard edge144) of the top shade cover120. The side shade cover206could then be secured in the vertical (β=90°) position, for example by fastening the outboard edge of the side shade cover206to portions of the gunwales (e.g., cleats442,443). In this position, however, the advantages of visibility relative to a side curtain are lost. The advantages of a side shade cover206(when configured such that β<90°) relative to a side curtain (β=90°) are discussed further with reference toFIG.9. As shown inFIG.7, at an angle β of 0°, the side shade cover206extends horizontally beyond the starboard gunwale128and the deck109. At non-zero values of β, the side shade cover extends at an angle, which may be upwards (for negative values of β) or downwards (for positive values of β). However, common ranges of values for β may be from 25° to 75°, or more commonly from 45° to 60°. These ranges are examples of effective ranges for providing shade at positions of the sun from position B to position C. These ranges are effective for providing shade without obstructing (or at least minimizing the obstruction of) the field of view of a passenger155towards the horizon or horizontally along the plane of the deck109, as discussed further with respect toFIG.9. Position C, the lowest position of the sun for which the side shade cover206provides shade, is represented by angle γ, which may range from 15° to 45° in some embodiments, though the actual minimum angle γ is dependent on the boat geometry and the position of the passenger155. For the other passenger156seated more inboard, with the side shade cover206in the same position, the angle γ will be lower than for passenger155. In the discussion above, the angles α, β, and γ are all defined relative to the deck109. However, one or more of these angles may alternatively be defined relative to the top shade cover120.
For example, if the top shade cover120is curved with a different height h at different positions along the longitudinal centerline110, then the side shade cover206can be configured to also vary to provide consistent shade for different positions of the sun (A, B, C)—e.g., consistent values of α, β, and γ along the centerline, or different values of α, β, and γ depending on the amount of shade desired for each of the seating areas114,115,117. As noted above, the side shade cover206may be in a horizontal position, corresponding to an angle β of 0°. However, as shown inFIG.7, a horizontal side shade cover213typically extends farther outboard from the boat100to provide the same cover to occupants of the boat as a downwardly-angled side shade cover206, and therefore would require more structural support. Consider a side shade cover206of length d0extended outboard at a non-zero angle β to provide shade to passenger155for angles of the sun higher than a given angle γ. In order to provide the same shade coverage to passenger155, a horizontal (i.e., β=0°) side shade cover213must have a longer length d1. The required length can be expressed according to the following Equation (1):
d1 = d0(cos β + sin β/tan γ)   (1)
FIG.8shows a plot of Equation (1) for different values of β and γ. For a very high angle of the sun (e.g., γ=60°), the length d1of the horizontal side shade cover ranges from at maximum 11% longer than the length d0of the angled side shade cover (for low values of β, e.g. β smaller than 60°), to being at minimum 60% of the length of d0(for high values of β, e.g. β greater than 60°). For an intermediate angle of the sun (e.g., γ=45°), the length d1of the horizontal side shade cover ranges from nearly equal to d0to 41% longer than d0. For a low angle of the sun (e.g., γ=30°), the length d1of the horizontal side shade cover is always longer than d0, up to twice as long. For even lower angles of the sun (e.g., γ=15°), the length d1of the horizontal side shade cover is almost four times longer. The advantage of having a shorter length side shade cover (d0<d1) is that the support frame211requires fewer and/or shorter support struts215,216to support a side shade cover205when it is angled (β>0) than when it is horizontal (β=0). Further examples for various values of β and γ of d1relative to d0calculated from Equation (1) are provided in Table 1. The value of d0is assumed to be a unit length, so the value of d1shown is the ratio of d1to d0. In some embodiments, the side shade cover205may be configured at different angles β as needed, by adjusting the position of one or more support struts215,216, in order to provide shade coverage at different times of day (i.e., different values of γ). The length of the side shade cover accordingly may be adjusted depending on the configured angle β, for example by rolling up the side shade cover205if made of canvas, as well as other suitable contemplated mechanisms.
TABLE 1
γ (deg)   β (deg)   d1/d0
15        75        3.9
15        60        3.7
15        45        3.3
15        30        2.7
30        75        1.9
30        60        2.0
30        45        1.9
30        30        1.7
45        75        1.2
45        60        1.4
45        45        1.4
45        30        1.4
60        75        0.8
60        60        1.0
60        45        1.1
60        30        1.2
The side shade cover extends outboard beyond the deck and the hull, with an angle β<90° relative to the deck109(or, in some embodiments, relative to the top shade cover120). In contrast, a side curtain hangs directly downwards, at an angle β=90° (i.e., perpendicular to the deck109and/or the top shade cover120).FIG.9illustrates a comparison between a side shade cover206at an angle β relative to the deck109and a side curtain214.
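As a numerical check of Equation (1) and Table 1 above, the short Python sketch below (not part of the original disclosure) evaluates the ratio d1/d0 = cos β + sin β/tan γ for the combinations of β and γ listed in Table 1; rounding the results to one decimal place reproduces the tabulated values.

    import math

    # Equation (1): ratio of the horizontal cover length d1 to the angled cover length d0.
    def length_ratio(beta_deg, gamma_deg):
        beta = math.radians(beta_deg)
        gamma = math.radians(gamma_deg)
        return math.cos(beta) + math.sin(beta) / math.tan(gamma)

    # Combinations of gamma (lowest shaded sun angle) and beta (cover angle) from Table 1.
    for gamma_deg in (15, 30, 45, 60):
        for beta_deg in (75, 60, 45, 30):
            print(f"gamma={gamma_deg:2d} deg  beta={beta_deg:2d} deg  "
                  f"d1/d0={length_ratio(beta_deg, gamma_deg):.1f}")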
In this example, both the side shade cover206and the side curtain214are the same length d0, and both attach to the starboard edge144of the top shade cover120. Due to the obstruction by the side curtain214, the line of sight for a passenger155makes an angle of δ relative to the deck109as the passenger155looks outside the boat100. In this example, the passenger155is standing, but alternatively could be seated, and the same discussion still would apply. The distance d2to where the passenger's155line of sight intersects the horizontal plane of the deck109is therefore defined by Equation (2):
d2 = h2/tan(δ)   (2)
In Equation (2), h2is the eye level of the passenger155above the deck109. If the side shade cover206is installed instead of the side curtain214, the line of sight makes an angle ε relative to the deck109. The distance d3to where the passenger's155line of sight intersects the horizontal plane of the deck is then defined by Equation (3):
d3 = h2/tan(ε)   (3)
Since the side curtain214and the side shade cover206have the same length d0, and since the angle β of the side shade cover206is less than the angle (90°) of the side curtain, it can be shown that the angle δ is greater than the angle ε. Accordingly, tan(δ) is greater than tan(ε), but due to the inverse in Equation (2) and Equation (3), this makes the value of d3greater than d2. In other words, by angling the side shade cover206so that it extends outboard beyond the boat100, the view of the passenger155is substantially less obstructed (a numerical illustration of this comparison is provided below). Higher angles β of the side shade cover206provide geometrically greater distance of view beyond the boat, compared to a side curtain214. In order to provide effective shade coverage for a wide range of angles of the sun (e.g., γ=15° to 60°), the range for β preferably is from 30° to 75°. At low angles of the sun (γ=15° to 45°), the range for β preferably is from 45° to 75°. At high angles of the sun (γ=45° to 60°), the range for β preferably is from 30° to 45°. These values of β for the side shade cover provide a balance between effective shade coverage without unwieldy length and preservation of field of view. In other words, these preferred ranges of β for the side shade cover are high enough to provide equivalent coverage to a horizontal side shade cover, but with shorter length, requiring less structural support since the angled side shade cover does not extend as far outboard. These preferred ranges are also low enough to provide substantially increased field of view towards the horizon compared to a side curtain of equal length. The side shade assembly200shown inFIG.5uses two support struts215,216. In other embodiments, the support frame211may use a different number of support struts, such as a single support strut, three support struts, or more than three support struts. For example,FIG.10shows an embodiment of a side shade assembly300that includes a side shade cover305supported by three support struts315,316,317. Though shown inFIG.10mounted to the starboard side112of the boat100, the side shade assembly300could be mounted on the port side111, or an assembly could be mounted on each side. The side shade assembly300shown inFIG.10is similar to the embodiment of the side shade assembly200discussed above with respect toFIGS.1through5, and like reference numerals have been used to refer to the same or similar components. A detailed description of these components will be omitted, and the following discussion focuses on the differences between these embodiments.
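As a rough numerical illustration of the field-of-view comparison expressed in Equations (2) and (3), the following Python sketch uses a simple two-dimensional model with hypothetical dimensions (the heights, lengths, and offsets below are illustrative only and are not taken from this disclosure). The line-of-sight angle is taken from the passenger's eye to the lowest outboard edge of the side curtain or of the angled side shade cover, and the deck-plane distance then follows from Equations (2) and (3).

    import math

    # Hypothetical two-dimensional geometry (illustrative values only).
    h = 2.1       # height of the top shade cover edge above the deck, m
    h2 = 1.6      # eye level of the passenger above the deck, m
    d0 = 1.0      # length of the side curtain / side shade cover, m
    x = 0.8       # horizontal distance from the eye to the cover attachment edge, m
    beta = math.radians(50)   # deployment angle of the side shade cover

    def deck_distance(edge_height, edge_offset):
        # Grazing line-of-sight angle below horizontal, then d = h2 / tan(angle)
        # per Equations (2) and (3).
        angle = math.atan2(h2 - edge_height, edge_offset)
        return math.degrees(angle), h2 / math.tan(angle)

    # Side curtain (beta = 90 deg): lower edge hangs straight down from the attachment edge.
    delta, d2 = deck_distance(h - d0, x)

    # Angled side shade cover: lower edge is d0*sin(beta) below and d0*cos(beta) outboard
    # of the attachment edge.
    epsilon, d3 = deck_distance(h - d0 * math.sin(beta), x + d0 * math.cos(beta))

    print(f"side curtain: delta = {delta:.1f} deg, d2 = {d2:.1f} m")
    print(f"angled cover: epsilon = {epsilon:.1f} deg, d3 = {d3:.1f} m")

With these assumed numbers the curtain limits the view to roughly 2.6 m from the passenger, while the angled cover extends the visible deck-plane distance to roughly 8.7 m, consistent with d3 being greater than d2.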
Any of the various features discussed with any one of the embodiments discussed herein may also apply to and be used with any other embodiments. Though similar to the support struts215,216described with reference toFIG.5, the support struts315,316,317attach directly to the starboard gunwale128in this embodiment. In other embodiments, the support struts315,316,317may attach to other suitable locations on the boat100, such as the port leg131or the starboard leg132of the tower130, or to the deck109. FIG.11shows a cross-section aft view of the boat100and the side shade assembly300taken along section line11-11inFIG.10.FIG.12shows an overhead view of the boat100and the side shade assembly300fromFIG.10, with the side shade cover305and the top shade cover120omitted for clarity. InFIGS.11and12, the side shade assembly300is shown mounted to the starboard side112of the boat100, though the side shade assembly300could be mounted on the port side111, or an assembly could be mounted on each side. As shown, the top shade cover120provides shade coverage for at least one of the seating areas114,115,117(including the control console118) when the sun is directly overhead, and the side shade cover305provides shade coverage when the sun is at an angle from the starboard side112. Some embodiments of the side shade assembly200,300,400discussed herein may be removable and modular. For example, the side shade assembly300may be disassembled into its component parts (e.g., the side shade cover305and the support struts315,316,317) for easy storage and stowing when not in use. As shown in the example ofFIGS.11and12, the side shade cover305is attached with a fastener345, such as a zipper, to the top shade cover120, more specifically to the starboard edge144of the top shade cover120. The side shade cover305can be easily attached and removed as desired using the fastener345. Preferably, the fastener345, such as a zipper, creates a watertight seal between the side shade cover305and the top shade cover120. The support struts315,316,317may be removably attached to the boat100and the side shade cover305. For example, each of the support struts315,316,317may have a hook at one end, which engages with a respective loop, grommet, or ring351,352,353in the side shade cover305. The ring351,352,353may be metal or plastic, or may be a loop of the same material as the side shade cover305. The support struts315,316,317also may be removably attached at the other end to the deck109or the starboard gunwale128. For example, the starboard gunwale128may have hollow receivers, into which the support struts315,316,317are inserted. The support struts315,316,317may be further secured in the receiver by a pin, a strap, a locking button, threads, or other locking and securing mechanisms. FIG.13shows a detail view of support strut317, which has a hook355at one end to engage with a ring353on the side shade cover305. The support strut317also has a joint356, which allows the support strut317to bend by folding, and a latch357to allow the joint356to lock in a fully-extended position. Alternatively, two or more portions of the support strut317may be pivotably connected to each other with a joint, such that the support strut317is fully extended when the portions are pivoted to extend in opposite directions. In this example, the other end of the support strut317opposite from the hook355has threads360which allow the support strut317to be screwed into a receiver362on the starboard gunwale128.
Other methods of attaching the support strut317can be used, including having a hook (not shown) at the other end that engages with a loop, bracket, or hole on the starboard gunwale128, the starboard leg132, or the deck109. The other support struts315,316in this example may be identical to and interchangeable with support strut317. FIGS.14A through14Cillustrate an example of an installation process for the side shade assembly300described inFIGS.10through12. Similar steps may apply to other embodiments, such as the side shade assembly200or the side shade assembly400. The steps may be performed in a different order than the order in which they are described below. FIG.14Ashows a first step in installing the side shade assembly300, which is to attach the side shade cover305to the top shade cover120using the fastener345. The side shade cover305will then hang down the side of the starboard leg132, since there is nothing at this stage of installation to provide support in the outboard position. FIG.14Bshows a second step in installing the side shade assembly300, which is to engage the hook355of the support strut317with a ring353of the side shade cover305. In addition, the receiver end of the support strut317is inserted into a threaded receiver362on the starboard gunwale128and secured, by screwing the support strut317into the receiver362such that the threads360are engaged. Note that engaging the hook355and inserting the support strut317into the receiver362may be done in reverse order. Further, the support strut317is still in its bent configuration since the latch357has not yet been engaged to securely lock the joint356. FIG.14Cshows a third step in installing the side shade assembly300, which is to press the support strut317in the middle, pivoting and straightening the support strut317until the latch357is engaged and the joint356is securely locked. This puts tension in the side shade cover305and keeps the side shade cover305extended outboard away from the boat100. The angle of the receiver362and the length of the support strut317may be varied or configured to adjust the angle at which the side shade cover305extends outboard. For example, the support strut317may be a type of extendable-length strut, such as a telescoping strut, a sliding strut, or a segmented strut with at least one optional segment that can be sequentially attached to other segments by button locks, twist locks, tension locks, or threaded sockets. Other mechanisms for changing the length of the support strut317or the angle of the receiver are also contemplated. To remove the side shade assembly300, the steps described inFIGS.14A through14Cmay be performed in the opposite order. FIG.15shows another embodiment of a side shade assembly400, mounted in this example on the port side111of the boat100. While similar to the embodiments of the side shade assemblies200,300discussed with respect toFIGS.1through12, a detailed description of these components will be omitted, and the following discussion focuses on the differences between these embodiments. Any of the various features discussed with any one of the embodiments discussed herein may also apply to and be used with any other embodiments. Here, the side shade assembly400includes a port side shade cover405that is supported in the outboard position by a port support frame410. The side shade assembly400also may include a starboard support frame and starboard side shade cover (not shown). 
As shown inFIG.18C, the port support frame410of this embodiment includes a center support strut415as well as lateral support struts417,418to provide additional support to the port side shade cover405in the forward and aft directions. The lateral support struts417,418attach to the port center support strut415at a port central assembly425, forming a T-shape, where the base of the T-shape (the port center support strut415) attaches to the boat100and the arms of the T-shape (the lateral support struts417,418) extend forward and aft. Alternatively, the side shade assembly400may have only a single lateral support strut, or more than two lateral support struts. In addition, the side shade assembly400includes support straps440,441, each of which attaches at one end to one of the lateral support struts417,418and attaches at the other end to the boat100, for example to cleats442,443on the port gunwale126. The support straps440,441may attach to the cleats442,443by being tied, hooked into a loop, or any other suitable securing means. The port center support strut415removably attaches to the port leg131on the tower130in this example by a hook (e.g., hooks416a,416bshown inFIG.19), which engages with a support ring445that is attached to the port leg131, though any suitable method or fastener for attachment or removable attachment is contemplated, including a bracket or a socket. FIG.16shows an alternate mounting location for the port center support strut415, in which the port center support strut415mounts directly to the bimini frame140, instead of to the port leg131of the tower130. The port center support strut415, the port central assembly425, and the lateral support struts417,418are visible inFIG.16, and the port side shade cover405is translucent with a different opacity than the top shade cover120. In other embodiments, the port side shade cover405and the top shade cover120have the same opacity. In the example ofFIG.16, the port center support strut415is supported by a bracket412that attaches directly to the bimini frame140, instead of the port leg131. A detail view of the bracket412is shown inFIG.17, which illustrates a quick-release mechanism413to enable the side shade assembly400to be quickly disengaged from the boat100. FIGS.18A through18Dshow detail views of certain components of the side shade assembly400.FIG.18Ashows the port side shade cover405,FIG.18Bshows the starboard side shade cover406,FIG.18Cshows the port support frame410, andFIG.18Dshows a storage and transport bag411. Although not shown, the starboard support frame is identical. The port side shade cover405is not identical to the starboard side shade cover406since they are mirror images of each other. In order to assist in assembling the side shade assembly400, in this example the starboard side shade cover406has single notches in the fabric, and the port shade cover405has double notches. Other ways to distinguish the port and starboard side shade covers405,406can be employed, such as including stitching or printing the words “port” and “starboard” onto the covers or onto labels attached to the covers. FIG.19Ashows a port center support strut415for the port support frame410.FIG.19Bshows a corresponding starboard center support strut419for the starboard support frame (not shown). The port and starboard center support struts415,419each have hooks416a,416bat one end, for attaching to a deck109, port gunwale126, starboard gunwale128, bimini frame140, or tower130of a boat100. 
In addition, the port and starboard center support struts415,419have a number of button locks421a,421b, which allow the length of the port and starboard center support struts415,419to be adjusted. These adjustments permit the port and starboard side shade covers405,406to be attached at a desired angles outboard. Note that the port and starboard center support struts415,419may be independently adjusted to different lengths if desired, so that the port side shade cover405can be positioned at a different angle of extension outboard beyond the deck than the starboard side shade cover406. Examples of mechanisms for adjusting the length of the port and starboard center support struts415,419are discussed with respect to the side shade assembly300inFIG.14C. FIG.20shows an example of a port central assembly425for the port support frame410. The starboard central assembly for the starboard support frame is not shown, but in this example would be identical. The port central assembly425has aft and forward receivers426,427, each of which receives a lateral support strut417, which is described in more detail with reference toFIG.21. The aft and forward receivers426,427include latching joints428,429that allow a lateral support strut417to be installed in an open position, and then locked into place to provide tension in the aft and forward directions to the port side shade cover405when fully extended outboard. The receivers426,427pivot about the latching joints428,429, and are secured using additional fasteners, locking pins, and/or cables, which may also be used to fully secure lateral support struts to the port central assembly425and to the port side shade cover405when inserted and in the locked position. In some embodiments, the receivers may be secured at an adjustable intermediate angle, to modify the position and angle of the port side shade cover405to provide cover to the seating areas114,115,117at different times of day, as discussed with reference toFIGS.6through9. The port central assembly425for the port support frame410has a central receiver435, which receives an end436of the port center support strut415. The starboard center support strut419has an end437, which is received by an identical central receiver of the starboard frame assembly's center assembly (not shown). The port center support strut415is secured to the central receiver435using rivets, though screws and other suitable mechanisms also can be used. FIG.21shows a detail view of the lateral support strut417for the port support frame410. The port frame assembly may utilize two such struts, one aft and one forward, though only one such strut is shown inFIG.21. Likewise, the starboard frame assembly (not shown) may utilize an additional two such struts. In this example, the aft and forward struts are identical, whereas in other embodiments they may be different, e.g., mirror images, depending on the shape of the port and starboard side shade covers405,406and the dimensions and configuration of the boat100, including other attachments to the tower130such as board racks, tow lines, and other accessories. The lateral support strut417also may include a support strap440, which is attached securely (e.g., with a rivet, a locking pin or button, or other suitable fastener) to the lateral support strut417. The support strap440may be attached to the lateral support strut417at any point along the length of the lateral support strut417, including the opposite end448of the lateral support strut417. 
The other end of the support strap440may secure the lateral support strut417to the boat100(e.g., to a cleat, a gunwale, the deck, etc.), as shown inFIG.15. In this example, the lateral support strut417is inserted at end447to either one of the aft and forward receivers426,427. If used for the port support frame410, the length of the lateral support strut417may be inserted into a sleeve along the outer edge of the port side shade cover405to fully engage and support the port side shade cover405. Likewise, if used for the starboard support frame, the lateral support strut417may be inserted into a similar sleeve of the starboard side shade cover406to fully engage and provide support. As described above, a side shade cover frame assembly can be modular, composed of multiple components. In other embodiments, the frame assembly is a single, integral assembly, which can include lateral support struts, or alternatively not include lateral support struts. Additional mechanisms for securing lateral support struts to the assembly (whether modular or integral) are contemplated, including threaded ends, screws, latches, and button locks. Although this invention has been described with respect to certain specific exemplary embodiments, many additional modifications and variations will be apparent to those skilled in the art in light of this disclosure. It is, therefore, to be understood that this invention may be practiced otherwise than as specifically described. Thus, the exemplary embodiments of the invention should be considered in all respects to be illustrative and not restrictive, and the scope of the invention to be determined by any claims supportable by this application and the equivalents thereof, rather than by the foregoing description. | 50,554 |
11858592 | Corresponding reference characters indicate corresponding parts throughout the several views. The exemplification set out herein illustrates an exemplary embodiment of the invention and such exemplification is not to be construed as limiting the scope of the invention in any manner. DETAILED DESCRIPTION OF THE DRAWINGS For the purposes of promoting an understanding of the principles of the present disclosure, reference is now made to the embodiments illustrated in the drawings, which are described below. The embodiments disclosed herein are not intended to be exhaustive or limit the present disclosure to the precise form disclosed in the following detailed description. Rather, the embodiments are chosen and described so that others skilled in the art may utilize their teachings. Therefore, no limitation of the scope of the present disclosure is thereby intended. Corresponding reference characters indicate corresponding parts throughout the several views. The terms “couples”, “coupled”, “coupler” and variations thereof are used to include both arrangements wherein the two or more components are in direct physical contact and arrangements wherein the two or more components are not in direct contact with each other (e.g., the components are “coupled” via at least a third component), but yet still cooperate or interact with each other. In some instances throughout this disclosure and in the claims, numeric terminology, such as first, second, third, and fourth, is used in reference to various components or features. Such use is not intended to denote an ordering of the components or features. Rather, numeric terminology is used to assist the reader in identifying the component or features being referenced and should not be narrowly interpreted as providing a specific order of components or features. Referring first toFIG.1, a pontoon boat10is shown. Pontoon boat10comprises a driving system50(e.g. a motor), a number of pontoons30, a deck20coupled to the pontoons30, and an enclosure system80coupled to the deck20. The enclosure system80generally defines an exterior and interior to the boat10, wherein the enclosure system80encloses a space on the boat10for users to position themselves. The boat interior formed by the enclosure system80may comprise seating, driving mechanisms, tables, flooring, storage space, or any other boat features as are known in the art. Referring toFIGS.1-3, the enclosure system80comprises a rail system100, and a skin300. The rail system100generally defines a frame or skeleton for the enclosure system80around the boat10, and accordingly may also be referred to as a frame or skeletal system. In other embodiments, the enclosure system80and/or rail system100may only enclose a portion of the boat10, and may also be positioned within the interior of the boat10. For example, the bow and stern of the boat10may have fiberglass body panels while the port and starboard sides of the boat10may have the rail system. When positioned in the interior, the enclosure system80or rail system100may divide the boat into separate sections or provide additional partitions within boat10. Rail system100is not limited to standard “rails” as is known in the art, but may comprise any materials or structural elements to provide a framework to the enclosure system80. The rail system100comprises a number of rails102, a number of support rails500, and a number of rail couplers200. 
Rail couplers200may be rail caps202, rail attachments400, or any other features or devices that may couple to rail system100. Rail couplers200may also be described as rail or exterior features, or rail or exterior connectors. Rails102form the framework of rail system100and provide a primary structure for enclosing the boat10. Rails102may be an upper rail104, a middle rail105(seeFIG.13), a lower rail106, or a support rail500. The support rails500or lower rails102provide structural support to rail system100and may couple components of rail system100to one another. Skin300extends generally between two rails102and forms a wall around or within the boat10. In embodiments, the enclosure includes multiple skin pieces which collectively form a wall around or within the boat10. As shown inFIG.2, skin300may extend between an upper rail104and a lower rail106. Rails102may comprise a single rail extending around the boat10, or multiple pieces coupled together through welds, adhesives, rivets, staples, or any other suitable coupling devices. As shown inFIG.1, rails102are generally horizontal and run approximately parallel to the deck20of the boat10, but in other embodiments rails102may extend in any direction and may be formed into any shape. For example, rails102may curve towards or away from the deck20to form a more stylized rail system100. Rails102may be made from metals, polymers, wood, composites, or any other material to provide desired structural properties for rail system100. In an exemplary embodiment, rails102are formed from an aluminum extrusion. In an exemplary embodiment, rails102are coupled to the boat10through support rails500. In other embodiments, rails102may be coupled directly to boat10(for example, the lower rail102inFIG.1). Support rails500provide structural support to rail system100and may couple any number of support rails102together. Support rails500may be coupled to rails102, deck20, skin300, or other support rails500through welds, adhesives, friction, rivets, screws, staples or any other devices configured to couple with support rails500. Rails102(including support rails500) may also be coupled to deck20through deck fasteners450as described further below. Support rails500may be generally vertically oriented and may extend generally perpendicular to the deck20of boat10, yet in other embodiments, support rails500may extend in any direction and have any shape. For example, support rails500may curve or bend throughout the rail system100and may be angled relative to deck20. Support rails500may be altered in shape or orientation to provide additional support or add stylized features or designs to rail system100. In the illustrated embodiment, rails102comprise an exterior130and an interior110. In other embodiments, rails102may be a solid piece without an open interior110. Furthermore, the rails102as illustrated are generally rectangular in shape, but in other embodiments may have any shape cross-section. Rail exterior130may comprise texture or features such as grooves, bumps, ridges, or any other surface features. Such surface features on the rail exterior130may provide a surface that is more appealing for users to interact with, and may also provide additional grip or adhesion to other components of rail system100. Furthermore, rail exterior130may be coated with various materials to provide additional adhesion, weather/damage resistance, or improved tactile features. Rails102may also comprise a number of coupling features160(SeeFIG.3).
Coupling features160may be used to couple a variety of attachments to rails102, including canopies, canvases, colored accents, bumpers, rubber inserts, or other attachments as are known in the art. Further, as described herein, rails102may include one or more exterior pockets which receive accent pieces or accessories. Referring toFIGS.2-3the rail couplers200of the rail system100include a number of rail caps202, which may also be referred to simply as caps202, and a number of rail attachments400. In the illustrated embodiment, caps202comprise a cap exterior230, interface portion250, and at least one interactive member210. Caps202are generally configured to extend around three sides of rails102and to couple to rails102. In the illustrated embodiments, caps202acouple to rails102athrough interactive member210. Interactive member210may be any feature that allows caps202to be coupled to rails102. As illustrated in exemplary caps202a,202b, and202c, interactive member210may be a protrusion that extends from cap202into the interior110of rail102. As shown, the protrusion has a flange-like portion that retains the interactive member210within interior110. The interactive member210may be configured to be flexible or deformable such that the interactive member210may be pushed/squeezed into the interior110of rails102through an opening in rails102, wherein the opening within rails102is generally smaller than the resting state of the interactive member210. Accordingly, the interactive member210may expand to a resting state upon passing through the opening to secure the cap202to the rail102. The interactive member210may be inserted into rails102through a hole, bore, or opening in the rail102, or may be slid into interior110starting from an end of rail102and slid along the length of rail102. Interactive members210may be continuous along the entire length of the cap202, or may be discrete elements located at various points along the cap202. Furthermore, interactive members210may interact with any side of the rail102, and may interact with more than one side in a given embodiment. The interface portion250of cap202is configured to interface, couple, or otherwise engage with the skin300and to couple the skin300to rail102when the caps202are coupled to rails102. Interface portion250may comprise surface features such as bumps, ridges, or other textures to provide additional grip to skin300. In an exemplary embodiment, the interface portion250of cap202is pressed against the skin300by a force caused by the interactive member210being retained within rail102. Furthermore in the exemplary embodiment, interactive members210do not extend through skin300, and only the interface portion250of the caps202couple the skin300to the rails102. This configuration allows for the skin300to be moved by simply removing the caps202from the rail system100. In other embodiments, interactive member210may pass through skin300to further secure skin300to rails102. In an exemplary embodiment, the caps202are composed of an elastomer and may be snapped, stretched, or pulled over/around rails102to couple the caps202to the rails102. Further in the exemplary embodiment, the caps202are made of a resilient material, such that the force caused by retention of interactive member210within rail102causes the cap202to bend slightly, and the resiliency of the cap202material causes a pressure on skin300when the skin300is positioned between the cap202and the rail102. In an exemplary embodiment, the caps202are formed as a polymer extrusion. 
In other embodiments, caps202are formed as a coextrusion with other polymers or materials to provide additional features on caps202. Caps202may be made of a metal, polymer, composite, wood, or any other suitable material. In the instances where the caps202are not generally flexible, the caps202may be slid into rails102or may feature a joint and/or a locking mechanism to secure the caps202to the rails102. In other embodiments, caps202may comprise a hinge or a living hinge (not shown) which may allow caps202to be bent or otherwise moved relative to the rails102in order to engage or disengage with skin300. In embodiments where the rails102are not generally rectangular in shape, caps202may be configured to match the shape of rails102. The surface230of rail couplers200may comprise various shapes, textures, colors, or features. As illustrated inFIG.3, the surface230of rail cap202bis generally rounded in shape, and may function as a bumper or may provide a user with a more comfortable grip on cap202b. Cap202bis configured to couple with rail102b. The surface230of rail couplers200may be coated with material, such as paint or protective coatings, in order to achieve the desired surface features, or the rail couplers200themselves may be formed with varied surfaces230, for example with a coextrusion process. For example, a multi-color rail coupler200may be produced with a coextrusion process. Furthermore, rail couplers200may be shaped and textured to meet grab rail compliance requirements (e.g. having a minimum/maximum diameter). Rail couplers200may also be embodied as rail attachments400. Rail attachments400may differ from caps202in that rail attachments400may not generally extend around at least three sides of rails102, but may extend around multiple sides of rails102. As shown inFIG.2, rail attachments400may be used to couple skin300to a rail102in situations where a cap202may not be easily slid around rail102(e.g. instances where rail102is coupled directly to the deck20). Skin300may be coupled to rail102through an interfacing portion of rail attachment400in a similar way to cap202, or the skin may be otherwise attached to rail attachment400. Rail attachment400acomprises an interactive member410configured to interact with the rail102to couple rail attachment400to rail102. In the illustrated example, the interactive member410is a protrusion that extends into an interior110of the rail102, similar to the interactive member210of cap202. In an exemplary embodiment, the skin300is composed of sheet metal, and may also comprise coatings, paint, decals, other layers of material, or other surface features. In other embodiments, the skin may be composed of any material suitable to make a wall for the pontoon boat10, including polymers, metals, composites, glass, or wood. In the event that any portion of the skin should be replaced, the rail couplers200may be removed from the rails102or otherwise moved relative to rails102, which releases the skin300. A new skin300may then be positioned against the rails102, and the rail couplers200may be coupled onto rails102, thereby coupling the skin300to the rails102. In this way, the skin300may be added, removed, or replaced without having any impact on the rails102. In other embodiments, an adhesive or a tape may be applied between the skin300and the rails102and/or the rail couplers200. For example, double-sided tape may be positioned on the rails102or rail couplers200before positioning the skin300against the rails102. 
The tape/adhesive may be configured to provide additional grip or thickness to reduce vibration of the skin300when the boat10is in use. As illustrated inFIG.2, the skin300extends generally between two rails102, illustrated as an upper rail104and a lower rail106. In further illustrated embodiments of rail system100where only one rail102is depicted, it should be understood that the skin300may still extend between two or more rails102. Any combination of disclosed embodiments of rails102, or variations thereof, may be used within rail system100as upper rails104, lower rails106, middle rails105, support rails500, or any rail in any other position within the frame. Furthermore, any disclosed embodiments of support rails500may be used as rails102. Any features illustrated or otherwise disclosed as being part of an upper rail104, lower rail106, middle rail105, or support rail500may also be included on any other type of rail in rail system100. Referring toFIG.4, the interactive member210of cap202may include a protrusion, configured to couple the cap202to the rail102through an exterior coupling instead of being received in the interior of the rail102. The rail102may comprise an external coupling feature120to interact with interactive member210. In the illustrated embodiment, cap202ccomprises two interactive members210, an interior protrusion configured to couple to rail102by entering the interior110of rail102c, as well as an exterior protrusion configured to couple to rail102cby interfacing with an exterior coupling feature120. The interactive members210may also be described as cap coupling features, as they may assist in coupling cap202to rail102. Similarly, interior features (the walls of interior110), exterior coupling features120, and pocket180(described below) may be described as rail coupling features, as they may also assist in coupling the cap202to the rail102. In other embodiments, cap coupling features and rail coupling features may be any compatible systems for coupling the cap202to the rail102. Examples of coupling features include snaps, buttons, zippers, locks, detents, joints, adhesives, staples, or other common coupling devices as are known in the art. In another example, the rail102may comprise protrusions and the cap202may comprise recesses to receive protrusions. Referring now toFIGS.5-6, the interactive members210of rail coupler200may only comprise exterior protrusions as the cap coupling features, and rail102daccordingly only comprises exterior coupling features120as rail coupling features, as shown in the exemplary cap202d. In the illustrated embodiment, interactive members210are at least partially curved to assist in positioning the interactive members210within exterior coupling features120. In such an embodiment, the cap202dmay be flexed/deformed to stretch around the rail102din order to couple the cap202dto the rail102. Once coupled to the rail102d, cap202dmay be removed by flexing the cap202dto a point at which an end of the interactive members210exits the exterior coupling features120. In this way, the cap202dis retained on rail102duntil an outward force is applied. As shown inFIG.5, the entire cap202dmay be shaped such that the ends of cap202dangle inward toward one another. As illustrated, the cap202dcomprises a first width W1and a second width W2, and the rail102dcomprises a third width W3wherein W1and W3are greater than W2. W3may be less than or equal to W1.
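As a purely editorial illustration (not part of the original disclosure), the recited width relationship among the cap 202d and the rail 102d can be written compactly as:

$$W_1 > W_2, \qquad W_3 > W_2, \qquad W_3 \le W_1 \ (\text{in some embodiments}).$$

That is, the opening of the cap (W2) is narrower than both the rail (W3) and the cap's overall width (W1), which is why the cap must flex to pass over the rail and is then retained on it.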
Such a configuration allows for the resiliency of the material within cap202dto cause the interface portions250to press against the skin300and rail102.FIG.4illustrates only one skin300coupled to rail102, but in other embodiments another skin300may be located on the opposing side of rail102to form a double walled system. The skin300on the opposing side may be coupled to rail102with any combination of interactive members210and interface portions250. Referring toFIGS.7-8, sectional views of support rails500are shown. As mentioned previously, the cross sections and features shown inFIGS.7-8may also be used for rails102, and the sections shown inFIGS.2-5may be used for support rails500. Support rails500may be positioned at various points along rail system100to support rails102, and any other rails or features within rail system100. Support rails500may be made from similar or identical materials and methods as rails102. In the illustrated embodiments, support rails500may be configured to accept rail attachments400or any rail coupler200. Similar to interactive members210, rail attachments400may couple to support rails500by inserting an interactive member410into a support rail interior510. Furthermore, as was the case with cap and rail coupling features, the rail attachment400may be coupled to the support rail500through any appropriate coupling mechanisms. Rail attachment400may also comprise an exterior surface430, which may comprise various colors, accents, paint, coatings, textures, or other external features. Furthermore, rail attachment400may be a bumper or a rubber insert. As exemplified in the figures, rail attachment400amay extend beyond the surface of support rail500a(FIG.6), may be flush with the surface of support rail500bas exemplified in rail attachment400b(FIG.7), or may be recessed relative to the surface of the support rail500. It should be recognized that rails102may comprise similar attachment features/rail connectors to support rails500in order to attach additional attachments to rails102. For example, support rail500bcomprises a support rail coupling feature560configured to couple with external attachments. Rail couplers200such as rail attachments400may be attached, detached, or otherwise moved relative to rails102such as support rails500without disassembling the rails102themselves or the rail system100. Referring toFIG.9, rail102, in this embodiment illustrated as a support rail500b, may be coupled with an attachment device420. Attachment device420may be any device a user or manufacturer may desire to attach to rail102. Exemplary attachment devices420include position sensors (sonar, IR, etc.), motion sensors, light sensors, lighting systems, speakers, cameras, mirrors, cup holders, fishing rod holders, coolers, ornamental decorations, recreational devices (e.g. a basketball hoop), extendable tables/countertops, or any other suitable attachment device. In the illustrated embodiment, attachment device420is coupled to the rail500bthrough rail attachment400and interactive member410. In such an embodiment, the attachment device420may be configured to slide along an axis parallel to the rail102to which it is attached. For example, an attachment device420coupled to a top rail104may be configured to slide generally horizontally along the top rail104, but may be restricted from moving vertically relative to top rail104. In yet other embodiments, attachment device420may be locked into a single position.
In still yet other embodiments, attachment device420may be configured to couple to any rail102through any of the coupling devices disclosed herein. Attachment device420may couple to a rail102through a coupling mechanism integral to the attachment device420, or through a rail coupler200. In embodiments, attachment device420may be a “LOCK-N-RIDE” coupler to attach an accessory such as position sensors (sonar, IR, etc.), motion sensors, light sensors, lighting systems, speakers, cameras, mirrors, cup holders, fishing rod holders, coolers, ornamental decorations, recreational devices (e.g. a basketball hoop), or extendable tables/countertops to rail system100. Additional details regarding the “LOCK-N-RIDE” coupler are found in U.S. Pat. No. 7,222,582, the disclosure of which is incorporated herein by reference. Rail system100would include an opening sized and shaped to cooperate with the “LOCK-N-RIDE” coupler. Referring toFIG.10, cap202emay also comprise attachment features. In the illustrated embodiment, rail cap202ecomprises a cap attachment feature260that is configured to couple with an attachment feature760on a cover700for boat10. Cover700may couple to rail102through cap202eand may be used to provide shade or other forms of cover on boat10. Cap attachment feature260may also be configured to couple with other attachments such as bumpers, facades, colored accents, lighting, etc. Referring toFIG.11, a cross-sectional view of an exemplary version of rail system100is shown. In an illustrated embodiment, the cross-sectional view of the rail system100may be taken through one of lines L1or L2as seen inFIG.1. As shown, the rail system100may comprise an exterior and an interior skin300with each skin coupled to an opposing side of support rails500, or may comprise only a single skin300. The interior and exterior skins300may be composed of different materials, and may have different features such as paints, coatings, textures, shapes, and corrugations. In sections where there is no skin300attached to support rails500, rail couplers200such as rail attachments400(e.g. rubber stoppers/bumpers) may be attached. Rail attachments400may also be included between multiple skins300or over a portion of one skin300. Furthermore, fasteners such as rivets or staples may be used to secure skins300to support rails500. Referring now toFIG.12, yet another embodiment of a rail coupler200and rail102is illustrated. As illustrated, the rail coupler200inFIG.12is rail cap202f, which is similar to rail cap202d, but additionally comprises a transparent or translucent portion275between interface portion250and interactive member210. The rail102fincludes a recess in which an illumination source175is received. The illumination source175is positioned generally next to transparent portion275of rail cap202fand is configured to shine through transparent portion275. In other embodiments, the illumination source175may be positioned on the bottom of a rail102, or may be otherwise angled downward to provide illumination in a downward direction (e.g. courtesy lights) instead of/in addition to illumination in an outward direction. In such embodiments, the rail cap202may extend around the bottom of the rail102to secure the illumination source175to the rail102, and the transparent portion275may be positioned on the bottom face of the rail cap202.
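As noted in the passage that follows, illumination source 175 may be programmable to shine with different colors. Purely as an illustrative, non-limiting sketch (not part of the patent disclosure), the following Python fragment shows one way a small controller could cycle a generic RGB LED strip such as illumination source 175 through preset accent colors; the LedStrip class and its set_color method are hypothetical stand-ins for whatever driver hardware a particular strip actually uses.

import time

class LedStrip:
    """Hypothetical stand-in for an RGB LED strip driver (e.g., illumination source 175)."""
    def set_color(self, red: int, green: int, blue: int) -> None:
        # A real driver would push the 0-255 RGB values to the strip hardware here.
        print(f"strip color set to ({red}, {green}, {blue})")

def cycle_accent_colors(strip: LedStrip, dwell_seconds: float = 1.0) -> None:
    """Cycle the strip through a few preset accent colors."""
    presets = [
        (255, 255, 255),  # white courtesy lighting
        (0, 0, 255),      # blue accent
        (0, 255, 0),      # green accent
        (255, 0, 0),      # red accent
    ]
    for red, green, blue in presets:
        strip.set_color(red, green, blue)
        time.sleep(dwell_seconds)

if __name__ == "__main__":
    cycle_accent_colors(LedStrip(), dwell_seconds=0.5)

Any actual implementation would depend on the particular LED strip and the boat's electrical system, and is outside the scope of the patent text.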
Additionally, the cap202may not extend around the bottom of the rail102, and the illumination source175may be otherwise secured to the rail102, such that the illumination source175may shine without passing through a transparent portion275. The illumination source175and transparent portion275may extend along the entirety of rail102fand cap202frespectively, or they may be positioned at discrete points along rail102fand cap202f. In an exemplary embodiment, transparent portion275is a generally clear or transparent material within cap202f, and the illumination source175is an LED strip coupled to the rail102f. In other embodiments, transparent portion275may be an additional element such as glass, an additional transparent polymer, or another form of window that is coupled to cap202f. Furthermore, transparent portion275may extend throughout any portion of the cap202f, including the entirety of cap202f. Such an embodiment would allow a user to see other portions of the rail102f, including other features on the surface of rail102fbeneath the cap202f. In yet other embodiments, transparent portion275may provide visible access to colored portions or accents of rail102finstead of an illumination source175. The illumination source175may be any device configured to emit light, such as lightbulbs or other phosphorescent, fluorescent, or luminescent materials. Furthermore, illumination source175may be movable relative to the rail102fsuch that the illumination source175may be replaced or removed. Illumination source175may also be programmable to shine with different colors, as is known in the art. Illumination source175may be coupled to rail102fthrough adhesives, or by the cap202f. Interior110of rail102fmay comprise wires, power sources, or other electronic components to electrically couple to illumination source175. In embodiments, transparent portion275is coextruded with the remainder of cap202f. Referring toFIG.13, another embodiment of a lower rail106is disclosed. Rail102kis configured to be coupled to the deck20of the boat10through fastener450. In the illustrated embodiment, fastener450is a screw with a head455, the head455configured to interface with an exterior coupling feature120of the rail102k. The fastener450extends from the head455through the interior110of rail102kand into deck20. The fastener450may be coupled to the deck20of the boat through a nut and washer460. In an exemplary embodiment, a number of holes are drilled into the deck20, a rail coupler200and a lower rail106are positioned along the holes, and fasteners450are then used to couple the lower rail106and coupler200to the deck20through the holes. Rail102kalso comprises additional exterior coupling features120to couple with rail couplers200. In other embodiments, fastener450may be a rivet, bolt, nail, staple, or other mechanism configured to couple a rail102to the boat10. Referring now toFIGS.14-15, exemplary sectional views of an enclosure system80are shown, comprising an upper rail104, a middle rail105, and a lower rail106. In the illustrated embodiment, skins300extend between the upper rail104and the middle rail105, as well as between the middle rail105and the lower rail106. Any embodiments of rails102may be used for the upper rail104, middle rail105, and lower rail106. In the illustrated embodiment, lower rail106is coupled to the deck20through fastener450. Furthermore, lower rail106is illustrated as rail102gconfigured to couple with cap202g.
Cap202gmay be coupled to rail102gbefore the rail102gis coupled to the deck20through fastener450, and fastener450may extend through cap202g. Cap202gmay be configured similarly to cap202d, but with an additional opening to accommodate passage of fastener450through the cap202g. The skin300may be coupled to the lower rail106by bending the interface portion250of cap202gaway from rail102g. Furthermore, lower rail106may be coupled to deck20before or after the skin300is coupled to lower rail106. FIG.15illustrates a similar embodiment toFIG.14, but with an additional skin300positioned on the interior of enclosure system80. As illustrated, skin300may extend across the entire height of the enclosure system80as a single piece, or may be composed of multiple pieces of skin300. In the illustrated embodiment, the middle rail105is configured to receive rail attachments400on both sides of rail102l. Accordingly, rail102lmay couple to zero, one, or two skins300through rail attachment400. FIGS.16-17illustrate various embodiments of lower rail106. InFIG.16, rail102hcomprises an external rail coupling feature120, which is illustrated as a protrusion. In the exemplary embodiment, cap202hcomprises a number of interactive members210configured to interact with external rail coupling feature120to couple the cap202hto the rail102h. In this embodiment, the fastener450may not pass through the rail coupler200on the lower rail106, and the rail coupler200could be coupled to the rail102after the rail102hwas fastened to the deck20. The cap202hmay be coupled to rail102hby bending or sliding a portion of cap202haround the interactive member210. As shown inFIG.17, lower rail106may also be configured in a similar fashion to rail102l, and may be configured to couple with a number of rail attachments400through interactive members210configured to be protrusions. In this embodiment, rail102mcomprises an external coupling feature120configured to receive the head455of fastener450. Similar to rail102h, rail102mallows for the attachment of rail couplers200to the rail102without interfering with the fastener450. Rail attachments400may be coupled to rail102mbefore or after the rail102mhas been coupled to the deck20through fastener450. Fastener450may also be configured to extend through rail102min a manner similar to rails102hand102k. Referring toFIGS.18-19, other embodiments of rails102are shown. Both rails102iand102jcomprise an exterior pocket180configured to interface with the skin300. Skin300may be inserted into exterior pocket180to couple the skin300to the rail102without a rail coupler200. As shown inFIG.18, skin300may comprise tape or an adhesive350to secure the skin300within the pocket180and prevent movement of skin300within pocket180. The tape350may be single sided tape and may primarily provide additional thickness to the skin300, or the tape350may be double sided tape to provide both thickness and adhesion. Both rails102iand102jcomprise exterior coupling features120which may be configured to couple the rails102iand102jto the deck20through fastener450. Rail couplers200may still be coupled to rails102iand102j. Different sides of rails102may comprise different features, such as external coupling features120or pockets180to couple with skin300or other attachments as needed. Accordingly, any of the features shown in any of the rails102a-mmay be used in combination with any other embodiment of rails102a-m.
In embodiments, rail couplers are also included to further secure skin(s)300to rails102iand102j, to provide protection to rails or skins, and/or to provide accent color or lighting features to boat10. Referring toFIGS.20-21, methods for adding a skin300to an enclosure system80of a boat10are shown. Addition method1100discloses the steps of providing a frame1100, placing skin against the frame1120, and then attaching a rail coupler to the frame1130. The frame of step1100may be the rail system100, otherwise referred to as a frame or skeletal system as disclosed above. The skin300is then positioned against the frame100along at least one side of the rails102of the frame100, and then a rail coupler200is coupled/attached to the frame100. When the rail coupler200is attached to the rails102of frame100, the skin300is positioned between the rail coupler200and the rail102as discussed above and shown in the illustrated embodiments. In an exemplary embodiment, the rail coupler200is movable relative to the frame100and accordingly may be removed or attached to allow for the decoupling or coupling respectively of skin300to frame100without dismantling or moving the frame100relative to the boat10. A skin300may be removed and replaced through replacement method1200. Replacement method1200comprises the steps of moving a rail coupler1210, removing a first skin1220, positioning a new skin1230, and moving a rail coupler1240. In this process, a rail coupler200is first moved relative to the frame100to allow for the removal of the skin300. The moving of rail coupler200may comprise the steps of decoupling the rail coupler200from the rail102, or otherwise bending or moving the rail coupler200away from the rail102to allow for the removal of skin300. In some embodiments, the rail coupler200may even be broken, in which case a new rail coupler200would be used in step1240. Once the first skin300is removed, a new skin300may be positioned along the frame100, and a rail coupler200may be coupled to the frame100to couple the new skin300to the frame100. The rail coupler200that is attached may be a new rail coupler200or the original rail coupler200from the first step1210. Referring toFIGS.22-25, various embodiments of a pontoon boat10are disclosed with multiple different colored accent features. Since caps202and rail attachments400may be added or removed without dismantling the rail system100, users have a large degree of customizability regarding the exterior of the boat10. Colored accents900may be added onto the boat10as part of the cap202, or as an attachment similar to rail attachments400. InFIGS.10and12, colored accent900extends generally along the top rail102. InFIGS.11and13, colored accent900extends generally along top rail102and bottom rail103. Pontoon boat10also may comprise additional horizontal rails102which may be configured to receive attachments in a similar manner to rail attachments400. While this invention has been described as having exemplary designs, the present invention can be further modified within the spirit and scope of this disclosure. This application is therefore intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains. | 34,519 |