Apr 30, 2020 8:50:30 PM
A future with driverless vehicles has been predicted for quite some time. What is surprising, though, is that despite bold claims by industry experts and futurists that autonomous vehicles would be on our roads by 2020, that future still seems decades away. Several key technological challenges continue to hold us back from achieving this feat. Nevertheless, leading companies such as Waymo and Tesla, among many others, are investing heavily to realize this vision.
The driving force behind the automotive industry's move towards automation is the desire to curb the rise in road accidents caused by the growing number of vehicles on the roads. Globally, road traffic accidents are among the leading causes of death, and an estimated 94% of accidents are attributable to human error. The global autonomous car market is estimated to have generated revenue of $54.23 billion in 2019-20 and is projected to reach $556.67 billion by 2026, a CAGR of 39.47% over the forecast period*. All of this has led to a surge in R&D activity across the automotive industry aimed at achieving the fully autonomous car.
ADAS Systems and Different Sensors
Current vehicles with ADAS capabilities come equipped with sensors such as cameras, radar, and LIDAR, which bring us one step closer to a fully autonomous vehicle. In traditional ADAS systems, however, these sensors work in silos: each operates independently towards its own purpose without relaying information to the others.
Cameras are cost-efficient and help detect road signs and keep the vehicle in its lane. Their weaknesses are poor visibility in adverse weather such as snow, rain, and fog, and a lack of depth perception.
Radar is a highly accurate way to measure the distance and speed of objects on the road. However, these sensors have a limited field of view and provide lower resolution than a camera. In snow and misty weather, the efficiency of millimetre-wave radar is also diminished by attenuation.
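As a rough illustration of how a radar derives these two quantities, the sketch below uses a simplified time-of-flight and Doppler model with made-up numbers; a 77 GHz carrier is assumed, as in typical automotive millimetre-wave radar, and it is not tied to any specific sensor.

```python
# Illustrative only: how a radar turns a round-trip delay into range and a
# Doppler shift into relative speed. Numbers are invented; a 77 GHz carrier
# is assumed, as in typical automotive millimetre-wave radar.

C = 3.0e8  # speed of light in m/s

def range_from_round_trip(delay_s: float) -> float:
    """Range = c * t / 2, since the signal travels to the target and back."""
    return C * delay_s / 2.0

def speed_from_doppler(doppler_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative speed = Doppler shift * wavelength / 2."""
    wavelength = C / carrier_hz
    return doppler_hz * wavelength / 2.0

print(f"range: {range_from_round_trip(400e-9):.1f} m")  # 400 ns delay -> 60.0 m
print(f"speed: {speed_from_doppler(5000.0):.2f} m/s")   # 5 kHz shift -> ~9.74 m/s
```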
LIDAR, on the other hand, offers high range, accuracy, and point density. These sensors are continuously rotating units mounted on top of a vehicle, sending out thousands of laser pulses every second. The result is a rapidly updating 3D point cloud that helps determine the vehicle's position with respect to the objects around it. LIDAR is also reliable to a large extent across lighting and weather conditions such as mild rain, fog, and snow. Its main disadvantage is cost, with each unit priced at nearly $75,000. In addition, because of its constantly moving parts, the system is susceptible to mechanical faults and damage.
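To make the point-cloud idea concrete, the following sketch converts a single laser return, i.e. a measured range plus the beam's azimuth and elevation angles, into a Cartesian 3D point; the function and numbers are illustrative and not specific to any LIDAR product.

```python
import math

def lidar_return_to_point(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one laser return (range plus beam angles) into an (x, y, z) point.

    A spinning LIDAR fires pulses at known azimuth/elevation angles and times
    their reflections; accumulating such points over each rotation builds the
    3D point cloud used to place the vehicle relative to its surroundings.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# Example: a return 20 m away, 30 degrees to the left, 2 degrees below horizontal
print(lidar_return_to_point(20.0, 30.0, -2.0))
```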
The pros and cons of these sensors have even led to different approaches in how Waymo and Tesla implement autonomous driving: Tesla relies on cameras and radar feeding a neural network, whereas Waymo's approach is built around LIDAR.
Sensor Fusion for Autonomous Vehicles
The individual shortcomings of each sensor type can be overcome by adopting sensor fusion. A sensor fusion system takes inputs from different sensors and combines them computationally to perceive the environment more accurately: it can be shown mathematically that the noise variance of a fused estimate is smaller than the variance of any of the individual sensor measurements. Using different sensor types together also offers a degree of redundancy against environmental conditions that could cause sensors of one type to fail. Fusing multiple sensors reduces both false negatives and false positives, giving the overall system better performance and reliability than the sum of its individual sensors.
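The variance-reduction claim can be made concrete with a minimal sketch: fusing two independent measurements of the same distance by inverse-variance weighting, the simplest static fusion rule. The sensor noise figures below are invented for illustration.

```python
# Minimal sketch: inverse-variance weighted fusion of two independent
# measurements of the same quantity. Noise figures are illustrative.

def fuse(z1: float, var1: float, z2: float, var2: float):
    """Return the fused estimate and its variance.

    Weights are proportional to 1/variance, so the noisier sensor contributes
    less; the fused variance is 1 / (1/var1 + 1/var2), which is always smaller
    than either individual variance.
    """
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_z = fused_var * (z1 / var1 + z2 / var2)
    return fused_z, fused_var

# Radar reports 25.3 m (variance 0.5 m^2), camera reports 24.1 m (variance 2.0 m^2)
estimate, variance = fuse(25.3, 0.5, 24.1, 2.0)
print(f"fused estimate: {estimate:.2f} m, fused variance: {variance:.2f} m^2")
# fused variance = 0.40 m^2, smaller than both 0.5 and 2.0
```

Because 1/(1/var1 + 1/var2) is smaller than both var1 and var2, the fused estimate is always less noisy than the best individual sensor, which is the mathematical basis of the claim above.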
One crucial aspect of an autonomous vehicle is path planning. Sensor fusion plays an essential role here by integrating sensor readings to construct a precise picture of the vehicle's state and to predict the trajectories of surrounding objects.
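One common way to maintain such a state estimate, though not necessarily the approach any particular vendor uses, is a Kalman-style filter that alternates prediction (propagating the last state through a motion model) and update (correcting it with each new sensor reading). Below is a minimal 1-D constant-velocity sketch with invented noise values, intended only to show the predict/update rhythm rather than a production pipeline.

```python
import numpy as np

# Generic 1-D constant-velocity Kalman filter: state = [position, velocity].
# Noise values are illustrative, not tuned for any real sensor.

class ConstantVelocityKF:
    def __init__(self, dt: float = 0.1):
        self.x = np.zeros(2)               # state estimate [pos, vel]
        self.P = np.eye(2) * 100.0         # state covariance (initially very uncertain)
        self.F = np.array([[1.0, dt],      # motion model: pos += vel * dt
                           [0.0, 1.0]])
        self.Q = np.eye(2) * 0.01          # process noise
        self.H = np.array([[1.0, 0.0]])    # we only measure position

    def predict(self):
        """Propagate the state forward one time step using the motion model."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z: float, meas_var: float):
        """Correct the prediction with a new position measurement z."""
        S = self.H @ self.P @ self.H.T + meas_var     # innovation covariance
        K = self.P @ self.H.T / S                     # Kalman gain
        self.x = self.x + (K * (z - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x

kf = ConstantVelocityKF()
for z in [10.0, 10.9, 12.1, 13.0]:   # noisy position readings of a moving object
    kf.predict()
    pos, vel = kf.update(z, meas_var=0.5)
print(f"estimated position {pos:.2f} m, velocity {vel:.2f} m/s")
```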
In terms of where data is processed, sensor fusion has two viable approaches (a simplified code sketch contrasting the two follows the lists below):
1. Centralized Processing
In centralized processing, all processing and decision-making are handled by a central processing unit after raw data from all the sensors is fed into it.
Advantages:
The central unit has access to raw data from every sensor, so no information is discarded before fusion, and the sensors themselves can remain simple and relatively inexpensive.
Disadvantages:
Streaming raw sensor data requires very high bandwidth on the vehicle network, the central unit needs substantial processing power, and it becomes a single point of failure.
2. Distributed Processing
In a fully distributed processing system, the data is processed at the sensor level with only metadata sent back to a central fusion ECU.
Advantages:
Far less data travels across the vehicle network, the central fusion ECU can be smaller and cheaper, and individual sensors can be added or replaced more modularly.
Disadvantages:
Pre-processing at the sensor discards raw detail that the fusion stage can never recover, each "smart" sensor needs its own compute, which raises unit cost, and keeping the distributed results time-aligned adds complexity.
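As a simplified sketch of the contrast between the two topologies, the toy classes below show what each fusion ECU would ingest; the class and method names are invented for illustration, and real automotive stacks are considerably more involved.

```python
# Toy contrast of the two fusion topologies. Interfaces and class names are
# invented for illustration only.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:          # object-level result a "smart" sensor might report
    x: float
    y: float
    confidence: float

class CentralizedFusionECU:
    """Centralized: sensors stream raw data; one ECU does all the processing."""

    def ingest_raw(self, raw_frames: List[bytes]) -> List[Detection]:
        # Detection, tracking, and fusion all run here on raw sensor data,
        # which preserves information but demands bandwidth and compute.
        return self._detect_and_fuse(raw_frames)

    def _detect_and_fuse(self, raw_frames: List[bytes]) -> List[Detection]:
        return []          # placeholder for the heavy perception pipeline

class DistributedFusionECU:
    """Distributed: each sensor pre-processes locally and sends only metadata."""

    def ingest_detections(self, per_sensor: List[List[Detection]]) -> List[Detection]:
        # Only compact object lists arrive; the ECU associates and merges them,
        # trading some raw-data detail for lower bandwidth and central load.
        merged = [d for detections in per_sensor for d in detections]
        return merged      # placeholder: real systems associate and track here
```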
Sensor fusion technology is still at an early stage, with companies working towards the cost optimization and scalability needed for autonomous systems to address mid-range and entry-level vehicle models.
Sasken has been enabling its Tier 1 and OEM partners to build such sensor fusion solutions, leveraging its expertise in Autoware. We have also successfully developed a concept-to-production-ready ADAS solution integrating mmWave radars and cameras for one of our Tier 1 automotive customers. Envisioning a future with fully autonomous vehicles on the roads, we are focused on, and investing in, accelerating our partners' journey towards Level 5 autonomy.
Learn more about our expertise in the automotive segment enabling an autonomous future.
*Source: Allied Market Research, Autonomous Vehicle Market Report