Since the publication of ISO 26262 in November 2011, which put vehicle and passenger safety ahead of performance across vehicle standards, vehicles have been fitted with an ever-growing range of electronic devices and have steadily become smarter.
As these devices now go beyond protecting the vehicle and driver to serving driver convenience, automotive OEMs are accelerating their development of autonomous vehicle technology.
In Europe, the three major automobile OEMs BMW, Mercedes-Benz, and Audi have already announced a plan to jointly develop an autonomous vehicle system with a team of 1,200 people. In Korea, the Ministry of Trade, Industry and Energy and the Ministry of Science and ICT are joining forces to invest in core and foundational autonomous driving technologies. Development of autonomous vehicle technology by automobile OEMs therefore looks set to accelerate beyond the Level 3 stage already achieved and on toward Level 4.

[Image: autonomous driving levels, step by step]
However, as the level of vehicle autonomy rises, safety accidents are occurring due to misuse of autonomous driving functions or unexpected external factors that exceed the vehicle's own protective functions.
In 2018, a driver in the United States was caught on CCTV footage sleeping in a Level 3 vehicle on autopilot with the seat reclined. At Level 3, the driver must still remain aware of the environment outside the vehicle. In other words, if an accident occurs while the seat is reclined, the driver is responsible, because with the seat reclined the driver cannot perceive the vehicle's external environment.

[Image: Tesla's Autopilot feature lets a driver sleep while travelling on the highway at 120 km/h]
While ISO 26262 focuses on identifying safety issues in advance, such as vehicle system failures and SW/HW design bugs, SOTIF (ISO 21448) addresses hazards that occur without any failure: performance limitations of the intended safety functions, unintended system behavior, and reasonably foreseeable user misuse.
For this e4ds News autonomous driving special, we interviewed Minhyuk Son, a manager at Siemens who presented the "Simulation Platform for Autonomous Driving Development and Verification" at Automotive Innovation Day last July, about the SOTIF (ISO 21448) standard.
1. Siemens offers Prescan, a simulation platform for autonomous driving development and verification. Could you introduce this solution and explain how it complies with the SOTIF standard?
Siemens' recently announced simulation platform for autonomous driving is based on software called Simcenter Prescan.
Siemens acquired TASS International, headquartered in Helmond, the Netherlands, in April 2018, and Prescan is software for ADAS and autonomous driving developed by TASS.
Prescan is named after Pre-crash Scenario Analysis and is software that covers everything from concept design through development and verification of various ADAS systems.
It creates and analyzes the needed simulation environment as a 3D virtual model by quantitatively configuring the road and traffic infrastructure, vehicle behavior, the vehicle's sensors and communication systems for detecting the surroundings, and the weather and environmental conditions required to simulate various ADAS and autonomous driving scenarios.
The ISO 21448:2019-01 version of the vehicle safety standard is intended to apply to the intended functions of safety systems at SAE Levels 1 and 2 (e.g., LDW, AEBS).
Higher levels of autonomy may be considered in the future but are not covered in the current version. Simcenter Prescan already provides sensors, basic algorithms, and scenarios for test protocols such as Euro NCAP, ISO, and NHTSA that can be used to evaluate ADAS systems at SAE Levels 1 and 2.
Of course, simulations for higher levels of autonomy are possible, and this will depend on the user's scope and intended use.
2. I understand that technical content of SOTIF required for autonomous driving Levels 3-5 was discussed at SAE in March this year, covering topics from machine learning to HD map validation. Were any of the newly discussed topics applied to Siemens' simulation solutions?
I don't know exactly what was covered at SAE in March this year. Regarding the machine learning and HD map applications you asked about: the German Research Center for Artificial Intelligence (DFKI) and TASS conducted joint research on deep learning with the Prescan software, generating synthetic images in Prescan and using them to train a convolutional neural network; the results were presented at several conferences in 2017.
This was the first use case of Prescan for machine learning, and since then many research institutes and companies have used Prescan the same way. Since last year, Korean Tier 1 suppliers have also been using Prescan to generate large volumes of training data for the AI behind autonomous driving cameras.
Additionally, high-precision maps are an essential element for autonomous driving. In Europe, research and development is actively underway to connect high-precision maps and satellite navigation systems with autonomous driving, centered around Here, a company headquartered in Amsterdam, the Netherlands. Siemens, in collaboration with these companies, will soon release an HD Map plugin that will allow HD Maps based on Here's high-precision map data to be imported into the Prescan simulation environment.
In addition, importing maps and scenarios in open formats is necessary for autonomous driving simulation, and Prescan supports the latest version of OpenDRIVE, a high-precision map format widely used in simulation environments.
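To illustrate what the OpenDRIVE format looks like at the file level, here is a minimal sketch that reads road records with Python's standard library. The XML fragment is a hand-written toy example for illustration only, not output from Prescan or any real map.

```python
import xml.etree.ElementTree as ET

# Hand-written toy OpenDRIVE fragment (illustrative, not Prescan output):
# a header plus two <road> records with id and length attributes.
XODR = """<?xml version="1.0"?>
<OpenDRIVE>
  <header revMajor="1" revMinor="6" name="toy_map"/>
  <road name="main street" id="1" length="250.0" junction="-1"/>
  <road name="side street" id="2" length="80.5" junction="-1"/>
</OpenDRIVE>"""

def road_lengths(xodr_text):
    """Return {road id: length in metres} from an OpenDRIVE document."""
    root = ET.fromstring(xodr_text)
    return {r.get("id"): float(r.get("length")) for r in root.findall("road")}

lengths = road_lengths(XODR)
print(lengths)                  # {'1': 250.0, '2': 80.5}
print(sum(lengths.values()))    # total mapped road length: 330.5 m
```

A real importer would also walk the per-road geometry, lane, and junction elements, but the flat XML structure shown here is what makes the format easy to exchange between tools.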
3. If ISO 26262 is a standard for the vehicle itself, SOTIF (ISO 21448) seems intended, through simulation, to compensate for the lack of real-world autonomous driving experience. I understand that various accident cases are collected and applied for this purpose. How many cases have been applied, and could you introduce some of them?
Application of actual accident cases will vary depending on how the customer uses the tool. The simulation environment itself does not ship with examples based on real-world incidents.
However, you can easily reproduce actual accident scenarios in Prescan by reading data from Germany's GIDAS (German In-Depth Accident Study) or China's CIDAS, which collected actual accident data, through a plugin.
These real-world accident analyses are of great help not only for learning autonomous driving algorithms but also for testing autonomous driving performance.
4. Various simulations are necessary for autonomous driving, but what simulations are essential and how are they being simulated?
Perhaps most important is how accurately the virtual environment reproduces the physics.
The type of simulation to run varies with the complex and diverse requirements at each stage of autonomous driving development and verification, and the reliability of the simulation and its results grows with how closely the simulation matches reality.
It must accurately express the physical characteristics of the sensor models, reflect the vehicle dynamics that govern the vehicle's behavior and the tire-road interaction, and physically reproduce the road infrastructure, buildings, pedestrians, and surrounding (non-target) vehicle behavior needed to compose the driving environment.
Prescan can build an environment without theoretical limitations on the number of simulation targets (although it is subject to limitations on computer specifications for simulation), and can simulate by scalably adjusting it as needed.
Ultimately, what matters in simulation is how accurately and how quickly we can implement what we need, to the level of fidelity we need.
5. Accident cases will presumably become variables in autonomous vehicle simulations, yet the Level 3 vehicles currently in production have not fully encountered such cases in reality. How far is Siemens' solution prepared to simulate?
If the autonomous driving algorithm supports it, an environment can be created in which simulation up to Level 5 is possible. If an autonomous driving algorithm up to Level 5 has been implemented, there is no reason it could not be applied to the Prescan simulation platform.
6. I heard that around 20 sensor models were applied to the simulation during the presentation. What criteria were used to select and apply the 20 or so sensors currently in use, and can we find out information about the sensors that were applied? Also, new sensors are expected to continue to be released in the future. Will they be applied continuously?
Sensors for ADAS and autonomous driving mounted on actual vehicles include ultrasonic, radar, lidar, camera, and IMU sensors.
The 20 or so sensor models provided by Prescan that I introduced include, in addition to these real sensors, various models that exist only in simulation.
For example, by mathematically processing the image pair captured by a stereo camera, you can compute the distance to an object, which cannot be determined from a single camera.
By also using a depth camera in the simulation, you obtain the true distance of every object detected in the simulated environment, so you can compare that ground truth against the stereo camera's distance estimate.
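The stereo comparison described above rests on the standard pinhole stereo relation Z = f·B/d (focal length times baseline, divided by disparity). A minimal numerical sketch follows; the focal length, baseline, and disparity values are made-up illustration numbers, not Prescan parameters.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: Z = f * B / d.
    focal_px     - focal length in pixels
    baseline_m   - distance between the two cameras in metres
    disparity_px - horizontal pixel shift of the object between images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 800 px focal length, 0.30 m baseline, 20 px disparity.
ground_truth_m = 12.0  # what a simulated depth (ground-truth) camera reports
estimate_m = depth_from_disparity(800, 0.30, 20)   # 800 * 0.30 / 20 = 12.0 m
print(estimate_m, abs(estimate_m - ground_truth_m))
```

In a real evaluation the stereo estimate would carry matching and quantization error, and the simulated depth camera provides the reference against which that error is measured.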
There are too many sensors to list in detail, but broadly categorized they comprise two Ideal sensor types, six Detailed sensor types, seven Ground Truth sensor types, two other sensor types, plus physics-based radar, camera, and lidar models and V2X communication sensors.
Because the variety and accuracy of sensor models are of the utmost importance, Siemens keeps expanding them through continuous R&D and collaboration with sensor companies; more will be added in the future.
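To illustrate the difference between those categories (this is a generic sketch, not Prescan's actual API or class names): an ideal sensor reports the simulated true value unchanged, a detailed sensor layers an error model on top, and a ground-truth sensor exposes simulator-internal state for comparison.

```python
import random

class IdealRangeSensor:
    """Ideal sensor: reports the simulated true distance unchanged."""
    def measure(self, true_distance_m):
        return true_distance_m

class DetailedRangeSensor:
    """Detailed sensor: adds Gaussian noise and a maximum range,
    a crude stand-in for a physics-based error model."""
    def __init__(self, noise_std_m=0.05, max_range_m=100.0, seed=0):
        self.noise_std_m = noise_std_m
        self.max_range_m = max_range_m
        self.rng = random.Random(seed)

    def measure(self, true_distance_m):
        if true_distance_m > self.max_range_m:
            return None  # target beyond sensor range: no detection
        return true_distance_m + self.rng.gauss(0.0, self.noise_std_m)

# The simulator's internal state serves as ground truth, so the detailed
# sensor's noisy estimate can be scored against it directly.
truth = 42.0
ideal = IdealRangeSensor().measure(truth)
noisy = DetailedRangeSensor().measure(truth)
print(ideal, noisy)
```

The value of the ground-truth category is exactly this pairing: every noisy measurement can be scored against the simulator's known state, which is impossible on a real test track.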
7. In addition, if you have any other opinions for more efficient autonomous driving development and verification from the perspective of automobile manufacturers, please let us know.
The development and verification of autonomous driving is too huge a mountain to approach with existing processes. It is hard to estimate how much time and effort climbing it with a conventional product development approach would take.
This is because collaboration between departments is essential, continuous and flexible cross-verification between simulation and real-world testing must be performed, and data must keep being collected and analyzed even after the product ships in order to complete autonomous driving.
If we can build everything from physics-based sensor simulation to autonomous driving semiconductor SoC verification and actual ADAS test certification testing using a single platform, it will certainly be a great help in autonomous driving development.
Under the concept of "From Chip to City", Siemens is the only company spanning everything from the design and production of the semiconductor chips required for autonomous driving to the software, platforms, cloud services, and transportation design needed to operate autonomous cities and their mobility services. We have been steadily acquiring and investing in companies to achieve this goal, and we still are.