Fully autonomous vehicles are still a rare sight on public roads, although autonomous driving has long since ceased to be a mere dream of the future. The legal framework for fully autonomous vehicles is in place: the “Automated Driving Act”, in force in Germany for Level 3 vehicles since 2017, was extended by the German government in July 2021 to cover Level 4 vehicles. Autonomous vehicles are classified into several levels: Levels 0-2 denote the classic driver-operated vehicle, which from Level 2 already receives automated support, for example through a lane departure warning system. Level 3 marks the beginning of highly automated driving, in which the vehicle can already take on certain tasks (such as highway driving) independently. At Level 4, driving is fully automated and an active driver is no longer necessary. At Level 5, the vehicle finally becomes fully autonomous.

The slow entry of autonomous vehicles into road traffic is due both to the complexity of the artificial intelligence used in these vehicles and to the high safety requirements they must meet. To test the software systems of autonomous vehicles sufficiently and reliably, conventional test methods such as test drives are no longer enough: they take too long and are too imprecise. 

Franz Wotawa from the Institute for Software Technology at TU Graz explains: 

“Autonomous vehicles would have to be driven around 200 million kilometers to prove their reliability – especially for accident scenarios. That’s 10,000 times more test kilometers than are required for conventional cars.” 

Wotawa’s research team, which is addressing the safety assurance challenges required for autonomous vehicles, has explored a more efficient testing approach: simulating driving environments using ontologies. 

Ontologies for the automatic generation of test scenarios 

In Artificial Intelligence, ontologies represent knowledge bases that provide intelligent information systems with relevant information about a specific application domain, on the basis of which they make decisions. This knowledge includes, among other things, entities, i.e., uniquely definable and delimitable units, their behaviors, and their relationships to each other. Rules and constraints may also be explicitly defined. Transferred to the field of autonomous vehicles, ontologies thus enable intelligent vehicles to understand their driving environment, which is essential for the predictive and risk-minimizing behavior of vehicles in traffic. To this end, the ontologies are fed, for example, with information about the structure of roads, about road users, or about traffic control elements such as traffic lights. Based on this information, algorithms can generate a multitude of simulations to test the behavior of autonomous vehicles in these scenarios. 
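To illustrate how such scenario generation can work in principle, here is a minimal Python sketch: a toy ontology of entities and a constraint rule, from which all valid combinations are enumerated as test scenarios. The entities and the rule are invented for illustration and are far simpler than the ontologies used in real test systems.

```python
from itertools import product

# Toy ontology: each concept maps to its possible instances.
# These entities and values are illustrative only, not taken from
# the actual ontologies used by the TU Graz team.
ontology = {
    "road": ["highway", "urban street", "intersection"],
    "weather": ["clear", "rain", "fog"],
    "actor": ["pedestrian crossing", "oncoming car", "cyclist"],
}

def is_valid(scenario):
    """Constraint rule: pedestrians are not expected on highways."""
    return not (scenario["road"] == "highway"
                and scenario["actor"] == "pedestrian crossing")

def generate_scenarios(ontology):
    """Enumerate every entity combination permitted by the constraints."""
    keys = list(ontology)
    for values in product(*ontology.values()):
        scenario = dict(zip(keys, values))
        if is_valid(scenario):
            yield scenario

scenarios = list(generate_scenarios(ontology))
print(len(scenarios))  # 24 of the 27 combinations survive the constraint
```

Each generated dictionary can then be handed to a driving simulator as one test scenario; adding a single new entity to the ontology multiplies the number of scenarios automatically.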

Ontology-based testing is faster and more reliable 

With the help of ontologies, not only can countless simulation scenarios be generated and tested in a very short time, but also scenarios that are very difficult to reproduce or that humans would not even think of. In one test scenario generated by Wotawa’s team, for example, a brake assistance system failed to detect people approaching the vehicle from different directions at the same time and initiated a braking maneuver that would have injured one of them. 

“We have uncovered serious weaknesses in automated driving functions in initial experimental tests. Without these automatically generated simulation scenarios, the weak points would not have been identified so quickly: 9 out of 319 test cases examined resulted in accidents.” (Franz Wotawa)

With an ontology-based approach, safety-critical weaknesses in autonomous vehicles can thus be detected and remedied faster. 

Deception of autonomous vehicles

Meanwhile, an example from the U.S. shows a very different risk of autonomous systems: researchers modified an 80 km/h speed limit sign with a patch pattern so that an intelligent sign recognition system interpreted it as a stop sign. On a public road, the autonomous vehicle would stop abruptly and possibly cause a rear-end collision. The researchers tested several such examples, first in simulation and then in real driving environments. In 90% of the test cases, the traffic signs were actually misinterpreted. Even the smallest changes in the environment can therefore lead to misinterpretations by autonomous systems.
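The principle behind such attacks, adversarial perturbation, can be reduced to a toy example: a small, targeted change to the input flips a classifier’s decision. The linear model and feature values below are invented for illustration; the researchers perturbed the pixels of a deep sign-recognition network.

```python
# Toy adversarial perturbation on a hypothetical linear classifier.
# Weights and inputs are invented; real attacks perturb image pixels.
w = [1.0, -2.0, 0.5]   # classifier weights
x = [0.5, 0.1, 0.4]    # input the model reads as "speed limit"

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def classify(features):
    return "speed limit" if dot(w, features) > 0 else "stop"

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

# FGSM-style step: nudge every feature by a small epsilon
# in the direction that lowers the classifier's score.
epsilon = 0.25
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

print(classify(x))      # "speed limit"
print(classify(x_adv))  # "stop" - the small perturbation flips the decision
```

The perturbation changes no feature by more than 0.25, yet it is enough to cross the decision boundary, which is exactly why small physical patches can fool much larger image classifiers.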

Verifiability as a further prerequisite for the safety of autonomous vehicles

Researchers agree that autonomous vehicles urgently need to be trained to deal with such “manipulations,” whether intentional or not. After all, even once autonomous vehicles are established on the road, they will need to continue learning. Ontology-based simulations should therefore also cover such risky scenarios and verify that autonomous systems can correct themselves and make the right decision despite unknown changes in the environment. The German Federal Office for Information Security advocates harmonized guidelines that define standards not only for the development of autonomous vehicles but also for their verifiability, so that their behavior can be traced. But what is the current legal situation regarding the use of AI in the EU?

EU guidelines strive for excellence and trust 

In its White Paper on AI of February 19, 2020, the European Commission committed to promoting the adoption of AI and formulating uniform guidelines for AI-based applications, taking into account safety-critical and ethical aspects. The Commission has since presented a proposal for the world’s first regulatory framework for AI. The regulation proposes classifying AI systems into four risk groups: minimal, limited, high, and unacceptable. 

Autonomous vehicles that make decisions about people’s lives in critical cases are classified as high-risk AI systems. These systems are subject to particularly strict requirements with regard to their development and their documentation:

  • A high quality of the data records fed into the system is required to keep risks as low as possible.
  • Operations must be logged to enable traceability of results.
  • Detailed documentation is a prerequisite for assessing the system’s compliance. 
  • Adequate risk assessment and mitigation systems must be in place.
  • Clear and adequate information must be provided to the user.
  • The system should be under adequate human supervision to minimize risks. 
  • A high level of robustness, security, and accuracy must be provided. 
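The logging requirement in particular lends itself to a concrete picture: every operation of a driving function is recorded as a structured record so that its results remain traceable afterwards. A minimal Python sketch, with hypothetical component and field names:

```python
import json
import logging
import time

# Minimal sketch of the traceability requirement: each decision of a
# driving function is logged as one JSON line. The component name and
# input fields below are hypothetical, not from any real AV stack.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("av.decisions")

def log_decision(component, inputs, decision):
    """Append one traceability record for a single operation."""
    record = {
        "timestamp": time.time(),
        "component": component,
        "inputs": inputs,
        "decision": decision,
    }
    log.info(json.dumps(record))  # machine-readable, append-only trail
    return record

entry = log_decision(
    "brake_assist",
    {"pedestrian_detected": True, "speed_kmh": 42},
    "emergency_brake",
)
```

A log built this way can later be replayed to reconstruct why the system decided as it did, which is what the documentation and traceability requirements aim at.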

It is still unclear when the regulation will actually come into force; it is currently being discussed in the European Parliament and the Council of the European Union.  

Magility’s view on the challenges of using ontologies in autonomous vehicles and their testing

Ontology-based testing, with its ability to automatically generate a wide variety of traffic scenarios, offers a promising approach that could finally accelerate the safe deployment of autonomous vehicles on the road. However, as long as no harmonized regulations apply, manufacturers will find it difficult to ensure the long-term conformity of their vehicles. Legislators are lagging behind the rapid pace of development and the constant emergence of new AI capabilities and technologies in automotive applications. Directives must not only be put into force as quickly as possible but also be evaluated at appropriate intervals to ensure they remain up to date. For the regulation proposed so far, the European Commission envisages the first review only after three years – the need for new, urgent safety standards could arise at much shorter intervals.

Meanwhile, society remains skeptical about fully autonomous driving. A recent study found that customers in the US (19%), Germany (18%), and France and the UK (17% each) are among the least enthusiastic about the adoption of autonomous driving. 

“Any technological innovation can only be as successful as the social acceptance behind it.”
(Dr. Nari Kahle, Young Global Leader and Head of Strategic Programs at Volkswagen’s software company Cariad)

It remains to be seen to what extent ontology-based testing, with its potential to guarantee the safety of autonomous vehicles, and the EU legal framework can counteract society’s lack of trust in autonomous vehicles. Ultimately, manufacturers will have to prove that autonomous driving is safer than a human at the wheel and that it can steadily reduce the number of road accidents. 

At magility, we continue to closely follow the progress of autonomous mobility and keep you up to date on the latest developments. Feel free to contact our experts at magility for an exchange on this topic!

At a glance

  • Compared to conventional test methods, ontologies offer an innovative approach to testing the safety of autonomous vehicles. 
  • Based on information such as road conditions, road users and traffic lights, ontologies can describe the driving environments of autonomous vehicles.
  • With ontologies, more test scenarios can be generated faster. Rare accident scenarios that are difficult to reproduce can also be run more reliably. 
  • The European Commission is committed to promoting the adoption of AI and ensuring an excellent level of safety and trust in its use. 
  • A Commission draft of the world’s first regulatory framework for AI is available. Autonomous vehicles are considered high-risk systems and are subject to special requirements regarding development and documentation. 
  • Creating social trust in autonomous vehicles is becoming an additional challenge for manufacturers.