We began work on an intriguing new project, D-RISK, at the end of last year, leveraging our expertise in smart mobility, so I thought we’d share some specifics about the initiative today: why it matters and what we’ll contribute.

Image: a knowledge graph illustrating a use case taxonomy (from project partner dRISK.ai Ltd, 3 patents issued, 2 pending)

Last year, autonomous vehicles entered the “trough of disillusionment” stage of Gartner’s hype cycle, as predictions of imminent full autonomy began to fall flat and the world realised we are still a long way from widespread adoption.

And what precipitated this adjustment of expectations? Probably a combination of factors, but one clear driver was the realisation of the sheer scale of testing required to ensure that a self-driving car can deal with any condition it encounters on the road. We are not referring only to the fairly predictable hazards and scenarios – cyclists, children running into the road, and so on – that make up 99% of our “everyday” driving experience, but also to the unexpected, rare and complex circumstances that make up the remaining 1%.


Autonomous Driving’s Edge Cases

These unusual or difficult conditions are frequently referred to as “edge cases”, and while each of them is extremely rare on its own, there are potentially millions of them, and collectively they account for the majority of the risk associated with autonomous driving. As humans, we are reasonably adept at dealing with them, because we can use context to assess a situation and react in predictable ways. If a large animal steps out in front of you, for example, you may not know whether it is a camel or a tapir, but you will recognise the need to stop or manoeuvre around it. An autonomous vehicle (particularly one that relies heavily on cameras to see the world) would first need to process a large number of dark pixels, recognise that this indicates the presence of an object ahead, then interpret that the object is a large animal and not, say, a poster on the back of a bus, before deciding what to do. Once an autonomous vehicle has learned such a scenario it can react considerably faster (in some cases faster than a human), but it will always be limited to the edge cases it has learned. So, to turn autonomy from hype into reality, we must develop a way to handle a broad range of edge cases.
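To make that “learned edge cases” limitation concrete, here is a minimal, purely illustrative Python sketch – not drawn from the project’s actual software, with all names invented – of a perception-to-decision step that can only act on scenarios it has been trained on, and falls back to a cautious default when it does not recognise what it sees:

```python
# Illustrative only: a toy perception-to-decision step. All names
# (classify_object, KNOWN_RESPONSES, etc.) are hypothetical, not part of D-RISK.

KNOWN_RESPONSES = {
    "cyclist": "slow_and_give_space",
    "pedestrian": "stop",
    "large_animal": "stop_or_manoeuvre",
    "poster_on_bus": "continue",
}

def classify_object(pixels):
    """Stand-in for a trained perception model.

    Returns a (label, confidence) pair; a real system would run a
    neural network over the camera frame here.
    """
    # Toy heuristic: a large block of dark pixels is treated as a large animal.
    dark_fraction = sum(p < 40 for p in pixels) / len(pixels)
    if dark_fraction > 0.6:
        return "large_animal", 0.7
    return "unknown", 0.2

def decide(pixels, confidence_threshold=0.5):
    label, confidence = classify_object(pixels)
    # The vehicle can only act on scenarios it has "learned";
    # anything unrecognised falls back to a conservative default.
    if confidence < confidence_threshold or label not in KNOWN_RESPONSES:
        return "slow_down_and_alert"
    return KNOWN_RESPONSES[label]

if __name__ == "__main__":
    frame = [20] * 70 + [200] * 30   # mostly dark pixels ahead
    print(decide(frame))             # -> "stop_or_manoeuvre"
```

The point of the sketch is the fallback branch: anything outside the vehicle’s learned vocabulary gets a generic cautious response, which is exactly why broad coverage of edge cases matters.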

That, in a nutshell, is the purpose of D-RISK. The project is focused on developing a taxonomy of edge cases and, more importantly, on ensuring that autonomous cars are capable of safely responding to and managing these situations through the use of simulation.

In collaboration with our consortium partners dRISK.ai, Claytex, Imperial College London, and Transport for London, we aim to construct the world’s largest driving scenario library, combining a massive amount of data from a wide range of sources. These disparate sources of information will be compiled and processed before being incorporated into a comprehensive knowledge graph encompassing all autonomous vehicle risk scenarios.
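As a rough illustration of what a scenario knowledge graph can look like in code – the actual D-RISK graph and its schema are far richer, and these node and edge labels are our own invention – here is a small Python sketch using the networkx library:

```python
# Hypothetical sketch of a scenario knowledge graph; node and edge
# labels are invented for illustration and do not reflect D-RISK's schema.
import networkx as nx

graph = nx.DiGraph()

# Taxonomy nodes: categories and the scenarios that fall under them.
graph.add_node("vulnerable_road_users", kind="category")
graph.add_node("cyclist_swerves_around_pothole", kind="scenario",
               source="incident report", frequency="rare")
graph.add_node("animal_in_carriageway", kind="scenario",
               source="crowdsourced interview", frequency="very_rare")

# Edges link scenarios into the taxonomy and record relationships.
graph.add_edge("cyclist_swerves_around_pothole", "vulnerable_road_users",
               relation="is_a")
graph.add_edge("animal_in_carriageway", "vulnerable_road_users",
               relation="related_risk")

# Pull out every scenario node, e.g. as the starting point for a test suite.
scenarios = [n for n, d in graph.nodes(data=True) if d.get("kind") == "scenario"]
print(scenarios)
```

Representing scenarios as nodes with attributes (source, frequency) and typed edges is what lets disparate data sources be merged into a single, queryable taxonomy.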

Representative test cases can then be fed into one of several test environments, both real and virtual, to evaluate the vehicle control system directly.
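In practice, that evaluation step amounts to running each scenario against the vehicle-under-test and checking its behaviour. The following Python sketch shows the shape of such a harness; run_in_simulator and the pass criteria are hypothetical stand-ins for whichever real or virtual test environment is actually used:

```python
# Illustrative test harness; the simulator interface is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass
class TestCase:
    scenario_id: str
    description: str
    expected_behaviour: str   # e.g. "stop", "slow_and_give_space"

def run_in_simulator(test_case: TestCase) -> str:
    """Placeholder for a call into one of the project's real or virtual
    test environments; returns the behaviour the vehicle actually exhibited."""
    return "stop"   # dummy result for the sketch

def evaluate(test_cases):
    results = {}
    for case in test_cases:
        observed = run_in_simulator(case)
        results[case.scenario_id] = (observed == case.expected_behaviour)
    return results

suite = [
    TestCase("EC-0001", "Large animal steps into the carriageway", "stop"),
    TestCase("EC-0002", "Cyclist swerves around a pothole", "slow_and_give_space"),
]
print(evaluate(suite))   # -> {'EC-0001': True, 'EC-0002': False}
```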

The ultimate goal of this effort is to maximise the safety of autonomous vehicles on our roads, not just by identifying edge cases, but also by understanding what matters most to the public. Put another way, we are developing the ultimate driving test for self-driving cars.


Integrating Humans

So how does DG Cities fit into this picture? As always, we believe that technology should be created and implemented with an eye toward how it fits into the community it is intended to serve. This is critical when it comes to autonomous vehicles, whose widespread deployment will always be limited unless people trust them to be safe and capable of handling our roads.

That is why we were eager to incorporate crowdsourced human data into the development of the D-RISK taxonomy, alongside the numerous other data sources and extensive research being conducted throughout the project. Our involvement will centre on community engagement: speaking with local people at focus groups and conducting interviews with road users. We will ask about the situations people encounter while driving or crossing junctions, their concerns about self-driving cars, and perhaps the stranger things they have seen on the road. Remember, we’re not looking for the “yeah, you always see this” scenarios here, but for the “you’ll never believe what I once witnessed” ones. We believe that humans, with our capacity for remembering and recounting stories, can provide rich and varied inputs that complement the project’s many other data points.

These human-data inputs will help interpret and validate the other data sources, contributing not only to the scenario library we are developing but also to the “training” of autonomous vehicles. We will keep citizens informed about how we are putting autonomous vehicles through their paces and ensuring they can handle the circumstances people have described. In this way, human inputs will help shape the ultimate CAV service for our roads. We believe this is the first time such an approach has been taken, and we are excited not only about the volume of data we will collect but also about the gains we will be able to make in building trust between humans and machines.

D-RISK is shaping up to be a fascinating project, and we’re delighted to be collaborating with another group of best-in-class partners to significantly accelerate the realisation of full autonomy on our roads.