
Confidential Computing in AI Autonomous Vehicles

  • Ashesh Anand
  • Oct 18, 2021

Data privacy has become increasingly critical as more businesses rely on public and hybrid cloud services. One of the main goals of confidential computing is to give corporations and their leaders greater assurance that their data in the cloud is safe and secure.

 

The aim is also to encourage the use of public cloud services for sensitive data and computing workloads. The benefit of confidential computing is that it closes the remaining data security gap by encrypting data in use, while it is being processed.

 

 

What is Confidential Computing?

 

Confidential computing is a cloud computing technique that encrypts sensitive data and processes it in a secure CPU enclave. The data being processed and the procedures used to handle it are only available to approved programming code, and the contents of the enclave are invisible and unknown to anything or anyone else, even the cloud provider.
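To make the idea concrete, here is a minimal conceptual sketch in Python (not a real enclave SDK): data stays encrypted everywhere outside the enclave, and the hypothetical process_inside_enclave function stands in for approved code running within the secure CPU enclave. In a real deployment, the key would be released to the enclave only after attestation proves the enclave is genuine.

```python
# Conceptual sketch only: the point of confidential computing is that data is
# decrypted and processed solely inside a hardware-protected enclave, never in
# ordinary host memory that the cloud provider could inspect.
from cryptography.fernet import Fernet

# Assumption for this sketch: in practice this key would be provisioned to the
# enclave only after remote attestation, not generated alongside the caller.
enclave_key = Fernet.generate_key()


def process_inside_enclave(ciphertext: bytes) -> bytes:
    """Stand-in for approved code running inside a secure CPU enclave."""
    plaintext = Fernet(enclave_key).decrypt(ciphertext)  # decrypted only "in use", inside the enclave
    result = plaintext.upper()                           # placeholder for the sensitive computation
    return Fernet(enclave_key).encrypt(result)           # re-encrypted before leaving the enclave


# The host (e.g., the cloud provider) only ever handles ciphertext.
sensitive = Fernet(enclave_key).encrypt(b"vehicle telemetry record")
protected_result = process_inside_enclave(sensitive)
```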

 

(Also Read: Best Data Security Practices)

 

Data privacy in the cloud is becoming increasingly important as business executives rely more and more on public and hybrid cloud services. The fundamental objective of confidential computing is to give executives more confidence that their data on the cloud is safe and secure, encouraging them to shift more sensitive data and computing workloads to public cloud services.

 

Watch this video on “What is Confidential Computing” from IBM:



Confidential computing can be used to protect data and extend cloud benefits to sensitive workloads, as well as to protect intellectual property, collaborate safely with partners on new cloud solutions, reduce concerns about cloud provider selection, and protect data processes at the edge.

 

(Must Read: Information Security vs Cyber Security)

 

 

Different Levels Of Self-Driving Cars

 

To be clear, genuine self-driving vehicles are those in which the AI drives the car fully on its own, with no human help during the driving process.

 

These self-driving cars are classified as Level 4 and Level 5, whereas a car that requires a human driver to share the driving effort is classified as Level 2 or Level 3. Cars that share the driving duty are referred to as semi-autonomous, and they generally carry a range of automated add-ons known as ADAS (Advanced Driver-Assistance Systems).
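For readers who prefer to see the taxonomy in code, here is an illustrative Python sketch of the level split described above; the enum names and helper function are assumptions made for this example, not an official SAE API.

```python
# Illustrative mapping of the autonomy levels discussed in this section:
# Levels 0-3 still assume a human driver; Levels 4-5 are truly self-driving.
from enum import IntEnum


class AutonomyLevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1       # ADAS features assist, but the human drives
    PARTIAL_AUTOMATION = 2      # semi-autonomous, human must supervise at all times
    CONDITIONAL_AUTOMATION = 3  # semi-autonomous, human must take over on request
    HIGH_AUTOMATION = 4         # true self-driving within a limited operating domain
    FULL_AUTOMATION = 5         # true self-driving everywhere (not yet achieved)


def requires_human_driver(level: AutonomyLevel) -> bool:
    """Levels 0 through 3 all depend on a human sharing the driving task."""
    return level <= AutonomyLevel.CONDITIONAL_AUTOMATION


assert requires_human_driver(AutonomyLevel.PARTIAL_AUTOMATION)
assert not requires_human_driver(AutonomyLevel.HIGH_AUTOMATION)
```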

 

(Also Checkout: Top Self Driving Car Companies)

 

There is no genuine Level 5 self-driving car yet, and we don't even know whether it will be achievable, let alone how long it will take to get there.

 

Meanwhile, the Level 4 attempts are gradually gaining traction by undertaking extremely limited and selected public roadway trials, while there is debate over whether such testing should be permitted in the first place (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).

 

Listen to these podcasts on Self Driving Cars by Lance Eliot

 

Because semi-autonomous cars require a human driver, their adoption won't differ much from that of conventional vehicles, so there isn't much new to say about them on this topic (though, as you'll see in a moment, the points made next apply generally).

 

For semi-autonomous cars, it's critical that the public be warned about a troubling trend that has recently emerged: human drivers keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car. We must all avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

 

(Must Read: Tesla's Manufacturing Revolution)

 

You are accountable for the car's driving behaviours, regardless of how much automation is thrown into a Level 2 or Level 3 vehicle.

 

 

Confidential Computing, AI, and Self-Driving Cars

 

In real self-driving cars at Levels 4 and 5, there will be no human driver engaged in the driving duty. All of the people will be passengers, and the AI will be driving.

 

One point worth mentioning right away is that the AI used in today's AI driving systems is not sentient. To put it another way, AI is a collection of computer-based programming and algorithms that are unable to reason in the same way that humans can.


Image: A self-driving autonomous vehicle can be operated using confidential computing.


 

Why is it so important that the AI isn't self-aware?

 

Because we want to emphasise that when we talk about the AI driving system's role, we're not ascribing human traits to it. Please be warned that there is a pervasive and worrisome trend to anthropomorphize AI these days. In essence, people are ascribing human-like consciousness to today's AI, despite the reality that no such AI exists at this time.

 

(Must Read: Applications of AR in the Automotive industry)

 

With that clarification, you can see that the AI driving system doesn't natively "know" anything about the many facets of driving. Driving, and everything it entails, has to be programmed into the self-driving car's hardware and software.

 

1. One overarching aspect that deserves specific emphasis is that any AI system, particularly one running in the cloud, should be able to employ confidential computing. Unfortunately, for many AI engineers, this is not a top-of-mind priority.

 

 

The focus of most AI software engineers is on the underlying AI capabilities, such as using sophisticated Machine Learning (ML) and Deep Learning (DL) techniques. Once the AI system is ready to be deployed, the AI developers are less concerned with what happens after the software is in use. The presumption is that whatever cybersecurity is already in place in the execution environment will most likely be enough.

 

(Also Read: AI in Risk Management)

 

2. The ordinary AI developer usually wants to return to their AI bag-of-tricks and continue adjusting the AI-related parts of the system, or perhaps move on to a new project that demands their honed abilities at creating AI systems. 

 

3. Whether the execution environment for their nascent AI system is truly secure is not a question that figures clearly in their thinking, nor is it addressed by their standard toolkit.

 

4. The problem is that even the finest AI systems may be brought to their knees if cybersecurity isn't top-notch and all possible layers of defence aren't used. Until recently, many AI systems were not necessarily designed for domains with high risks and severe repercussions if the AI was compromised during execution. The concept of treating AI systems as purely experimental or prototypes is long gone now that AI is widespread across a wide range of applications.

 

5. Because the AI developer should know best which parts of their AI are particularly sensitive while it is running, they should investigate confidential computing as a potential countermeasure and determine whether this extra layer of protection is warranted, as sketched below.
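As a hedged illustration of what such a countermeasure might look like, the Python sketch below checks a hypothetical attestation report before releasing model weights to a cloud enclave. The report fields, EXPECTED_MEASUREMENT value, and signing key are illustrative assumptions, not a real vendor API.

```python
# Hypothetical sketch of a pre-deployment check: only provision the AI model
# to an enclave whose attestation report matches the approved measurement and
# carries a valid signature. The report format is an assumption for this example.
import hashlib
import hmac

EXPECTED_MEASUREMENT = "9f2c0d..."   # placeholder hash of the approved enclave binary
ATTESTATION_SIGNING_KEY = b"demo-attestation-key"  # placeholder trusted signing key


def verify_attestation(report: dict) -> bool:
    """Accept the enclave only if its measurement and signature both check out."""
    expected_sig = hmac.new(
        ATTESTATION_SIGNING_KEY, report["measurement"].encode(), hashlib.sha256
    ).hexdigest()
    signature_ok = hmac.compare_digest(expected_sig, report["signature"])
    measurement_ok = report["measurement"] == EXPECTED_MEASUREMENT
    return signature_ok and measurement_ok


def provision_model_weights(report: dict, weights: bytes) -> None:
    """Refuse to release the AI model to an enclave that fails attestation."""
    if not verify_attestation(report):
        raise PermissionError("Enclave failed attestation; refusing to release AI model")
    # ...send weights over an encrypted channel bound to the attested enclave...
```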

 

Simply said, every AI developer worth their salt should be considering how their AI systems will be deployed, as well as the kind of hacks that may be conducted to disrupt AI system functioning. 

 

(Must Read: Advantages of AI in Cyber Security)

 

We're not claiming that it will always be a necessity; rather, when it comes to AI systems that are sensitive in nature and operate in the cloud, it's sensible, if not mandatory, to evaluate which of the various potential cybersecurity measures should be implemented.

 

Hopefully, that will serve as a rallying cry for AI engineers who haven't yet considered the benefits of confidential computing. Some may be startled awake by the sound of trumpets.

 

 

How the Cloud is Being Used for AI-Based Real Self-Driving Cars

 

The most widely anticipated use of the cloud for self-driving cars is OTA (Over-The-Air) electronic communication capabilities.

 

OTA allows different patches and updates saved in the cloud for a fleet of self-driving cars to be automatically downloaded and deployed in each autonomous vehicle. 

 

It's convenient to be able to remotely roll out new features for the AI driving system or deliver bug fixes, and it avoids having to take the vehicles to a dealer or repair shop just to update the software.

 

OTA will also make it easier to send data from self-driving cars to the fleet operator's cloud. Self-driving cars' sensor suites will include video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and other sensors. By gathering data from an entire fleet of self-driving cars and aggregating it in the cloud, the data they collect can be meaningfully analysed.
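As a simple illustration of that fleet-level aggregation, the Python sketch below groups uploaded sensor records by vehicle; the record fields (vehicle_id, lidar_obstacle_count) are made-up assumptions, not a real telemetry schema.

```python
# Toy example of aggregating fleet telemetry uploaded over OTA in the cloud:
# group readings by vehicle and report a per-vehicle average.
from collections import defaultdict
from statistics import mean
from typing import Iterable


def summarize_fleet_telemetry(records: Iterable[dict]) -> dict:
    """Group sensor readings by vehicle and return average obstacle counts."""
    by_vehicle = defaultdict(list)
    for record in records:
        by_vehicle[record["vehicle_id"]].append(record["lidar_obstacle_count"])
    return {vehicle: mean(counts) for vehicle, counts in by_vehicle.items()}


fleet_uploads = [
    {"vehicle_id": "AV-001", "lidar_obstacle_count": 14},
    {"vehicle_id": "AV-001", "lidar_obstacle_count": 9},
    {"vehicle_id": "AV-002", "lidar_obstacle_count": 22},
]
print(summarize_fleet_telemetry(fleet_uploads))  # {'AV-001': 11.5, 'AV-002': 22}
```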

 

(Also Read: Components of Intranet Security)

 

 

So, what does this have to do with confidential computing, you might ask?

 

Consider this: if there are programmes and data in the cloud that can be downloaded and loaded into AI driving systems, a cyber attacker has a convenient and stealthy way to infiltrate malware into the self-driving cars.

 

The cybercrook simply plants the malicious components in the cloud and then waits patiently for the OTA mechanism to do the work for them by disseminating the payload out into the fleet.

 

While most people imagine an AI driving system being compromised or degraded by someone physically breaking into the autonomous car, the threat from exploiting the OTA is likely greater. The OTA's deceptive beauty is that it's a trusted channel for delivering anything straight into the AI driving system, and it does so across an entire fleet of self-driving vehicles.
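One common defence, sketched here under stated assumptions rather than as any vendor's actual mechanism, is for each vehicle to verify that an OTA payload was signed by the fleet operator's offline release key before installing it, so that compromising the fleet cloud alone isn't enough to push malicious code. The function apply_ota_update is a hypothetical name for this example.

```python
# Hedged sketch: a vehicle refuses OTA payloads that were not signed by the
# fleet operator's release key, even if they arrive through the normal OTA path.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Assumption: in practice the private key stays offline with the release team,
# and only the corresponding public key is baked into each vehicle.
release_key = Ed25519PrivateKey.generate()
vehicle_trusted_pubkey = release_key.public_key()


def apply_ota_update(payload: bytes, signature: bytes) -> bool:
    """Install the update only if the signature matches the trusted release key."""
    try:
        vehicle_trusted_pubkey.verify(signature, payload)
    except InvalidSignature:
        return False  # reject: payload was tampered with in the cloud or in transit
    # ...flash the verified payload to the AI driving system...
    return True


update = b"ai-driving-system v2.4 patch"
assert apply_ota_update(update, release_key.sign(update))
assert not apply_ota_update(b"malicious payload", release_key.sign(update))
```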

 

Consider a fleet of hundreds, thousands, or even hundreds of thousands of self-driving cars, each of which relies on an OTA to get updates from a fleet cloud.

 

(Must Read: Economic Effects of Social Security)

 

So, we might want to pay more attention to what's going on in the fleet cloud. The more protection we implement, the less likely the OTA is to become a doomsday apparition. Smart use of confidential computing for the fleet cloud could reduce, or at the very least make considerably more difficult, any attempt to launch a cyberattack against the fleet's AI driving systems.

 

Listen to this podcast on Confidential Computing in AI autonomous Vehicles

 

 

Summing Up

 

Another possible use for confidential computing is in the onboard execution or processing of the self-driving car itself. When the AI driving system runs on the onboard computer processors, it must be extremely secure.

 

The difficult tradeoff is that confidential computing adds processing overhead, which makes it a harder call for real-time systems. Keep in mind that the self-driving car's behaviour is governed by real-time processing; any significant lag in processing time could be a concern.
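Here's a toy Python illustration of that latency budget, assuming a 20 Hz perception loop and a made-up enclave overhead figure; the numbers are placeholders, not benchmarks of any real platform.

```python
# Toy illustration of the real-time tradeoff: the same perception step is timed
# with and without an assumed confidential-computing overhead, then checked
# against a hard per-frame deadline.
import time

FRAME_DEADLINE_MS = 50.0    # e.g., a 20 Hz perception loop has 50 ms per frame
ENCLAVE_OVERHEAD_MS = 8.0   # assumed extra cost of running inside an enclave


def perception_step() -> None:
    """Stand-in for roughly 30 ms of sensor processing."""
    time.sleep(0.030)


def fits_deadline(extra_overhead_ms: float) -> bool:
    """Time one perception step, add the assumed overhead, and check the budget."""
    start = time.perf_counter()
    perception_step()
    elapsed_ms = (time.perf_counter() - start) * 1000 + extra_overhead_ms
    return elapsed_ms <= FRAME_DEADLINE_MS


print("plain:", fits_deadline(0.0))                                   # comfortably inside 50 ms
print("with enclave overhead:", fits_deadline(ENCLAVE_OVERHEAD_MS))   # tighter margin
```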

 

Self-driving vehicles are real-time machines that also happen to be dealing with life and death situations. An everyday cloud-based application does not usually pose the same life-or-death threat. 

 

For an everyday cloud application, a delay in cloud processing may be of little consequence. Furthermore, because a cloud-based application is hosted in the cloud, you can easily add more processors or reallocate the workload to faster processors available in the cloud.

 

The processors placed in a self-driving car are typically not as easily swapped out, as this may be a physically demanding and logistically costly task. Once automakers and self-driving tech companies have selected which processors to use in their self-driving cars, they're pretty much stuck. They'll have to hope that their decision will hold up for a while.

 

Overall, one useful takeaway from confidential computing is that we must be vigilant against all types of cyberattacks. What you don't want to do is put in place a chain of tightly secured steps and then forget about what happens in the last mile, or the last step.
