
Q&A with Senior Director Danny Shapiro: How Nvidia Is Working with Over 225 Different Driverless Customers

Nvidia's decades-long development of graphics processing units (GPUs) for PCs has given it a major leg up in the driverless space.

Led by charismatic founder and CEO Jensen Huang, Nvidia has recently extended its engineering know-how, adding neural networks for machine learning to the GPUs used to pilot self-driving cars. Nvidia's driverless group may represent less than 10% of its total revenues, but it has secured design wins with virtually every OEM, supplier, and research team seeking to develop driverless systems. As it turns out, over 225 customers use Nvidia's GPU designs or know-how to develop self-driving vehicles, the company says.

Driverless was able to speak with Danny Shapiro, senior director of automotive for Nvidia, about what it is like working with hundreds of customers vying for a place in a sector that is expected to disrupt transportation on a seismic scale.

Here are some excerpts from our recent conversation with Shapiro.

Driverless: After designing GPUs mainly for graphically intensive PC and industrial applications for decades, how and why did Nvidia decide to develop deep learning and neural network algorithms for driverless vehicles? Was there a point of crystallization when that came about?

Danny Shapiro: A couple of things are involved. One is the role we've traditionally played in the auto industry; the other is how artificial intelligence (AI) has come to be one of the hottest new computing segments.

On the automotive front, Nvidia has been working with car and truck companies for two decades. The first phase consisted of automotive companies simply using our GPUs for traditional automotive graphics, CAD design, and other applications.

Then, about five years ago, we became active in research and development with academia and other researchers, including Andrew Ng (then a chief scientist at Google), who were modern pioneers of AI and deep learning. Essentially, they started using GPUs, which are massively parallel processors that can do the kind of math required by neural networks and other computationally intensive algorithms, to create a new way of understanding data patterns.

So it was a combination of outside research and work going on inside Nvidia. The GPU was not designed for that, but with the introduction of our CUDA platform 10 years ago we recognized there were many applications for parallel processing outside of graphics. That's how we created the general-purpose GPU that really enables deep learning.

We've since done a lot on the hardware side, evolving our GPUs for different types of instructions and, among other things, increasing memory bandwidth. We've also designed a lot of libraries and other tools specifically for deep learning, so both the hardware and the software have transformed the GPU into an engine for artificial intelligence, in addition to graphics.
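
To give a sense of why the GPU became an engine for AI, here is a minimal sketch of the matrix math at the heart of neural networks running on a CPU and, when available, on a CUDA GPU. It uses PyTorch purely as an illustrative stand-in for the CUDA libraries and tools Shapiro describes:

```python
# Minimal sketch: the matrix multiplies at the heart of neural networks
# map naturally onto a GPU's thousands of parallel cores. PyTorch is
# used here only as an illustrative stand-in for Nvidia's CUDA stack.
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# On the CPU, the multiply is spread across a handful of cores.
c_cpu = a @ b

# On a CUDA GPU, the same operation is dispatched across thousands of
# parallel cores, which is what makes deep-learning training tractable.
if torch.cuda.is_available():
    c_gpu = a.cuda() @ b.cuda()
```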

D: What is the biggest research challenge Nvidia faces for driverless applications?

DS: From an autonomous vehicle perspective, there's a whole pipeline of computation that needs to happen in the car. And it needs to happen in a second. This isn't deep learning for Facebook tagging, where there can be some latency and it doesn't have to be 100% accurate.

There are many phases, from detection to localization to path planning. We can apply AI throughout that whole pipeline, but we have to ensure the safety, accuracy, and reliability of the system. And that comes about through training, which requires a lot of data.
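
To make that pipeline concrete, here is a schematic Python sketch of one per-frame pass through detection, localization, and path planning under a hard real-time deadline. The stage functions and the frame budget are hypothetical placeholders, not Nvidia code:

```python
# Schematic per-frame pipeline: detection -> localization -> path
# planning, checked against a hard real-time budget. The stage bodies
# are placeholder stubs for illustration only.
import time

FRAME_BUDGET_S = 0.033  # illustrative budget: roughly 30 frames per second

def detect(frame):             # stub: would run perception networks
    return {"pedestrians": [], "vehicles": [], "signs": []}

def localize(frame, objects):  # stub: would fuse sensors with a map
    return {"x": 0.0, "y": 0.0, "heading": 0.0}

def plan_path(pose, objects):  # stub: would produce a safe trajectory
    return [pose]

def process_frame(frame):
    start = time.monotonic()
    objects = detect(frame)
    pose = localize(frame, objects)
    path = plan_path(pose, objects)
    # Unlike photo tagging, a missed deadline here is a safety problem,
    # so the pipeline verifies its timing on every single frame.
    if time.monotonic() - start > FRAME_BUDGET_S:
        raise RuntimeError("missed real-time deadline")
    return path

path = process_frame(frame=None)  # placeholder frame for the sketch
```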

D: "Training" applies to training the system for machine learning?

DS: We train the neural networks. That happens separately from when the car is driving, and it involves many different neural nets. You don't explicitly program the car's behavior; instead, the networks learn by watching human drivers. A combination of many different neural nets is trained to recognize lane markings, pedestrians, and objects such as signs. They are coupled with more behaviorally focused systems that can understand and recognize a potential drunk driver or road rage, for example. Together, they are able to drive much more safely than any human could.
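
As an illustration of how such a combination of task-specific networks might be organized, here is a hedged Python sketch; the task names and the placeholder callables are hypothetical, not Nvidia's models:

```python
# Hedged sketch: several specialized "networks" look at the same camera
# frame, each reporting on its own task. Lambdas stand in for trained
# neural nets; the wiring, not the models, is the point here.
def run_perception(frame, models):
    """Apply every task-specific network to the same frame."""
    return {task: net(frame) for task, net in models.items()}

models = {
    "lane_markings": lambda frame: "two lanes detected",
    "pedestrians":   lambda frame: [],
    "signs":         lambda frame: ["speed limit 55"],
}

print(run_perception(frame=None, models=models))
```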

D: How many neural networks are there in a working Level 4 car prototype?

DS: There could be dozens. What we've developed is a supercomputer in the car. We offer different flavors of it depending on what the customer is trying to do. And we've created a lot of neural networks, and all the major frameworks are supported.

It's kind of similar to the creation of video games. We don't write the video games, but game developers use our libraries, rendering and physics engines, and things like that. So there will be neural networks we developed, neural networks our customers have created in their labs, and ones built upon ours. There are many different types of networks for detection, for localization, for path planning, and even for things inside the car, such as driver monitoring to see whether the driver is paying attention or looking down at their phone.

What is really exciting is that these cars will get better and better over time. And you'll get updates just like you get on your phone with new features. We see software for the car always getting better and evolving.

D: Nvidia has made many design-win announcements, including recently formed partnerships with Volkswagen and Volvo Cars and German suppliers ZF and Hella, in addition to a very long customer list that includes Audi and Tesla. How much does what Nvidia offers vary from customer to customer?

DS: As much as we would love to have a stock product that we create and sell to everybody, that's not how the auto industry works. I can't think of another industry where you have so many large players yet nobody holds a 50% share the way a leader does in the cellphone space.

So that creates a lot of competition, and a lot of opportunities for differentiation. Each company has different ideas about what it wants to build and a different road map for how it's going to evolve from Level 3 to Level 5. Each solves the problem differently, with different cameras, radars, lidars, and capabilities it plans to introduce.

That's why we're seeing such great adoption of the platform we've created: it scales, it's flexible in terms of the types of peripherals it supports, and it's open, so customers can develop their own code or use ours. As you noted, there's a whole spectrum: some companies leverage a lot of our software, while others build their own or use parts of ours. So there's no set answer; we basically have a custom engagement with each of these partners.

Now, at the tier-one supplier level, of course, we're working with them to develop more of a turnkey solution they can offer their OEM customers. That's the kind of thing we do with Bosch and ZF. Even there, you can imagine there's still going to be customization.

For the deals we've announced with Audi, Toyota, and Volvo, Nvidia is working directly with the OEMs, but there's always going to be a tier-one supplier involved that provides sensors and helps bundle everything. Nvidia is also developing a lot of software as a base platform, and there may be some customization depending on the contract with each customer.

D: So at the end of the day, each customer makes a separate decision about what they are going to offer and what Nvidia will do?

DS: What we see is that virtually every carmaker is using Nvidia in its development effort, and they sit at different points along the spectrum in how close they are to production, from Tesla, which is already in production, to the companies we've discussed that are looking at a 2019-2021 time frame.

In many cases, some of these guys have a PC with an Nvidia GeForce GPU that's running the neural nets. They might be at an early stage in their development, so you open up the trunk and see a bunch of PCs.

D: That's been my observation as well.

DS: We also continue to increase the performance of our platform because we see our customers requiring higher-resolution cameras and more sensors, and the bandwidth needs to increase. The GPU processing horsepower thus also needs to increase to deliver a robust system capable of true driverless operation.
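
Some back-of-the-envelope arithmetic shows why sensor bandwidth drives compute requirements. The camera count, resolution, and frame rate below are illustrative assumptions, not figures from the interview:

```python
# Rough bandwidth estimate for an assumed camera suite; every value
# here is an illustrative assumption, not an Nvidia specification.
cameras = 8
width, height = 1920, 1080  # pixels per frame
bytes_per_pixel = 3         # uncompressed RGB
fps = 30

bytes_per_sec = cameras * width * height * bytes_per_pixel * fps
print(f"{bytes_per_sec / 1e9:.1f} GB/s of raw camera data")
# About 1.5 GB/s before radar and lidar are even counted, which is why
# GPU horsepower has to scale along with the sensor suite.
```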

D: This reminds me of the PC gaming industry when new 3D titles required more GPU processing power to run well.

DS: I think the reality is that, in the computer industry in general, you never have enough computing horsepower. You always continue to increase the complexity of the software to take advantage of whatever computing hardware you have. And again, this will be a challenge for driverless vehicles as new sensor technologies arrive in the future. There's a lot of R&D going on in that area because we want to be able to create this virtual understanding, a model inside the car of the outside world.

D: So the car becomes HAL from 2001: A Space Odyssey?

DS: (laughs) Well I don't know if that's exactly how I would describe it.

D: A good HAL, or a HAL that's nice. You mentioned Nvidia is part of the development of virtually all future driverless car systems on some level. So what are you not into?

DS: Our computing platform Drive PX 2 started shipping just a little over a year ago, in May of 2016. We now have over 225 different companies and institutions using it in their development. That includes carmakers, truck makers, tier-one suppliers, mapping companies, sensor suppliers, and many startups.

And you're absolutely right: everyone has recognized that there's no way you can just write code to handle all situations, whereas AI is a fundamental solution that will let the car handle the ambiguity and the randomness it will experience on the road. It's really taken the industry by storm. AI is the key that is going to unlock the driverless future.

D: Going back to software development, how does machine learning benefit from software updates?

DS: The updates back to the car happen periodically, once new software has been tested. But the uploads can happen on a daily basis.

Tesla, for example, has an installed base of sensor hardware in every vehicle: eight cameras, radar, and other sensors. Whether somebody pays for Autopilot or not, the system can still collect data and run the software in the background, even when it's not actually driving the car. It can watch what the driver is doing as it runs in shadow mode. There's communication that goes back and forth: data is uploaded to the cloud, and what's learned is then communicated back to the entire fleet.
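
A minimal sketch of the shadow-mode idea Shapiro describes: the network predicts a control action without driving, the prediction is compared with what the human actually did, and disagreements are queued for upload. The threshold, field names, and placeholder model are hypothetical, not Tesla's or Nvidia's implementation:

```python
# Hedged shadow-mode sketch: the model's output never reaches the
# actuators; it is only compared against the human's action, and
# interesting disagreements are saved for upload and retraining.
DISAGREEMENT_THRESHOLD = 0.2  # illustrative steering delta, in radians

upload_queue = []

def shadow_step(frame, human_steering, model):
    predicted = model(frame)  # computed in the background only
    if abs(predicted - human_steering) > DISAGREEMENT_THRESHOLD:
        upload_queue.append(
            {"frame": frame, "human": human_steering, "model": predicted}
        )

# Placeholder model standing in for a trained steering network.
shadow_step(frame=None, human_steering=0.05, model=lambda frame: 0.4)
print(len(upload_queue), "clip(s) queued for upload to the cloud")
```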

D: What kind of data is being uploaded?

DS: A lot of different kinds of data are being uploaded. Without being OEM-specific, the sensors can detect, for example, potholes on the road or different weather and traffic conditions, or anything vehicle related. Tesla, for example, knows where its fleet of cars is, just like Apple knows where your iPhone is.
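
For illustration, one plausible shape for such a fleet-telemetry record is sketched below; the schema and field names are assumptions made for this sketch, and no OEM's actual format is implied:

```python
# Hypothetical fleet-telemetry record for events like potholes or
# weather conditions; the schema is an assumption for illustration.
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class FleetEvent:
    vehicle_id: str
    timestamp: float
    latitude: float
    longitude: float
    event_type: str  # e.g. "pothole", "heavy_rain", "congestion"

event = FleetEvent("veh-0042", time.time(), 37.39, -122.08, "pothole")
print(json.dumps(asdict(event)))  # payload uploaded to the cloud
```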

D: Mobileye, with Intel, says it offers a complete computer hardware and sensor package for driverless cars. Would you describe Mobileye and Intel as your competitors in this space?

DS: I'm not familiar with the details of what some of these other companies have, but I know that it's very different from Nvidia's in terms of openness, scalability, programmability, and the unified nature of the platform.


Cover image via Nvidia
