Grace Venes-Escaffi (00:00): 

Are you ready to unlock the power of innovation? Become a member of the Consumer Technology Association and engage with a community of innovators. As a CTA member, you'll access cutting-edge research, connect with industry leaders, help shape industry standards, and enjoy exclusive discounts, CES perks, and invaluable networking opportunities. Visit the CTA website and be a part of advancing the technology industry. 

James Kotecki (00:31): 

This is CES Tech Talk. I'm James Kotecki. CES 2024 is January 9th through 12th in Las Vegas, and it's already time to build the hype, so let's get smart about the world's most influential tech event. And let's be honest, you can't talk about CES without talking about self-driving cars. In the past, on this podcast, we've covered autonomous race cars, but when will the autonomous revolution be speeding down your street? We're going to look at the state of the industry and the future of everyday driving with Nimrod Nehushtan. He's senior vice president strategy and business development at Mobileye, an autonomous vehicle technology company. Nimrod, welcome to CES Tech Talk. 

Nimrod Nehushtan (01:15): 

Welcome, James. Thank you very much for having me. I'm looking forward to this talk today. 

James Kotecki (01:21): 

And let's start by contextualizing for folks the piece of the autonomous vehicle challenge, or the pieces of the challenge, that Mobileye is actually working on. What's the easy way for the general CES audience to understand how Mobileye fits into the picture? 

Nimrod Nehushtan (01:37): 

Yeah, that's a very good question to start with. So essentially, we are developing a platform that can be integrated into pretty much any car and will make that car fully autonomous. This starts with the compute platform, meaning the silicon and the different compute resources needed for the various technologies, as well as the perception stack: computer vision technologies, sensors and sensor fusion, radars and LiDARs, mapping through REM, our crowdsourced mapping activity, and driving policy. And of course, we're also developing active sensors, which will be part of our future roadmap for the next generation of autonomous vehicles. So you can pretty much say that we go from silicon design all the way up to computer vision algorithms, sensor development, mapping, and driving policy. Pretty much the entire stack. 

James Kotecki (02:36): 

So when you're talking about any car becoming autonomous, just to be clear, are we talking about retrofitting older cars with all of this new equipment, or just newer cars? 

Nimrod Nehushtan (02:46): 

The idea is that, essentially, when you think about autonomous driving, you need to provide three major components in the system. There is the sensing layer, which is responsible for understanding the environment and the dynamic objects: road users, vehicles, pedestrians, cyclists, everything that is dynamic around the car. There is the mapping, which is traditionally responsible for understanding the road structure, the driving rules, lane structure, traffic lights, everything that is static about the road the car is on. And then, on top of these two elements, you have the driving policy, which is responsible for decision-making, for the dos and don'ts of the autonomous vehicle. 


And what we, as a company, are developing and providing is basically a platform that includes all three of these components, as well as the silicon design and the compute necessary to run them in a car. And we are working with car companies to integrate this platform into new cars that will be launched in the future, so that we can offer hands-off and eyes-off driving functions to consumers through our partnerships with car companies. 

James Kotecki (03:59): 

I understand one of those partnerships is with Volkswagen, so can you get into a little more detail about what this partnership means in practice? 

Nimrod Nehushtan (04:06): 

Yeah, so actually, I think we should go back a little further in time to look at the partnership with Volkswagen. Volkswagen has been one of our major partners for a few years now, and it started with what we call basic driving assist products, which is what everyone is familiar with today: automatic emergency braking, forward collision warning, lane departure warning, all the standard driving assist functions. We have integrated our EyeQ product, which provides all of these functions, into many, many vehicles from Volkswagen Group over the years. And I think an important milestone in the relationship between the companies was about six years ago, when we jointly announced a partnership around crowdsourced mapping and REM, in which both companies collaborated to launch this innovative technology in the market, and it has been a very big success for both companies. 


And recently, we announced a partnership with Porsche to integrate our SuperVision product into Porsche vehicles in the next few years. SuperVision is our platform that provides hands-free driving, which means it's fully capable of performing all the driving tasks completely autonomously, but it's not an eyes-off system: you, the driver, are allowed to take your hands off the wheel, but you cannot take your eyes off the road, and you still need to be engaged and alert to what's coming. But in terms of the system's capabilities and overall robustness, it is very, very advanced in what it can offer, and it will completely change the driving experience for us as drivers. 

James Kotecki (05:47): 

You've used these terms hands-off, eyes-off, a couple of times, so I want to get into this, because I think this is something that is specific to Mobileye. In other words, I think you're using those terms very deliberately, because my understanding is that, at the previous CES, CES 2023, Mobileye unveiled these terms as a way of making it much more understandable to people, in non-technical terms, what this technology does. It's interesting, even in this conversation so far, you're mentioning a bunch of advanced technologies, but you're also mentioning these simple terms. So tell me more about why Mobileye decided to do that, what those terms are, and how it's going for you in using those in the real world. 

Nimrod Nehushtan (06:23): 

Well, I think one change in mindset we've had is that we are thinking about what the system can and cannot do through the eyes of the consumer, under the assumption that the average consumer is not fully up to date with the latest regulations and standards the industry is using, which are phrased in more of an engineering language. And what we have found, being involved in this activity for a few years now, is that the common engineering-oriented language in the industry has gotten many different interpretations. Especially when it comes to the differences between level two plus, level three, and level four, you can find varying opinions on the nuances that differentiate one level from another. 


And we wanted to keep it simple so that a broad audience can understand what the system is capable of. This comes from a very, as you said, deliberate notion that we want it to be clear to consumers what they can and cannot do when they use the system. For us, basically, we divide it into stages. The first is eyes-on, hands-on. This is a normal car today, where the system is designed to help you, as a driver, prevent dangerous events, and to provide alerts and warnings and so on, but you still drive all the time. You're engaged and you're holding the wheel. 


The next step from there is what we call eyes-on, hands-off, which is our SuperVision system. In that case, once you activate the system, it drives completely autonomously, but as a driver, you still need to be engaged and looking at the road, so that you're basically supervising and monitoring the system rather than physically driving all the time. This is very similar to how pilots fly planes today. The autopilot is flying the plane, but as a pilot, you still need to monitor it and see that everything is functioning properly. That is the eyes-on, hands-off SuperVision system. 


From that point, we move to eyes-off, hands-off. And for us, once we say eyes-off, you really need to assume that, once you allow the driver to take his or her eyes off the road, the system must be extremely robust and able to handle cases where the driver may not be alert at one second's notice, or even ten seconds' notice in some cases. So it's not enough just to allow the driver to take their eyes off the road; you also need to design the system to handle cases where the driver is not there to take control, or is not responsive enough, which is what we call minimum risk maneuvers, reaching a safe stop, and so on. So for us, again, it's very clear: as a driver, you know that once the system is active, you can take your eyes off the road and mind your own business, reading your phone, doing emails, whatever it may be. 


And then, the final step is no driver. This is like a robotaxi, where there's no driver at all. And this is the complete spectrum of products: how you can move from eyes-on, hands-on, which is the common case in most cars today, all the way up to no driver. 
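The spectrum described above maps naturally onto a small state model. Here is an illustrative Python sketch; the names and the helper function are mine, for clarity, and are not Mobileye's actual API:

```python
from enum import Enum

class DrivingMode(Enum):
    """Illustrative encoding of the consumer-facing spectrum: (eyes required, hands required)."""
    EYES_ON_HANDS_ON = (True, True)      # driver steers; system assists and warns
    EYES_ON_HANDS_OFF = (True, False)    # system drives; driver supervises (SuperVision-style)
    EYES_OFF_HANDS_OFF = (False, False)  # system drives; driver may look away
    NO_DRIVER = (None, None)             # robotaxi; no human driver at all

def driver_must_watch_road(mode: DrivingMode) -> bool:
    # Only the eyes-on modes require continuous visual supervision by the driver.
    return mode in (DrivingMode.EYES_ON_HANDS_ON, DrivingMode.EYES_ON_HANDS_OFF)
```

The point of the consumer-facing terms is visible in the encoding: each step relaxes exactly one obligation, so a driver always knows which responsibilities remain.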

James Kotecki (09:37): 

I recall a debate in the industry about whether and how much assistance to give folks in that middle section between "I'm fully driving the car myself" and "there's no driver." And I remember the debate going something like this. On the one hand, you had people who said, "We need to gradually ramp this up and give people more and more assistance as the technology progresses." And on the other hand, you had folks who said, "We have to go from I'm driving the car completely myself all the way to no driver. It almost has to be a switch." Because people in that interim period, let's say they're hands-off, eyes-on, to use your terminology, are going to drift. They're not going to be alert, they're going to forget that they have to take control, they're going to be lulled, in other words, into an almost false sense of security, and in some ways, that is more dangerous. 


So what's the state of that debate today? I think it's somewhat clear what your answer for that is, but why are we going in this direction? 

Nimrod Nehushtan (10:32): 

Well, I think this debate is very much real today, and it's present in many discussions in the industry. And there have been numerous events, public events, in which specific behaviors of different systems made this discussion even more relevant, as to whether it's clear enough to drivers what they can and cannot do when the system is active. 

Nimrod Nehushtan (10:59): 

And I think this is an extremely important topic. The more autonomous these systems become, first of all, it needs to be clear to the driver what the dos and don'ts are once the system is active. And in addition to that, a driver monitoring system is a very, very important piece of the system. By having technology that can inspect and monitor the driver, you can pretty much anticipate events in which drivers will maybe abuse the system, or will simply lose concentration. 


So we have technologies like this today, and these are very, very important pieces of the entire system, because you can see if somebody's getting drowsy, losing concentration, or sneaking a peek at their phone even when they're not supposed to, and you can disengage, or honk the horn, or whatever it may be, and at least mitigate this risk. So, at least in our view, it's not enough just to say so in the terms and conditions when you buy the car. You also need an active system that can continuously inspect the driver and make sure the system is being used properly. 
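The escalation logic described here, warn first, then disengage and fall back to a minimum-risk maneuver, can be sketched roughly as follows. Every signal name and threshold below is a hypothetical illustration, not Mobileye's actual driver-monitoring interface:

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    """Illustrative signals a driver-monitoring camera might produce."""
    eyes_on_road: bool
    drowsiness_score: float  # hypothetical scale: 0.0 fully alert .. 1.0 asleep

def monitoring_action(state: DriverState, eyes_off_allowed: bool) -> str:
    """Escalate from no action, to a warning, to disengagement."""
    if state.drowsiness_score > 0.8:
        # Driver likely unresponsive: hand control to the system's
        # minimum-risk maneuver (e.g. reach a safe stop).
        return "disengage"
    if not state.eyes_on_road and not eyes_off_allowed:
        # Eyes off the road while in an eyes-on mode: alert the driver.
        return "warn"
    return "ok"
```

The key design point is that the check runs continuously against the currently active mode, so "what the driver may do" is enforced by the system rather than left to the terms and conditions.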

James Kotecki (12:15): 

Right. It seems like there's the, "Well, I told him not to do it, and he did it, but he should have known better." But of course, the consequences of that can be significant. 


I also wonder if the necessity of the approach you're outlining, which sounds like gradually ramping up the amount of assistance you give a driver until you can take the driver out of the seat completely, I wonder if that's necessitated by the fact, and you tell me if I'm wrong, that fully autonomous, driverless operation is just very hard to achieve on a practical level. For years, we've seen videos of cars with no drivers driving around in certain places, maybe with a bit of marketing polish on them, but it seems to me that this dream of not having a driver in the car has been a lot harder to achieve than people might have thought five or ten years ago. Is that a fair assessment of where things are in the industry today? 

Nimrod Nehushtan (13:09): 

Well, I think it's pretty safe to say that there is definitely some level of realization that some of the challenges previously believed to be simple turned out to be more complicated. But I don't think the question of when we will have driverless cars everywhere is necessarily the right one. It's a very tricky one, and you can argue it one way or the other. I think a more interesting debate is: what will be the meaningful milestones on the path toward that end state? 


And I think what might be understated in the discussion is that there are a few major milestones on the path toward driverless cars that will completely change how we consume transportation and how we experience the car. And these are much more imminent and much more realistic today, arguably not decades away. Regardless of when driverless cars become as prevalent as previously predicted, I think that in the next three to five years we will see major leaps in what commercially available systems offer to drivers. I'm talking about the hands-off applications that will provide new levels of comfort, much less anxiety, and, we also believe, better overall safety for drivers. 


And then, conditional autonomy, which is practically eyes-off but in limited conditions, can be extremely useful. So imagine a scenario where you enter your car, select a destination in the navigation system, and once you start driving, you drive hands-off. The system takes you from your house and drives the small residential streets fully autonomously while you simply supervise it. Then, once you enter the highway, it enters a new stage, which is eyes-off. You get a notification that says, "Okay, now you can take your eyes off as well." And all the way along the highway, including all the different use cases and scenarios, like overtaking a truck, obstacles, blocked lanes, and so on, it's fully autonomous in a way that lets you be eyes-off. Then, as you approach the end of the highway, a few moments before, there is a procedure in which control is transferred back to the driver, and you continue with your journey. 


That alone, that segment of autonomous, eyes-off driving on a highway, is extremely useful, and it's a complete game changer. Think about the amount of time we waste today driving on highways in traffic jams, and it's only getting worse. Think about the accidents we can prevent at high speeds, which are so often fatal. The overall impact on society, even if we only provide this level of autonomy, is night and day compared to today. And this is within reach. We are actively working on, and busy solving, the problems on the way to this product, and we see it launching within the next three to five years. So we don't think of this as a far-future type of product. Just imagine having this kind of product available in the market. I think it's a real game changer. 

James Kotecki (16:38): 

And do you think we'll see those milestones show up in the downstream statistics, in terms of productivity statistics or the accident statistics you mentioned? Is there a data-point milestone you're looking for to say, "Okay, this is a sign that the gains you're talking about have actually arrived and been locked in"? 

Nimrod Nehushtan (17:01): 

Yeah, I think there are, and I think the most obvious one will be the amount of time people waste, on average, while driving. 

Nimrod Nehushtan (17:10): 

This can be reduced. If you think about the common journey or commute, a significant portion of it is on highways. A lot of the traffic jams are concentrated on highways, and long stretches of most journeys are on highways. This is how the road network is built. So I think the amount of time people waste, not just sitting in the car, but wasting time while they're in the car, will be reduced. 


And also, safety. At least in our view, the threshold to launch these products is to be orders of magnitude better than humans, statistically, in terms of the probability of a fatal accident or any type of accident. These systems are not going to be accepted by society just by being as good as humans. They need to be orders of magnitude better. So in terms of the overall impact on society, the more prevalent these systems become, the more you will see a drop in the number of fatal accidents. And as we all know, car crashes today are one of the leading causes of early fatalities. 

Nimrod Nehushtan (18:24): 

So there is no way to reduce that to zero without taking the driver out of the equation. And the more autonomous the systems become, and the more drivers are out of the equation for longer and longer portions of the drive, the more you will see the number of fatal accidents drop dramatically. I think a good example is aviation. If you compare today's statistics on being involved in a fatal plane crash to those of 50 years ago, they're orders of magnitude different. And one of the biggest revolutions in aviation was the level of automation that entered the cockpit. Fifty years ago, you could board a plane and the pilot might say, "Listen, folks, I'm lost. I don't know where we are. I hope we have enough gas." As more automation, sophistication, and compute entered planes, the pilot's room for error was mitigated. 

James Kotecki (19:23): 

Another data point we could look at, as this technology increases in terms of prevalence and adoption, is people's attitudes about using it. And so, do you look at data for that, for people's willingness to take their hands and eyes off the road? Is that changing, and what do you see as far as the trends there? 

Nimrod Nehushtan (19:45): 

I think that's a very good question. I've been personally using our cars on a day-to-day basis, and once you get used to it, and it takes a relatively short amount of time, it's pretty hard to imagine going back to driving manually again. I think there is a level of trust that needs to be built between the user and the system. After a few hours of driving, you experience the system in different scenarios, you understand how to engage with it, you develop a certain level of trust, and then you get used to the comfort. 


Now, I'll give an example: think about cruise control. It's so standard today that it's available in every car, but look back 15 years, and not everyone took it lightly when it came to trusting the car to manage the gas and brake pedals. Today, you don't even think about it. When we think about enabling these systems, it will be the same process. After a few hours of driving and seeing the performance, the limitations and the strengths, you start to understand that, in many ways, this system is actually better than you, and you would actually prefer the system to be driving in certain scenarios, in pretty much most scenarios, once you get used to it and see it working in the field. And it's not a theory. This is what happens in practice with people who are starting to dip their toes into this. 

Nimrod Nehushtan (21:20): 

Because we, as humans, are very, very bad at driving in mundane scenarios, like driving on a long highway for a few hours. We get exhausted, we lose concentration. And you see that the system excels in these situations, and it's simply easy for you to let go, because you see how much more reliable the system is compared to you driving back from work, tired, with a two-hour commute. 

Nimrod Nehushtan (21:51): 

It's pretty easy, because the value that you get from the system is so high, and after a few hours of gaining trust, it's a no-brainer. 

James Kotecki (21:59): 

It's one thing to get used to it in your own car, and societal acceptance is maybe a separate question, I'm not sure if you agree with that. But I wonder, because you mentioned having to be, or wanting to be, orders of magnitude better than a human driver just to launch your technology. If you looked at it from a purely mathematical point of view, being just one percentage point better than a human driver would be better, from a raw mathematical perspective. But of course, there's this perception, and you mentioned that some of these cases made the news: if a robot car crashes and someone is killed, that's headline news, and people may develop a mistrust over it, whereas the day-to-day crashes involving humans that kill many people don't make the news. So how does human perception play into this need to be so much better than human drivers? 

Nimrod Nehushtan (22:52): 

Yeah, I think this is a very important question. One of the fundamentals in our autonomous vehicle stack, in our offering, is the belief that the tolerance for errors is different in different circumstances. I'll explain. We think it's very likely that nobody will tolerate errors that come from wrong decision-making. For example, as a human, if somebody decides to run somebody else over, actively and on purpose, it's a different felony than if something unexpected happened by accident and you could not have done anything better, even though it still led to a crash. So when it comes to errors caused by wrong decision-making, these will not be tolerated. 


And I think, maybe unlike most other companies working on autonomous vehicles, we believe that decision-making, the driving policy responsible for the dos and don'ts of the system, needs to be a very transparent and easy-to-understand element. It cannot be a black box driven by heavy AI, because it's very hard to understand why an AI system decided to do what it did at some point in time. It's very hard to debug it, to read into the code, and to understand its policy. If you have a transparent model, and our driving policy is based on our proprietary Responsibility-Sensitive Safety concept, RSS, on which we have published white papers you can read online, you can understand exactly the rules by which we are coding our system to behave. We can also easily understand why the system decided to do what it did at any point in time. And therefore, we can prove that there cannot be wrong decision-making that leads to dangerous events because of the system. 


And this is a critical element in gaining the trust of society at large. We're actively working on exposing this concept and creating a consensus that these elements need to be transparent, and maybe there is a call for a common standard in the industry around that. So at least you can say, "This is how the system decides to do what it does. These are the rules of the game." It's not an opaque statistical approach that says, "We hope we used enough data to train our AI system, and we hope it's statistically good enough." We think that when it comes to decision-making, there is no room for hope. It needs to be definitive, and this is a core element of our system. 
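The RSS concept is publicly documented, and its core rules reduce to closed-form checks rather than learned behavior, which is what makes the policy auditable. As one illustration, here is a sketch of the safe longitudinal following distance from the public RSS formulation. The parameter names and default values below are assumptions chosen for illustration; a real deployment would calibrate them per vehicle and jurisdiction:

```python
def rss_safe_longitudinal_distance(
    v_rear: float,             # rear (following) vehicle speed, m/s
    v_front: float,            # front (lead) vehicle speed, m/s
    rho: float = 0.5,          # rear vehicle's response time, s (assumed)
    a_accel_max: float = 3.0,  # max acceleration during response time, m/s^2 (assumed)
    b_brake_min: float = 4.0,  # braking the rear vehicle guarantees, m/s^2 (assumed)
    b_brake_max: float = 8.0,  # hardest braking the front vehicle might apply, m/s^2 (assumed)
) -> float:
    """Minimum gap so the rear car can always stop without hitting the front car,
    even if the front car brakes as hard as physically possible."""
    # Worst case: rear car accelerates for the whole response time rho,
    # then brakes at its guaranteed minimum deceleration.
    v_after_response = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_after_response ** 2 / (2 * b_brake_min)
         - v_front ** 2 / (2 * b_brake_max))
    # A negative result means any gap is safe; clamp to zero.
    return max(d, 0.0)
```

Because the rule is a deterministic formula, an investigator can check, after the fact, whether the system ever chose an unsafe gap, which is exactly the transparency argument being made here.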


And when it comes to other kinds of errors, I think the more prevalent the systems become, the more we will see that, overall, the benefit to society is so dramatic, and the types of errors are going to be explainable and understandable. Again, to take an example from aviation: there is a certain probability that the wings will fall off a plane. If it happens and you read about it in the news, it's a tragedy, of course, but we all know it can happen with some probability. But if you read in the news that the pilot decided to crash the plane on purpose, it's a completely different story. So this is our approach, and these are the fundamentals and guidelines in our design of the system. 

James Kotecki (26:27): 

And of course, the probability of that example, the wings falling off the plane, is obviously vanishingly small, so we accept it as a society, and it would definitely be surprising if that were to happen. You mentioned industry standards. The next level up from that is regulatory and legal standards. Are there certain things, I believe you're coming to us from Israel right now, is that right? 

James Kotecki (26:50): 

So I'm here in the United States, but all around the world, lawmakers and regulators are going to have to be grappling with this. Where do you see folks who are leading on this issue? Do you see the regulatory and legal environment shaping up to where this stuff can be adopted on a mass scale and there is that standardization? 

Nimrod Nehushtan (27:13): 

I think there is definitely a discussion in the industry about this, about the need for regulation and standards. We're closely watching how it progresses, and we think it's going to be key to future large-scale launches of these products. And again, we're already seeing this: the [inaudible 00:27:33] collaboration and ongoing engagements in the market to promote this, to create some certainty, and to create standards that can then be used by technology companies, car companies, service operators, whoever it may be in the industry, to at least understand what the criteria will be to launch these products with some level of assurance behind them. And this will be very important for balancing the risks and dilemmas some companies face today because of the uncertainty around it. 

James Kotecki (28:08): 

Is there a regulatory track or a legislative philosophy that you are especially wary of? In other words, is there some politician out there saying, "We have to just ban all self-driving cars"? I haven't heard of that. It seems like we're debating philosophically within a range of possibilities, all of which would eventually lead us toward a more autonomous future, but you probably have a better sense of this than me. 

Nimrod Nehushtan (28:32): 

No, I'm not aware of any deliberate, high-profile ongoing effort to completely ban autonomous vehicles altogether. Looking at the long term, I think we have to have these technologies and these products. They can have a dramatically positive impact on society, and the industry is investing in changing the way we consume transportation, so regulation also needs to join the party and help create the standards and certainty that make this a commercially viable proposition. So we're not aware of any discussion that tries to pull the ship in the opposite direction at this point. 

James Kotecki (29:22): 

So as you mentioned, your company is based in Israel. We're looking forward to having you join us in Las Vegas for CES 2024. You're going to have to come halfway around the world for that. Why do you come to CES and what are you planning for CES 2024? 

Nimrod Nehushtan (29:34): 

So first of all, CES is an amazing platform for us, not only to see our colleagues in the industry, but also to get up to speed with the latest technologies, meet startups and entrepreneurs, and see the different innovations coming in many, many different types of markets. We are always excited about the opportunity to meet our customers and counterparts, and it's very, very effective for us, so it's a no-brainer to come halfway around the world for CES, and we plan to do that again in 2024. This year, we will again show the advancements in our technology and the latest status of the business, and share our vision and progress on the autonomous vehicle journey. So there's a lot to look forward to, and we are very excited about 2024. 

James Kotecki (30:23): 

Thank you so much for joining us. Nimrod Nehushtan, Mobileye, really appreciate you being here on CES Tech Talk. 

Nimrod Nehushtan (30:31): 

Thank you very much, James, for having me. 

James Kotecki (30:33): 

And thank you so much for listening and/or watching. That's our show for now, but there's always more tech to talk about, so please subscribe to this podcast so you don't miss a moment. You can get even more CES and prepare for Vegas on the CES website. Our show today is produced by Nicole Vidovich and Mason Manuel, recorded by Andrew Lynn, and edited by Third Spoon. I'm James Kotecki, talking tech on CES Tech Talk.