Editor’s Note: Tesla’s “Robotaxi” event tomorrow will mark the beginning of a new era for autonomous transportation. It’s one that will have far-reaching implications across the economy.
By making autonomous vehicles a reality, Tesla is opening the door to tectonic disruption – and ushering in vast new opportunities. Tech investing expert Luke Lango believes he’s spotted one of them. My colleague from InvestorPlace shared the details in an urgent briefing video on Monday. You can watch the replay here.
Tomorrow’s AV technologies are on the verge of smashing into today, and the shift will be like nothing we’ve seen before. So, I’ve invited Luke here today to tell you more about those technologies.
– Charles
The Two Types of Self-Driving Tech Every Investor Should Know About
My friends with kids in college now tell me their children pretty much grew up thinking they’d never have to drive, because self-driving cars were always “five years away.”
My buddies with kids in high school now tell me the same thing.
The folks I know with middle school students, however… those kids have always known they would have to sign up for driver’s ed.
Silicon Valley can only tell us that self-driving cars are “five years away from being five years away” for so long before we start to doubt the story.
Still, I’ve been bullish on autonomous vehicles for a while now. The industry’s developments, after all, have been promising.
And here’s the thing… Maybe those kids in middle school now won’t have to learn to drive.
Those self-driving tech developments are rapidly moving from promise to reality.
You probably know about the rapid expansion of autonomous ride-hailing services in Phoenix and San Francisco. But did you know that Waymo now offers driverless rides to folks in Austin as well?
You may have heard about the rollout of autonomous trucking in Texas and Arizona. But did you know that, just last week, California Governor Gavin Newsom vetoed a bill that would have banned driverless trucks from operating on his state’s roads?
And with the upcoming launch of Elon Musk’s robotaxi tomorrow – Thursday, October 10 – I believe the stage is set for self-driving cars to begin transforming the $11 trillion transportation services industry.
(Watch the replay of my urgent broadcast now. In it, I show you how to position your portfolio to profit from this shift.)
Now, that’s all great information to know. But it doesn’t mean much on its own.
After all, understanding the tech behind a burgeoning megatrend is key to profiting from it.
So, to potentially turn the “Age of AVs” into a massive payday, we must first understand how self-driving cars actually work…
A Sensor Trifecta
At its core, a self-driving car is operated by a combination of sensors – the “hardware stack” – and AI-powered software – the “software stack.”
The car’s sensors gather information about its surroundings. Then, the AI software processes that data to determine whether the vehicle should accelerate, brake, change lanes, turn, etc.
And this all needs to happen virtually instantaneously.
Usually, the hardware stack is composed of three types of sensors: cameras, radar, and LiDAR. A typical self-driving car uses all three because each has strengths and weaknesses that complement the others nicely.
Let’s go through them one by one…
- Cameras collect visual data. They capture high-resolution images of the vehicle’s environment, similar to how our own eyes work. These cameras recognize various signs, lane markings, and traffic lights – and can distinguish between different objects, like pedestrians, cyclists, and vehicles. They excel at providing detailed visual information, which helps the car understand the context of its surroundings. But they tend to struggle in tough visual conditions, like low light or inclement weather.
- An AV’s radar sensors emit radio waves that bounce off objects and return to the sensor, providing information about the distance, speed, and movement of obstacles in the car’s vicinity. These sensors work well in all weather conditions (complementing cameras nicely), but they provide limited resolution and detail (where cameras excel).
- LiDAR – which stands for light detection and ranging – is essentially radar powered by lasers. These sensors emit laser pulses that bounce off surrounding objects and return to the sensor. By measuring the time it takes for the light to return, LiDAR can create a high-resolution 3D map of the vehicle’s environment. This provides accurate depth perception, enabling the car to understand the exact shape, size, and distance of surrounding objects. However, LiDAR doesn’t capture color or texture information (like cameras do).
In other words, AVs use cameras to see things. Radar senses how fast those things are moving. And LiDAR helps calculate the exact position of those things.
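To make that division of labor concrete, here’s a minimal Python sketch of the kind of data each sensor type produces. Every class and field name below is an illustrative assumption – no real AV vendor’s API looks exactly like this – but it captures what each modality contributes.

```python
from dataclasses import dataclass

# Illustrative stand-ins for each sensor's output -- these names and fields
# are assumptions for teaching purposes, not any vendor's actual API.

@dataclass
class CameraDetection:
    label: str                # what the object is: "pedestrian", "stop sign", etc.
    bounding_box: tuple       # where it sits in the image (x, y, width, height)

@dataclass
class RadarReturn:
    distance_m: float         # how far away the object is, in meters
    closing_speed_mps: float  # how fast it is approaching, in meters per second

@dataclass
class LidarPoint:
    x_m: float  # precise 3D position of one reflected laser pulse, in meters
    y_m: float
    z_m: float

# Cameras answer "what is it?", radar answers "how fast is it moving?",
# and LiDAR answers "exactly where is it?"
```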
Next-Gen Software in the Driver’s Seat
Self-driving cars use what is called “sensor fusion” to combine camera, radar, and LiDAR data, creating a complete, accurate, and reliable model of their environment.
For example, if a person crosses the road in front of an AV:
- The camera identifies the crossing figure as a person.
- The radar tracks the pedestrian’s speed to predict potential collisions.
- The LiDAR measures the pedestrian’s exact distance, shape, and movement.
Together, these sensors allow the car to make informed decisions, such as slowing down, stopping, or rerouting, ensuring safe and efficient navigation.
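Here’s a deliberately simplified Python sketch of that fusion for the pedestrian scenario. The function, the three-second threshold, and the sample numbers are all assumptions made up for illustration – real stacks fuse these signals probabilistically, many times per second – but it shows how the three readings combine into one decision.

```python
def should_brake(camera_label: str,
                 radar_closing_speed_mps: float,
                 lidar_distance_m: float) -> bool:
    """Fuse three sensor readings into one simple decision (illustrative only)."""
    if camera_label != "pedestrian":
        return False  # the camera says it isn't a person; other rules would apply
    if radar_closing_speed_mps <= 0:
        return False  # radar says the pedestrian is holding distance or moving away

    # LiDAR's distance divided by radar's closing speed gives time to collision.
    time_to_collision_s = lidar_distance_m / radar_closing_speed_mps

    # Brake if the fused estimate says we'd reach the pedestrian within ~3 seconds.
    return time_to_collision_s < 3.0

# Camera supplies the label, radar the closing speed, LiDAR the distance:
print(should_brake("pedestrian", radar_closing_speed_mps=8.0,
                   lidar_distance_m=20.0))  # True -- 2.5 seconds to impact
```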
But it can only make those decisions with the help of its software stack.
An AV uses a variety of software and methods to provide real-time intelligence about its surroundings. There are essentially five components to this software stack: perception, localization, prediction, planning, and control.
- The perception software uses sensor fusion, object classification, and semantic segmentation to create a comprehensive picture of a car’s environment.
- The localization software uses highly detailed maps and location data to place the car precisely in its environment.
- The prediction software leverages machine learning models to predict how things in the environment may act in different scenarios.
- The planning software takes the outcomes of the perception, localization, and prediction software to decide an optimal path for the car.
- The control software executes the planned action, controlling the car’s steering, acceleration, braking, etc.
Together, these hardware and software stacks create the technological foundation for self-driving cars.
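To tie the two stacks together, here’s a minimal Python skeleton of that five-step loop. Every function body and data structure below is a made-up stand-in – production AV software is vastly more complex – but the data flows in exactly this order: perceive, localize, predict, plan, control.

```python
# Illustrative skeleton of the five-component software stack. All names and
# data structures are assumptions for teaching purposes, not real AV code.

def perceive(sensor_data: dict) -> list:
    """Perception: fuse raw camera/radar/LiDAR data into classified objects."""
    return sensor_data["detections"]

def localize(sensor_data: dict) -> tuple:
    """Localization: place the car precisely on a high-definition map."""
    return sensor_data["position_estimate"]

def predict(objects: list) -> list:
    """Prediction: estimate where each object will be a moment from now."""
    return [{**obj, "predicted_path": "crossing"} for obj in objects]

def plan(position: tuple, predictions: list) -> str:
    """Planning: choose the optimal action given everything we know."""
    if any(p["label"] == "pedestrian" for p in predictions):
        return "brake"
    return "cruise"

def control(action: str) -> None:
    """Control: execute the plan via steering, throttle, and brakes."""
    print(f"Executing: {action}")

# One tick of the loop; a real car repeats this many times per second.
sensors = {"detections": [{"label": "pedestrian"}],
           "position_estimate": (37.7749, -122.4194)}
control(plan(localize(sensors), predict(perceive(sensors))))  # Executing: brake
```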
The Final Word
Of course, every company attacks the self-driving problem differently. But this is the general framework most follow.
As such, when looking to invest in the autonomous vehicle supply chain, it makes sense to look for the companies providing AVs’ critical components.
Find the strongest camera, radar, and LiDAR providers. Focus on the most promising software plays.
They’ll likely be the biggest winners in the Age of AVs.
In fact, if you’re hoping to get positioned for an era of AV-powered market gains, click here to watch the replay of my recent special video briefing. In it, I talk all about the quickly unfolding Age of AVs – and the investments to make.
This briefing is all about getting you prepared for Tesla’s Robotaxi launch tomorrow (which we expect will be huge). Though it’s about much more than that upcoming debut.
Indeed, in this replay, I detail all the recent groundbreaking developments in the autonomous vehicle industry, including how robotaxis are set to completely transform transportation, save millions of lives, and potentially put up to $30,000 a year in passive income in your pocket.
Plus, I’ll show you how to get my playbook on the best AV stocks to buy right now.
Click here to watch the replay now.
Luke Lango