Charles’ Note: Remember that ridiculous sock puppet from the Pets.com commercials of the late 1990s?
If you were investing back then, it’s seared into your memory forever.
That puppet’s appearance in the January 2000 Super Bowl commercial – and Pets.com’s bankruptcy just months later – became the unfortunate symbol of the excesses and absurdities of the dot-com bubble.
I don’t know that there are a lot of lessons to be learned from a sock puppet. But there are plenty of echoes of the dot-com bubble in today’s market… perhaps none as glaringly obvious as Global Crossing.
Early in the internet era, it became obvious that slow dial-up speeds were a major obstacle to growth. Rolling out high-speed internet would involve major capital spending on a previously unimaginable level.
Enter Global Crossing.
In 1997, former investment banker Gary Winnick led a group of entrepreneurs to launch the company. Its mission: Crisscross the world with a massive global network of undersea fiber-optic cables.
And it did!
By 2001, it had connected 200 major cities in 27 countries and laid an estimated 80,000 miles of cable.
But it wasn’t the only company doing this.
In the late 1990s, companies laid an estimated 80 to 90 million miles of fiber-optic cable. That’s enough to loop around the Earth more than 3,000 times… or make over 150 round trips to the moon.
These companies went on the largest capital spending spree in history, terrified of being left behind.
We know what happened next…
In March 2000, the tech bubble burst. And in 2002, only about 5% of the cables that had been laid were in use. The remaining “dark fiber” went unused for the better part of a decade until demand for video streaming from Netflix and YouTube finally caught up with the surplus capacity.
Global Crossing – the poster child of tech infrastructure spending of the dot-com boom – filed for the fourth largest bankruptcy in history at the time.
So why the history lesson?
Because four “hyperscaler” companies – Amazon (AMZN), Microsoft (MSFT), Alphabet (GOOGL), and Meta (META) – are expected to spend about $400 billion in AI-related capital spending this year. Total spending on AI infrastructure over the next few years is expected to total in the trillions of dollars.
But here’s the thing…
The future of AI isn’t the umpteenth new version of ChatGPT running on a distant server farm. It’s localized AI, built into your car… or into your robotic housekeeper.
AI models will be getting smaller, not bigger.
So, what does this mean for our portfolios… and our investment in AI stocks?
I’ll let tech visionary Luke Lango tell you. As Luke explains it, we’re on the verge of a major shift in the AI investment landscape… one that has major implications for our portfolios.
Take it from here, Luke!
What’s Wrong With Wall Street’s AI Bet…
By Luke Lango, Senior Investment Analyst, InvestorPlace
Everyone’s watching the wrong AI boom.
While Wall Street and Silicon Valley obsess over ChatGPT-5 – or how many exaflops xAI is hoarding – they’re missing the real earthquake rumbling beneath the surface.
The foundations of the AI world are about to crack… reordering the entire semiconductor supply chain.
That quake?
The silent, seismic shift from Large Language Models (LLMs) to Small Language Models (SLMs).
This is not theoretical. It’s happening now.
AI is leaving the cloud… crawling off the server racks… and stepping into the physical world.
Welcome to the Age of Physical AI
If the past five years of AI were about massive brains in the cloud that could pass the bar exam and write poetry, the next five will be about billions of tiny, embedded brains powering real-world machines.
Cleaning your house. Running your car. Cooking your dinner. Whispering insights through your glasses.
This is AI going physical.
The thing is, physical AI can’t rely on 500-watt datacenter GPUs… or wait 300 milliseconds for a round trip to a hyperscaler…
It needs to be:
Always on
Instantaneous
Battery-powered
Offline-capable
Private
And cheap
That means it can’t run LLMs like ChatGPT-5.
It needs Small Language Models (SLMs): Compact, fine-tuned, hyper-efficient models built for mobile-class hardware.
SLMs aren’t backup singers to LLMs. In the world of edge AI, they’re the headliners.
The new AI revolution won’t be televised. It’ll be embedded. Everywhere.
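If you want a feel for how little ceremony “local inference” involves, here’s a minimal sketch in Python using Hugging Face’s transformers library. It’s illustrative only: the model name is just one example of a sub-billion-parameter SLM, not a recommendation, and any compact instruction-tuned model would do.

```python
# A minimal sketch of on-device SLM inference with Hugging Face transformers.
# The model below is one example of a ~0.5B-parameter SLM; swap in any
# compact model that fits your hardware.
from transformers import pipeline

# Everything loads and runs on local hardware: no API key, no network
# round trip, no hyperscaler in the loop.
generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # example model, not a recommendation
)

# The entire request/response cycle happens on the local chip.
result = generator(
    "Summarize today's sensor readings in one sentence.",
    max_new_tokens=50,
)
print(result[0]["generated_text"])
```

That’s the whole deployment story: a few gigabytes on disk, a few lines of code, and no monthly bill to a cloud provider.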
The SLM Invasion Has Already Begun
You may not have noticed this SLM invasion yet. That’s because the companies deploying small language models aren’t bragging about billions of parameters or trillion-token training sets.
Instead, they’re shipping products.
Apple’s (AAPL) upgraded Siri? Runs on an on-device SLM.
Meta’s (META) Orion smart glasses? Powered by locally deployed SLMs.
Tesla’s (TSLA) Optimus robot? Almost certainly driven by an ensemble of SLMs trained on narrow tasks like folding laundry and opening doors.
This is not a niche trend.
It’s the beginning of the great decentralization of artificial intelligence – from monolithic, cloud-based computing models to lightweight, distributed intelligence at the edge.
If large language models were the mainframe era of AI, small language models are the smartphone revolution.
And just like in 2007, most incumbents don’t see the freight train coming.
To be clear: LLMs are remarkable – but they do not scale down to the edge.
You cannot put a 70-billion-parameter model in a toaster. You cannot run ChatGPT-5 on a drone.
SLMs, by contrast, are purpose-built for the edge. They:
Operate at sub-100 millisecond latency on mobile-class chips
Fit into just a few gigabytes of RAM
Deliver reliable performance for 90% of AI agent tasks (instruction following, tool use, commonsense reasoning)
Can be fine-tuned at low cost for narrow applications
They are not omniscient.
They are the blue-collar AI that gets the job done.
And in a world that needs AI agents in cars, robots, glasses, appliances, manufacturing lines, kiosks, and wearables – reliability and cost will beat generality and elegance every single time.
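Some quick back-of-envelope math shows why those bullets hold up. This is a sketch under my own assumptions (model weights dominate the memory footprint; edge models ship quantized to about 4 bits per parameter), not vendor specs:

```python
# Back-of-envelope memory math: why SLMs fit on mobile-class hardware
# and a 70B-parameter LLM does not. Assumes model weights dominate RAM use.

GIB = 1024**3  # bytes per gibibyte

def weights_gib(params_billions: float, bits_per_param: int) -> float:
    """Approximate RAM needed just to hold the model weights."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / GIB

# A 3B-parameter SLM quantized to 4 bits: about 1.4 GiB of weights.
print(f"3B SLM @ 4-bit:   {weights_gib(3, 4):.1f} GiB")

# A 70B-parameter LLM at 16-bit precision: about 130 GiB. Not toaster-sized.
print(f"70B LLM @ 16-bit: {weights_gib(70, 16):.1f} GiB")
```

That roughly 90-to-1 gap is the entire edge-AI argument in two numbers.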
Now here is where it gets interesting…
The Investment Implications: GPU Utopia Cracks
For the past two years, the core AI investment thesis has been simple: “Buy Nvidia (NVDA) and anything tied to GPUs – because large language models are eating the world.”
If small language models begin to dominate AI deployment, that model will start to break down.
Why?
Because SLMs don’t need data centers. They don’t need $30,000 accelerators. They don’t demand 50 megawatts of power and cooling. They don’t even rely on OpenAI’s API.
All they need is efficient edge computing, a battery, and a purpose.
And that changes everything.
The center of gravity in AI shifts – from cloud-based GPUs and training infrastructure to edge silicon, local inference, and deployment tooling.
This does not mean Nvidia loses.
It means the next trillion dollars in value could accrue somewhere else.
The New Infrastructure Stack for Physical AI
Let’s get specific. The LLM world runs on one kind of infrastructure. The SLM world needs a completely different stack.
Critically, SLMs are inexpensive to replicate and don’t need constant API calls to function.
That is a direct threat to the rent-seeking software-as-a-service (SaaS) AI model… and a powerful tailwind for device original equipment manufacturers (OEMs) and edge computing firms.
From that divide, you can start to see how this tectonic shift may play out across public markets.
Qualcomm (QCOM) looks like a major winner. Its Snapdragon AI platform already runs many SLMs. It’s the ARM of the edge AI world.
Lattice Semiconductor (LSCC) could also benefit. The company produces tiny FPGAs – ideal for AI logic in low-power robots and embedded sensors.
Ambarella (AMBA) is another potential standout, with its AI vision SoCs used in robotics, surveillance, and autonomous vehicles.
Among the Magnificent Seven, Apple appears especially well positioned. Its Neural Engine may be the most widely deployed small AI chip on the planet.
Vicor (VICR) also deserves mention. It produces power modules optimized for tight thermal and power envelopes – key to edge AI systems.
On the other side of the ledger, several beloved AI winners could find themselves on the wrong side of this transition.
Super Micro (SMCI) may be vulnerable if inference shifts away from data centers and server demand softens.
Arista Networks (ANET) could face pressure as data center networking becomes less critical.
Vertiv (VRT) might see growth flatten if hyperscale HVAC demand slows.
Generac (GNRC) may be exposed to declining demand for backup power if the SLM trend reduces reliance on centralized computing.
This is how paradigm shifts happen.
Not overnight – but faster than most incumbents expect… and with billions in capital rotation along the way.
Build a Portfolio for the SLM Age
If you believe – like we do – that AI is moving from “text prediction in the cloud” to physical intelligence in the world, then your portfolio needs to reflect that.
Instead of chasing the same three AI megacaps everyone owns, focus on:
Edge chipmakers
Embedded inference specialists
Optics and sensing providers
Power management innovators
Robotics component suppliers
The mega-cap GPU trade isn’t dead. But it’s not the only game in town anymore.
In short: SLMs unlock the era of “physical AI.” That includes everything from smart factories to warehouse bots to humanoid machines like Tesla’s Optimus.
Charles Lewis Sizemore is a market veteran of 20-plus years. He holds an MSc in Finance and Accounting from the London School of Economics and a BBA in Finance from Texas Christian University in Fort Worth. He is a keen market observer, economist, investment analyst, and prolific writer, dedicated to helping people achieve financial freedom through smart investing.