The $1.8 Billion Question No One Is Asking
A solo founder built a $1.8B telehealth company with AI tools and one employee. The story is real. But the lesson most people are drawing from it might not be.
Matthew Gallagher launched a telehealth company from his house in LA with $20,000 and a dozen AI tools. Eighteen months later, Medvi is on track to do $1.8 billion in sales this year. His only employee is his brother. The story is real. The question worth sitting with is not whether you could do what he did. It is whether you should.
The Main Story: One Person, $1.8 Billion, and What AI Actually Proved
What happened: Gallagher built Medvi as a middleman for GLP-1 weight-loss drugs, using existing telehealth platforms to handle the doctors, pharmacies, and prescriptions. His job was the front end: marketing, branding, customer experience. He used AI for nearly everything else. ChatGPT and Claude wrote code. Midjourney and Runway generated ads. ElevenLabs handled customer service calls. He even built an AI clone of his own voice for personal scheduling. The result: $401 million in revenue in 2025, its first full year, at a 16.2% net margin. Hims & Hers, a public competitor with more than 2,400 employees, posted a 5.5% net margin.
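Those margin figures are worth running by hand. A minimal back-of-envelope sketch, using only the numbers quoted in the story (the team size of two is the founder and his brother; Hims & Hers revenue is not given here, so only the margin gap itself is computed):

```python
# Back-of-envelope check on the figures quoted above.
# All inputs come from the story; team_size assumes Medvi is
# just Gallagher and his brother.

medvi_revenue = 401_000_000   # 2025 revenue, USD
medvi_margin = 0.162          # net margin
team_size = 2                 # founder + brother

net_income = medvi_revenue * medvi_margin
net_income_per_person = net_income / team_size

print(f"Medvi net income: ${net_income / 1e6:.1f}M")
print(f"Net income per person: ${net_income_per_person / 1e6:.1f}M")

# Hims & Hers net margin, for the gap only (its revenue isn't quoted here).
hims_margin = 0.055
print(f"Margin gap: {medvi_margin / hims_margin:.1f}x")
```

Roughly $65 million in net income split between two people, at nearly three times the margin of a 2,400-person public competitor. That is the leverage gap the rest of the piece is about.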
Why it matters: Sam Altman predicted in 2024 that AI would enable a solo founder to build a billion-dollar company, and Gallagher is the first real proof of concept. The leverage gap between a resourced team and a determined individual has closed further than most people have noticed. That is genuinely significant. The barrier that kept most people from building anything real has dropped to the floor.
The TWO angle: Here is what the other newsletters will not say: Gallagher’s story is also a story about hallucinated drug prices, fake doctor accounts running Facebook ads, and AI-generated before-and-after photos that were later scrubbed. The same AI stack that powered the growth powered the deception at the same scale. Proverbs 4:23 says, “Above all else, guard your heart, for everything you do flows from it.” The tools do not have a conscience. The operator does. What AI proved here is not that anyone can build a billion-dollar company. It proved that whoever you are, the tools will multiply it. The real question is not “could I do this?” It is “what am I becoming while I build?”
Today’s Movers
Anthropic cut off third-party agent platforms from Claude subscription plans. Tools like OpenClaw now require separate pay-as-you-go billing. Anthropic says agent usage generates nonstop requests that flat-rate pricing was never built to absorb. The tension is real: the most valuable use cases for Claude are exactly the ones that make the subscription model unsustainable. For builders, the practical implication is clear. If your workflow depends on a third-party Claude harness, budget for API costs or reconsider your stack now, not when the change hits your account.
Microsoft quietly buried this in Copilot’s terms of service: “Copilot is for entertainment purposes only.” The same product Microsoft is selling to enterprises at scale. A spokesperson called the language “legacy.” But the gap between what these companies sell and what they publicly admit about reliability is real, and it belongs in your mental model every time you hand a high-stakes task to an AI tool. The discernment required to use these tools well is not a soft skill. It is the skill.
Netflix open-sourced VOID, a framework that removes objects from video while rewriting the physics around the edit. Existing tools paint over objects. VOID reasons about what changes downstream when something is removed: a balloon floats when the holder disappears, blocks stay put when one in a chain is erased. Netflix preferred VOID’s results nearly two-thirds of the time against six baseline models including Runway. This is early, but it points toward video editing tools that work the way a skilled editor thinks, not just the way a content-aware fill works.
Anthropic acquired Coefficient Bio for roughly $400 million, folding the team into its healthcare and life sciences work focused on drug discovery. The timing, coming the same week as the Claude pricing change, suggests Anthropic is making deliberate bets on where sustainable revenue actually lives. Healthcare is one of the few domains where AI’s leverage is proportional to the stakes, and where regulatory requirements create a moat that pure software companies cannot easily replicate.
One Tool Worth Knowing
Granola for iPhone call notes. Most people know about AI notetakers for video calls. Fewer know you can run one on regular phone calls. Granola installs on your iPhone, listens in the background during any outbound call, and delivers a summary with action items after you hang up. It works on inbound calls too, though it captures only your side of the conversation. If you spend meaningful time on phone calls and leave them trying to reconstruct what was said, this is one of those tools that quietly removes a real friction point. Check your state's recording consent laws before you set it up.
Pause and Consider
“Above all else, guard your heart, for everything you do flows from it.” — Proverbs 4:23
The companies selling you AI are also the ones telling you not to trust it. The builders using AI are the ones who will have to decide what trust actually requires. Gallagher’s story proved that AI multiplies whoever you already are — the ambition and the shortcuts alike, at the same scale. That is not a legal disclaimer. That is a character question. The tools do not have a conscience. The operator does. What you guard determines what you build.