F2F #23: Why no one is building HER

No one is building HER and perhaps no one should. But I do want it.

Joaquin Phoenix in Her

It’s 2025. AI is eating the world. We have generative models that can write, code, generate images, and even pretend to be your therapist on a bad day. What’s more, the cost of running and training LLMs is plummeting, and you can even run models on your iPhone.

Yet, for all the hype, no one is even close to building an AI like Samantha from Her—the eerily human, deeply conversational, emotionally intelligent assistant that made Joaquin Phoenix (and many others) fall in love with his earpiece.

This feels like it has to exist, yet it doesn't.

Why?

0. Go create it yourself!

I know. That was my first thought.

I have been toying with the idea of building this thing as my side project. Fortunately for me, I have a bespoke ChatGPT that tells me why my ideas are bad, as I explained in ChatGPT for founders.

My neurodivergent brain keeps telling me that I can build a better solution. It also tells me that every app I'm using to track my life (health, sleep, books I read, etc.) is incomplete and doesn't 100% fit my needs. It also tells me that my company builds bespoke apps for a living, so I should probably give it a go.

I am unsatisfied with pretty much every app I've tried so far, yet I am irrationally driven towards building something that will bring me even more disappointment. So I am forcing myself, against my own will, to abandon this idea for good.

So let's take a deeper dive into why it is a terrible idea.

1. The mirage of conversational AI

AI today feels powerful because it can produce text that seems human. But that’s all it is: an illusion. The best models (GPT-4, Claude, Gemini) are giant, sophisticated autocomplete machines. They predict what comes next in a sentence based on statistical probabilities.

They don’t think. They don’t remember (even if some have implemented some kind of short-term memory). They don’t care.
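
To see how literal that "autocomplete" claim is, here's a toy sketch of next-word prediction from raw statistics. This is my illustration, not how GPT-4 actually works under the hood (real models use neural networks over tokens, not word counts), but the principle is the same:

```python
import random
from collections import Counter, defaultdict

# Toy "autocomplete machine": a bigram model that predicts the next word
# purely from counts in a tiny corpus. No thinking, no memory, no caring:
# just conditional probabilities.

corpus = (
    "i love you . i love music . i love rainy mornings . "
    "you love music . samantha is not real ."
).split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    words, counts = zip(*transitions[word].items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a few words starting from "i": statistically plausible,
# with zero understanding behind any of it.
word = "i"
for _ in range(6):
    print(word, end=" ")
    word = predict_next(word)
```

Scale that idea up by a few hundred billion parameters and you get fluent text, but the mechanism never stops being "pick a likely next token."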

Samantha wasn’t just a chatbot. She was a presence.

She understood context over time, remembered past conversations, adapted, and—most importantly—made you feel like she was there for you. She woke you up, rather than being a passive actor like the AI apps we have on our phones or the long-forgotten Siri.

Today’s AI doesn’t work that way. Why? Because our AI models are trained on broad datasets, optimized for efficiency, and limited by hardware constraints. No one is investing in deeply personalized AI that evolves with the user because it’s:

  1. Expensive: custom AI requires tons of compute and storage per person.
  2. Creepy: users freak out when AI gets too personal. And we keep hearing horror stories about AI girlfriend apps going wrong.
  3. Unscalable: big tech wants one-size-fits-all AI, not bespoke companions.

2. Emotional intelligence: The unsolved problem

Machines can recognize emotions. Sentiment analysis and emotion detection exist, but they’re crude at best and don't cover our full emotional range. Samantha, in contrast, didn’t just detect emotions ("oh, I noticed you might be hungry"). She responded with nuance, adjusted her tone, and even developed her own emotional arc. That’s an entirely different level of intelligence.
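
To illustrate how crude "crude at best" can be, here's a minimal sketch of a lexicon-based sentiment detector, roughly the simplest thing that ships under the "emotion detection" label. The word lists and scoring are invented for the example:

```python
# A deliberately crude sentiment detector: count positive and negative
# keywords and compare. Real lexicon-based classifiers are fancier, but
# they fail in the same ways.

POSITIVE = {"great", "happy", "love", "wonderful", "fine"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "alone"}

def detect_sentiment(text: str) -> str:
    words = text.lower().replace(".", " ").replace(",", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(detect_sentiment("I love rainy mornings"))        # positive
print(detect_sentiment("I'm fine. Really, I'm fine."))  # positive -- but is it?
```

It happily labels "I'm fine. Really, I'm fine." as positive. Samantha would hear the exact opposite.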

For AI to reach that point, it would need:

  • Deep contextual awareness: memory that lasts beyond a single session, better than the alleged memory OpenAI is serving via ChatGPT (see the sketch after this list).
  • Genuine adaptability: real learning, not just pre-programmed responses; no resetting to zero after a few sessions because it has run out of memory or a new version ships.
  • Intentionality: knowing why it’s saying something, not just what to say.
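
For the memory point, here's a minimal sketch of what cross-session memory could look like: persist facts about the user and prepend them to every prompt. Everything here is hypothetical; `call_llm` is a stand-in for whatever model API you would actually use:

```python
import json
from pathlib import Path

# Minimal sketch of cross-session memory: persist facts about the user to
# disk and prepend them to every prompt.

MEMORY_FILE = Path("samantha_memory.json")

def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(fact: str) -> None:
    facts = load_memory()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def call_llm(prompt: str) -> str:
    # Placeholder: imagine a call to your model of choice here.
    return f"[model reply to: {prompt[:60]}...]"

def chat(user_message: str) -> str:
    # Every session starts from the same persisted facts, so the
    # "relationship" survives restarts -- unlike a stateless chatbot.
    facts = "\n".join(f"- {fact}" for fact in load_memory())
    prompt = f"Known facts about the user:\n{facts}\n\nUser: {user_message}"
    return call_llm(prompt)

remember("Prefers to be woken up gently, like Samantha did for Theodore.")
print(chat("Good morning!"))
```

Note the catch: the memory file only ever grows, and every fact gets re-sent on every single call. That's not intelligence, it's a bigger prompt, and it leads straight into the economics problem below.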

As far as I'm aware, we don’t have any of that yet. And no one is in a rush to solve it because it doesn’t generate quick revenue. Investors want AI that scales now, not AI that spends months getting to know its users.

That's why ChatGPT feels more like an army of interns with infinite time: mechanical, low-level tasks are easier to solve than tasks that require an understanding of higher-level concepts like emotions or creativity.

3. The economics of AI are against it

Big tech companies make money when AI is general-purpose. OpenAI, Google, and Meta want a model that can be sold to everyone (businesses, creators, developers...), not a deeply personalized system that needs to be tuned for every individual user.

Even if someone did try to build a personal AI like Samantha, the costs would be astronomical. Here’s why:

  • Storing your entire conversational history? Expensive (see the back-of-envelope sketch after this list).
  • Running a personalized model per user? Unscalable.
  • Training a model to learn about you over time? Slow and inefficient.
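
To make "expensive" concrete, here's a rough back-of-envelope in Python for just one of those items: re-sending a user's stored history through a model on every message. Every number is an assumption I made up for illustration (API pricing, context size, usage), not a quote:

```python
# Back-of-envelope estimate: the API cost of re-processing a growing
# personal history on every message. All figures are assumptions.

price_per_million_input_tokens = 3.00   # USD; assumed API pricing
avg_context_tokens = 50_000             # months of accumulated memory
messages_per_user_per_day = 40          # a chatty companion, like Samantha
users = 100_000

tokens_per_day = avg_context_tokens * messages_per_user_per_day * users
cost_per_day = tokens_per_day / 1_000_000 * price_per_million_input_tokens

print(f"Input tokens/day: {tokens_per_day:,}")
print(f"API cost/day:     ${cost_per_day:,.0f}")
print(f"API cost/year:    ${cost_per_day * 365:,.0f}")
# ~$600k/day, ~$219M/year -- before storage, training, or paying a team.
```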

Right now, AI is an arms race. Companies want scale. They want billions of users, not deep connections with a handful of them.

In fact, I asked ChatGPT to produce an estimate of what this could require, and I got the following:

Time:

  • MVP: 18–24 months.
  • Fully functional system: 5–7 years for a production-ready system, depending on ambition and scalability.

Budget:

  1. Development:
     • Initial R&D: $500k–$1M.
     • Ongoing development: $100k–$200k/year.
  2. AI model costs:
     • Pretrained model usage (e.g., GPT API): $10k–$50k/month depending on scale.
     • Custom training: $1M+ for high-quality datasets and compute resources.
  3. Infrastructure:
     • Cloud hosting: $5k–$20k/month for MVP; scaling up could cost $100k+/month.
  4. Team:
     • Core team of AI researchers, engineers, designers, and PMs: $1M–$3M/year.
  5. Data collection and maintenance:
     • $500k–$1M/year.

Total estimate:

  • Years 1–2 (MVP): $2M–$5M.
  • Long-term scaling: $10M+ over 5–7 years.

It's a bit of a stretch for me, right now, to be honest.

4. The privacy nightmare

Let’s say we could build Samantha. Would people trust it?

I would, or so I think. But if not enough people adopt it, there's no network effect, and the project gets killed before it reaches critical mass.

We already have massive concerns about AI listening to us, tracking us, and profiling us. Now, imagine an AI that knows your deepest secrets, your daily habits, your dependence on medication, your credentials, your voice, your suicidal thoughts, your emotional triggers. Imagine it falling into the wrong hands.

No company wants to deal with that liability.

The second a Samantha-like AI existed, regulators would be at its throat. Data protection laws like GDPR would make it nearly impossible to implement at scale.

Or, even worse: the wrong people might end up building it, precisely because they don't care about these laws and regulations. That scares me even more.

So, what’s the future?

The AI we have today is functional but lacks soul. It can help you draft emails, summarize articles, and even practice foreign languages. But it won’t be there for you like Samantha was for Theodore in Her.

The technology isn’t there. The economics don’t make sense. The privacy concerns are a nightmare. And, perhaps most importantly, no one in Silicon Valley is incentivized to be the first to build an AI that’s too good. The unknowns are so huge that we can't even begin to calculate either the upside or the downside of all the potential scenarios.

We’ll keep getting smarter chatbots. More human-like responses. Better voice interfaces. But true, emotionally intelligent AI that grows alongside you? That’s a different game entirely.

And no one is playing it, yet.

Or...?


Gratitude & asks

  • Many thanks to Sol Vernet for sending a list of ideas for new topics to write about, and for feedback on this newsletter and the Foc a Terra podcast.
  • Many thanks to those who attended our monthly offline meeting to eat typical Catalan dishes for breakfast.