So You Want to Be an AI PM
There’s no such thing. Here’s what you actually need to know.
Last week I stood in a classroom at Harvard Business School, helping Professor Sara Torti lead a case discussion with MBA students preparing for Product Management careers. The case forced students to think through common tradeoffs when designing AI-based products. The room was full of future PMs wrestling with a question I face daily at Google: when does AI actually solve a problem, and when is it just a shiny hammer looking for a nail?
I walked away thinking about how much the PM interview landscape has changed. And how much it hasn’t.
This is a topic I’ve spent a lot of time on. At Google, I’ve served on the Product Management hiring committee for more than ten years. I’ve interviewed hundreds of PM candidates. I started a program called Path to PM to help people inside Google transfer into product roles. That program has since grown into one of the company’s major talent pipelines. Today I serve as a hiring lead on the PM Steering Committee.
For years I’ve helped run Google’s PM MBA internship program. Not just because I’m a PM with an MBA. I do it because I believe in recruiting smart, ambitious, multi-faceted people into this career. I know from my own time at MIT Sloan that business school is full of brilliant, capable people with the potential to lead. The challenge is helping them see that potential and channel it into effective product work.
So when Professor Torti asked me to help teach her class, I jumped at it. These students are exactly who I want entering the field with open eyes, armed for impact.
There Is No Such Thing as an AI PM Interview
Here’s the truth I shared with those students: there is no such thing as an AI-specific PM interview anymore. Every interview from here on out is an AI interview. The technology has become too fundamental to treat as a specialty. Hiring managers are not looking for candidates who can recite model architectures. They want people who can identify where AI creates outsized impact versus where simpler solutions work better.
The trap most candidates fall into is starting with the technology. They hear “AI” and immediately jump to solutions. The best candidates do the opposite. They start with user problems, then work backward to determine whether AI is the right lever. Sometimes it is. Sometimes a rules-based system or a well-designed workflow accomplishes the same goal at a fraction of the cost and complexity.
The VALUE Framework
When approaching any AI product question, I use a framework I call VALUE. It forces you through the five layers of reasoning that separate strong AI product thinking from “solution in search of a problem” thinking.
V = Value Proposition (The "Why AI" Question)
Do not start with the technology. Start with the user’s pain. AI is a powerful tool, but it is not a product. Your job as a PM is to be ruthless about one question: is AI the only or best way to solve this user problem?
Ask yourself: What specific problem am I solving? Why is this hard to solve without AI? If the AI worked perfectly, what would the user actually see? What is the minimum viable version?
The biggest cause of failure in AI initiatives is what I call the “Solution in Search of a Problem.” Teams fall in love with a new model and spend months building something that fails to deliver real user value. I have made this mistake myself. The excitement about what the technology can do blinds you to whether anyone actually needs it to do that thing.
A = Assets (The Data Strategy)
A world-class model fed garbage data produces garbage results. A simple model fed rich, clean, representative data can be remarkably powerful. Your data strategy is your product strategy.
Ask yourself: Do we have the data? If not, how do we get it? Is it the right data? Is it representative? Is it labeled? What biases might be hiding in it?
Too many PMs treat data as an IT problem. They hand it off to backend teams and focus on the “product” parts. This is a mistake. When you hand off the data strategy, you hand off your product’s future. The PM who does not obsess over data has forfeited control over model quality, user experience, and long-term differentiation.
L = Logic (The Model Strategy)
The model is the engine that makes predictions. As a PM, you define what kind of engine you need and which tradeoffs matter most. Every model choice encodes priorities across cost, speed, accuracy, and explainability.
Several of the key tradeoffs to navigate:
Cost vs. Speed. Can a smaller model meet the user need? If so, start there. How much latency can the user tolerate? That tolerance gives you room to manage costs.
Accuracy vs. Explainability. If your product requires transparency into why a decision was made, complex ML may not be the right choice. Sometimes a simpler rules-based system serves users better.
Build vs. Buy. The cost to develop and maintain a custom frontier model is extraordinary. Before committing to build, evaluate whether existing foundation models can be adapted to your needs. And remember that “build” is never a one-time decision. It commits you to ongoing retraining, infrastructure, and maintenance.
Perfect Now vs. Improving. AI evolves fast. Designing around today’s limitations means building for obsolescence. Your goal is not perfection today. It is alignment with where the technology and user expectations will be when you launch and scale.
The common mistake here is overinvesting in model sophistication before validating user value. Spending months optimizing a model in a lab environment yields nothing if the underlying product concept does not matter to users. Start with the simplest model that works. Test it with real users. Prove the concept has value before you optimize.
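To make the Accuracy vs. Explainability tradeoff concrete, here is a minimal, hypothetical sketch of what "a simpler rules-based system" buys you. The scenario (a credit-increase decision) and the function name are my invention, not anything from a real product: the point is that each rule that fires doubles as a human-readable reason, which a black-box model score does not give you for free.

```python
# Hypothetical rules-based decision. Every triggered rule is both logic
# and explanation, so the UI can show the user *why*, not just *what*.

def decide_credit_increase(income: float, missed_payments: int, utilization: float):
    """Return (approved, reasons). An empty reasons list means approval."""
    reasons = []
    if missed_payments > 2:
        reasons.append(f"{missed_payments} missed payments in the last year")
    if utilization > 0.9:
        reasons.append(f"credit utilization at {utilization:.0%}")
    if income < 30_000:
        reasons.append("reported income below threshold")
    approved = len(reasons) == 0
    return approved, reasons

approved, reasons = decide_credit_increase(income=55_000, missed_payments=3, utilization=0.4)
print(approved, reasons)  # False ['3 missed payments in the last year']
```

A gradient-boosted model might beat these thresholds on accuracy, but it could not hand the support team that reasons list. Which side of that tradeoff serves users better is exactly the product call the PM owns.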
U = User Trust (The User Experience)
You are not designing a static button. You are designing a relationship with a probabilistic system. The entire UX must manage uncertainty.
Ask yourself: How do we set expectations? How do we communicate confidence? How do we handle being wrong? How do we explain the “why” behind decisions?
The trap is building a “black box” interface. If users do not understand why the AI did something, they feel a loss of control. Loss of control breeds distrust. Users who do not trust your product will churn. Effective AI product design communicates reasoning, conveys confidence levels, and acknowledges uncertainty. That transparency builds the trust that keeps users coming back.
E = Evolution (The Feedback Loop)
Your product should learn from every single user interaction. This is what closes the loop and creates the data flywheel that compounds your advantage over time.
Ask yourself: How do we capture user feedback? How does that feedback get back to the model? How do we monitor for failure? What is our retraining cadence?
The mistake is forgetting to build the return path for data. Imagine you ship a v1 product. Users hate the recommendations. But you built no mechanism to learn why. Without that feedback loop, your product stays static. Errors persist. Performance degrades. The products that win are the ones that treat every user correction as training data for the next version.
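The "return path for data" can be sketched in a few lines. This is an illustrative toy, not a real pipeline: the event fields and class names are assumptions I chose for the example. The idea it shows is that every user action on a model output gets logged with enough context to become a labeled training row, and that an explicit correction is the most valuable label of all.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FeedbackEvent:
    """One user reaction to a model output, captured as a future training example."""
    item_id: str
    model_version: str
    prediction: str
    user_action: str                   # e.g. "accepted", "edited", "rejected"
    correction: Optional[str] = None   # what the user chose instead, if anything
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class FeedbackLog:
    """In-memory stand-in for the return path from users back to the model."""
    def __init__(self) -> None:
        self.events: list[FeedbackEvent] = []

    def record(self, event: FeedbackEvent) -> None:
        self.events.append(event)

    def training_rows(self) -> list[tuple[str, str]]:
        # If the user corrected us, their answer is the label; otherwise
        # the accepted prediction is treated as implicitly confirmed.
        return [
            (e.item_id, e.correction if e.correction is not None else e.prediction)
            for e in self.events
        ]

log = FeedbackLog()
log.record(FeedbackEvent("rec-42", "v1", "jazz playlist", "rejected", correction="lo-fi playlist"))
log.record(FeedbackEvent("rec-43", "v1", "rock playlist", "accepted"))
print(log.training_rows())  # [('rec-42', 'lo-fi playlist'), ('rec-43', 'rock playlist')]
```

In a real system this log would feed a labeling and retraining pipeline on whatever cadence the team sets. The PM question is not how to write this code. It is making sure someone builds it before v1 ships, so the product is never learning-blind.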
How to Sink Your Interview
I have seen brilliant candidates tank interviews by making predictable mistakes. The “AI Hammer” starts with a solution before defining the problem. The “Magic Wand” assumes AI can do things well beyond current capabilities. The “Perfect Path” designs only for when everything works and ignores failure modes.
The most common failure mode? What I call the “ML Engineer Monologue.” Candidates get so deep into implementation details that they forget to talk about users, value, and product tradeoffs. Hiring managers want to see product judgment, not technical depth for its own sake.
The PM Role Is Changing. That’s Your Opportunity.
Many of the HBS students I met were anxious about whether PM roles would even exist in five years. I understand the concern. AI is changing how products get built. But I believe the anxiety is misplaced.
Consider how the traditional PM role worked. Three circles: Business, Engineering, Design. The PM sat at the intersection, translating between domains and keeping the system coherent. AI is collapsing those circles. PMs must now be better at engineering and better at design than ever before. Not to replace engineers and designers. To collaborate with them at a higher level of abstraction.
This is not a threat. It is an opportunity. Multi-faceted people are more valuable than ever. The best PMs have always been generalists who could go deep when needed. That value is only increasing.
Here is what I tell career switchers: your previous experience is not a liability. It is a superpower. The consultant who understands how organizations actually make decisions. The engineer who can evaluate technical tradeoffs without hand-holding. The finance professional who thinks in systems and incentives. Learn the craft of PM and pair it with what you already know. That combination is rare.
What Actually Matters
I think back to my own experience at Google Labs. Early in my tenure I pitched a product called VoiceFX. It was a technically impressive voice synthesis tool that would help creators produce voiceovers. Leadership rejected it. Not because the technology was bad. Because it was a “solution in search of a problem.” A thin wrapper over a model that lacked a defensible moat.
They were right. I had fallen into the exact trap I now warn others about.
The lesson stayed with me. Good AI judgment is not about knowing what is technically possible. It is about knowing when technology serves users and when it just serves our excitement about the technology itself. That judgment is what hiring managers are evaluating. It is also what separates products that matter from products that just demo well.
The Message I Left With Those Students
Many of them worried whether PM would survive the AI transformation. I firmly believe PMs are becoming more essential, not less. But the job is evolving in critical ways.
My advice to you: Be the multi-faceted person without a sense of entitlement who knows how to get shit done. Understand technology deeply enough to make sound tradeoffs. Care about users enough to resist building things just because you can. Stay humble about what AI can and cannot do. Bring your whole self to the job. Care and stay curious.
That recipe defines the best PMs I have worked with. It is also a recipe for lasting impact that only grows as AI becomes more capable.
The students at HBS had impressive resumes and sharp questions. I suspect many of them will be exceptional PMs. The ones who succeed will be those who remember that AI is a tool, not a product. The magic is never in the model. What matters is what the model enables for real people solving real problems.
This entire thesis connects to my 2026 predictions, including the rise of the “full-stack designer” and the broader collapse of traditional role boundaries in tech.