PM's Ultimate AI Prompt Framework
The exact framework top product teams use to generate high-quality, detailed, and relevant responses
"I spent 30 minutes with ChatGPT asking for user stories. All I got was generic fluff."
That’s what a senior PM at Google told me. And it’s a familiar story across product teams everywhere.
The truth? Most PMs don’t get bad results from AI because the models are weak.
They get bad results because their prompts are.
In this article, I’ll walk you through the exact 5 Keys Framework top PMs use to turn AI into a high-performing thinking partner. One that delivers relevant, specific, and immediately useful output.
No hype. No fluff. Just a simple, repeatable way to get the most out of every prompt.
The 5 Keys Framework for Better AI Prompts
High-quality results start with high-quality questions.
Here are the 5 Keys top PMs use to get consistently better responses from AI:
1. Assign AI a Role
AI performs better when it knows who it's supposed to be.
Instead of treating the model like a blank slate, assign it a specific role with the experience and expertise relevant to the task.
For example:
“You are the most experienced person when it comes to building great products. Act like an expert in ideating product features.”
“You are the most experienced person when it comes to designing great user experiences. Act like an expert in creating user research strategies.”
“You are the most experienced person when it comes to marketing B2B SaaS products. Act like an expert in generating a GTM strategy.”
Why this works:
Assigning a role anchors the model’s responses. It gives it a point of view, so it responds like a thought partner, not just a tool that predicts the next word.
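If you work with a model through an API instead of a chat window, the role usually maps to the system message. Here's a minimal sketch, assuming the OpenAI Python SDK; the model name and the user request are placeholders, not a prescription:

```python
# Minimal sketch: assigning a role via the system message (OpenAI Python SDK assumed).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are the most experienced person when it comes to building "
                "great products. Act like an expert in ideating product features."
            ),
        },
        # Placeholder user request; swap in your own task.
        {"role": "user", "content": "Suggest features for a B2B expense-tracking app."},
    ],
)

print(response.choices[0].message.content)
```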
2. Provide the Right Input
AI can only work with the information you give it.
When you provide the bare minimum, the model is forced to guess. Instead, give it just enough context to generate something useful and relevant.
For example:
If you want to brainstorm features, include a product description, target user, and desired outcome.
If you want to write OKRs, share your organization’s goals, focus areas, and time frame.
If you want to create survey questions, provide a target user, the core problem, and your working hypothesis.
Why this works:
Without context, the model guesses. And it usually guesses wrong.
3. Give Specific Instructions
Next, you need to tell AI exactly what to do with that context.
This is where most prompts fall short. Simple requests lead to generic results. The more specific your instructions, the less cleanup you’ll need.
For example:
Don’t just tell the LLM to “write 10 user stories.” That’s too vague to be useful.
Instead, direct it with something like:
Generate a list of 10 unique user stories.
Make sure each story is independent and can be developed and tested separately.
The user stories should be detailed and prioritize user needs and benefits.
Maintain a user-centric perspective at all times.
Avoid technical jargon and implementation details.
Why this works:
Specific instructions set clear expectations, so the model focuses instead of guessing.
4. Specify the Format
Formatting helps you control the shape and structure of the output.
If you want a user story, feature, or sentiment analysis to come back in a specific format—this is where you define it.
For example:
User stories:
Format as: “As a [user], I want [goal], so that [benefit].”
Feature list:
Format as:
Feature [#]: [Feature Name]
Benefit: [Benefit]
Sentiment analysis:
Format as:
Positive Feedback
[Summary #1, 60 characters] – [Mention Count]
[Summary #2, 60 characters] – [Mention Count]
...
Negative Feedback
[Summary #1, 60 characters] – [Mention Count]
[Summary #2, 60 characters] – [Mention Count]
Why this works:
A defined format forces the model to respond the way you want. It also makes the output easier to scan, compare, and act on.
5. Provide Examples
Finally, use an example to lock everything together.
Think of it as training data. If you’ve got a strong idea of what “great” looks like, share it with the LLM.
Examples could include:
A few sample user stories
A great summary from a previous meeting
A tone/style snippet from your last launch note
Why this works:
Examples clarify expectations and reduce randomness. They help the model hit closer to the mark on the first shot.
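If you want to reuse the framework across prompts, you can treat the five keys as building blocks and assemble them programmatically. Here's a small sketch in plain Python; the function name and sample values are illustrative, not part of the framework:

```python
# Sketch: assembling the 5 Keys (role, input, instructions, format, example) into one prompt.
def build_prompt(role: str, context: str, instructions: list[str],
                 output_format: str, example: str) -> str:
    instruction_lines = "\n".join(f"- {item}" for item in instructions)
    return (
        f"Role:\n{role}\n\n"
        f"Input:\n{context}\n\n"
        f"Instructions:\n{instruction_lines}\n\n"
        f"Format:\n{output_format}\n\n"
        f"Good Example:\n{example}"
    )

# Illustrative values only; replace with your own product context.
prompt = build_prompt(
    role="You are the most experienced person when it comes to building great products. "
         "Act like an expert in generating user stories.",
    context="Product: RFP response tool for B2B sales teams. Primary user: salesperson.",
    instructions=[
        "Generate a list of 10 unique user stories.",
        "Make sure each story is independent and can be developed and tested separately.",
        "Avoid technical jargon and implementation details.",
    ],
    output_format="As a [user], I want [goal] so that [benefit].",
    example="As a salesperson, I want to use built-in RFP templates so that I can "
            "quickly create responses and save time.",
)
print(prompt)
```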
Not Sure If This Will Deliver High-Quality Responses? Try It Yourself.
Let’s run a quick experiment.
Goal: Write detailed, relevant, and specific user stories.
Approach: Compare a basic prompt to a well-structured one using the 5 Keys framework.
Try generating user stories using each version below. The difference will speak for itself.
Here’s a simple prompt:
Please generate 10 user stories for a <product / feature>
This is how most people prompt. It’s quick—but it’s also vague, and leads to generic, low-context results.
Now try the well-structured version:
Role:
You are the most experienced person when it comes to building great products. Act like an expert in generating user stories. When you respond, your tone is warm, friendly, smart, and direct to the point.
Input:
Please ask these questions before you proceed.
Product Description: What is the main purpose of the product? How does it improve the user experience or solve a key problem? Desired outcome?
Feature Description: High level description of the feature and user benefit
User: Who is the primary user of this feature? 1 user at a time.
User Pain Points / Emotions: What frustrations, concerns, or emotions does the user experience related to this process?
Instructions:
Generate a list of 10 unique user stories.
Make sure each user story is independent and can be developed and tested separately.
The user stories must be detailed and prioritize user needs and benefits.
Maintain a user-centric perspective at all times. Avoid technical jargon and implementation details.
Provide only the list of user stories.
Format:
For your response, please provide the user story using the following format:
As a [user], I want [goal] so that [benefit].
Good Example:
As a salesperson, I want to use built-in RFP templates so that I can quickly create responses and save time.
Run both prompts. Compare the results.
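If you'd rather run the comparison in code, here's a rough sketch, again assuming the OpenAI Python SDK. The model name is a placeholder, and you'd paste the full structured prompt from above into the second entry (the structured version asks clarifying questions first, so a single one-shot call won't capture that back-and-forth):

```python
# Sketch: run the simple and the structured prompt on the same task and compare outputs.
from openai import OpenAI

client = OpenAI()

prompts = {
    "simple": "Please generate 10 user stories for a meeting-notes app.",  # placeholder product
    # Paste the full structured prompt (Role, Input, Instructions, Format, Good Example) here.
    "structured": "Role: ...\n\nInput: ...\n\nInstructions: ...\n\nFormat: ...\n\nGood Example: ...",
}

for name, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {name} prompt ---")
    print(response.choices[0].message.content)
```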
If you want something fast and surface-level, the simple prompt will do.
But if you want responses that are useful, specific, and ready to build on—the well-structured prompt wins every time.
Here’s a detailed side-by-side comparison of the two approaches, captured by one of my PMs after running both prompts on the same task.
Bottom Line
Most PMs settle for vague, generic responses. And assume the model is the problem.
It’s not. It’s the question.
How you ask is everything.
The 5 Keys Framework gives you a simple, repeatable way to get results that are clear, relevant, and immediately useful—every time.
And when you do?
AI stops being just a tool.
It becomes a partner that boosts your productivity, levels up your effectiveness, and makes you a better PM.
AI is Reshaping Product Management—Are You Ready?
CPOs who lean in are already seeing:
20%+ productivity gains
Improved time to market
Bigger impact
🚀 Curious what this could look like in your org? Let’s chat.