Product Research Before Product Management Interviews
- Harshal
- 6 days ago
- 7 min read
Deep-Dive Using My n8n Example in 2025
In earlier posts, I shared a framework for writing a case study or product memo when you interview for Product Management roles.
Here, I show an example I did for n8n a few months ago, when I interviewed with them. All the information comes from my own analysis, without input from n8n employees, so it may not be accurate, but I hope it gives you ideas on the approach. Fast-forward: I got a job offer and will join them in Q4 2025.
I spent 7 hours 44 minutes researching and 1 hour 56 minutes writing this. You need about 6 minutes to read this summary blog, excluding the linked blogs.

Go Deep For Interviews
Going deep helps you differentiate yourself from other candidates, showcase your product management skills, and convert theoretical knowledge into practical demonstration.
You are more likely to get hired, and you build a portfolio of your thinking that you can publish on your blog for the future.
Read more in Cracking Startup PM Interviews By Doing A Case Study.
My Meta Insights From My Process
I learned that interview prep is not just about answering questions. When I treated n8n as a project, I practiced the same steps I would use on the job - customer research, competitor testing, and building frameworks.
Going deeper forced me to structure findings, test assumptions, and explain insights clearly. I was confident approaching the interviews. This extra work made my interviews stronger.
After this research, I also felt I knew the downsides of the company and the role, so I felt more comfortable accepting this role (as with previous ones).
Read more in Cracking Startup PM Interviews By Doing A Case Study.
Tips For Your Product Memos
Create your own artifacts (scrape forums, analyze web traffic, etc.) instead of mentioning TechCrunch’s opinion on the product.
Talk to product users, or people with a similar persona, if possible.
Test the product hands-on if you can do so for under $50. If you cannot test the product, test competitors’ products.
Use simple frameworks (SWOT, Business Model Canvas) to structure thinking.
Read more in Cracking Startup PM Interviews By Doing A Case Study.
1 - Context, Goal
Goal:
Drive AI product growth within a fast-scaling, developer-focused automation company.
Desired Outcome:
Understand how n8n can enhance workflows that use AI tools, growing usage, trust, and production adoption, which in turn drives product quality and revenue.
TL;DR - the opportunities center on:
Integrating AI components into users’ workflows
RAG, prompt chaining, memory, and model orchestration
Evaluation, observability, and production-readiness
Balancing abstraction (e.g., model routers) with control (prompt editing, fallback logic)
2 - Identify The Users
Target Users of n8n AI Platform:
Technical builders who want visual, low-code workflow automation
AI developers exploring RAG, prompt chaining, LLM orchestration, or agentic workflows
Internal enterprise teams experimenting with AI for productivity and customer experience
Adjacent User Segments:
Non-technical colleagues
AI developers who need more control, for example, developers who use MLOps platforms for fine-tuning or training.
3 - Identify User Needs
See the complete notes on user needs here: n8n Feature Improvement Ideas Using External Research
I share my approach to user-needs research in this section - I use a 3-step framework and these sources of data for n8n (a scraping sketch follows the list):
Empathy
User interviews (I could do before joining)
Sales conversations
Product immersion (I could do before joining)
Qualitative
Support tickets
Discord chatter (I could do before joining)
Reddit chatter (I could do before joining)
GitHub issues (I could do before joining)
Surveys
How-to tutorials by users (I could do before joining)
Sales “solution gaps” or escalations
Quantitative
User analytics
Template usage statistics (I could do before joining)
Error rates per node
Error rates per template
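
For several of these community sources, I pulled the data myself rather than relying on second-hand summaries. As an illustration, here is a minimal TypeScript sketch (Node 18+, my own code, not an n8n tool) that ranks n8n’s open GitHub issues by comment count as a rough proxy for user pain; the ranking heuristic is my own assumption:

```typescript
const REPO = "n8n-io/n8n";

interface Issue {
  title: string;
  comments: number;
  html_url: string;
  pull_request?: unknown; // present when the "issue" is actually a PR
}

async function topIssuesByComments(limit = 10): Promise<void> {
  // GitHub's public REST API serves issues for public repos without
  // authentication at low request volumes.
  const res = await fetch(
    `https://api.github.com/repos/${REPO}/issues?state=open&per_page=100`,
    { headers: { Accept: "application/vnd.github+json" } }
  );
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const items: Issue[] = await res.json();
  items
    .filter((i) => !i.pull_request) // the endpoint mixes pull requests in
    .sort((a, b) => b.comments - a.comments)
    .slice(0, limit)
    .forEach((i) => console.log(`${i.comments}\t${i.title}\t${i.html_url}`));
}

topIssuesByComments().catch(console.error);
```

The same pattern works for other public sources (e.g., Reddit’s public JSON endpoints); the point is to generate your own artifact instead of quoting someone else’s.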

To understand user needs, I also created a customer journey map.

See the complete notes on user needs here: n8n Feature Improvement Ideas Using External Research.
See hands-on evaluation of the product and its peers here: Hands-On Evaluation of n8n and Peer Products for AI Automations.
4a - Competitive Analysis Summary
To identify gaps, I reviewed competitors and mapped where they serve user needs better. The differences fall along several axes:
Enterprise-friendly: Tines and Shuffle. Shuffle specializes in cybersecurity workflows.
Cheaper or self-hosted: Activepieces, which appeals to developers with its MIT license (n8n can also be self-hosted).
Robotic Process Automation (RPA) for desktop: UiPath, Blue Prism, and Automation Anywhere. Both the user experience and the use cases of these tools differ from those of an agentic workflow tool.
No-code or non-technical user-friendly: Zapier and Make. Zapier is strong in templates, helping users quickly discover use cases.
Smart home automation: Node-RED, a node-based tool similar to n8n where all workflows sit on a single canvas. It is popular for smart home use cases and simplifies reusing workflow components.
Developer-first API workflows: Pipedream, which some competitors rely on for handling credentials.
Built using AI: LindyAI, Google Opal, and Gumloop emphasize AI-assisted building, where an AI takes a central role in speeding up workflow creation.
AI agents: AgentAI, CrewAI, Google Agentspace, Comet by Perplexity, and OpenAI’s ChatGPT Agent lean toward fully agentic workflows rather than agent-assisted ones.
4b - Identify Gaps Vs. Existing Products
I looked at where current products fall short and where friction remains. These give n8n opportunities to meet user needs and grow.
Model selection: Today, users rely on trial-and-error across model providers. What’s missing is context-aware model recommendations surfaced directly in the UI (see the router sketch after this list).
Prompt iteration: Builders make manual changes inside visual editors, but there is no support for prompt versioning or rollback.
Productionization: Teams stitch together glue code with observability frameworks like OpenTelemetry. The cost of setting up observability for AI workflows remains high.
AI-specific visual builder: Most AI developer tools (LangChain, Haystack) are CLI- or SDK-first. Some newer competitors were built with AI-specific UX that matches how practitioners actually build and test, giving them an edge over products that predate the genAI boom.
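
To make the model-selection gap concrete, here is a minimal sketch of the fallback logic a model-router node could implement. This is my own illustration: the model names are placeholders, and `callModel` stands in for whatever provider SDK the workflow uses.

```typescript
// My own illustration of router fallback logic; not n8n code.
type ModelCall = (model: string, prompt: string) => Promise<string>;

async function routeWithFallback(
  callModel: ModelCall,
  prompt: string,
  candidates: string[] = ["model-a", "model-b", "model-c"],
  timeoutMs = 10_000
): Promise<{ model: string; output: string }> {
  for (const model of candidates) {
    try {
      // Race the provider call against a timeout so one slow model
      // cannot stall the whole workflow run.
      const output = await Promise.race([
        callModel(model, prompt),
        new Promise<string>((_, reject) =>
          setTimeout(() => reject(new Error("timeout")), timeoutMs)
        ),
      ]);
      return { model, output };
    } catch {
      continue; // on error or timeout, try the next candidate
    }
  }
  throw new Error("All candidate models failed");
}
```

The design choice worth debating is how much of this routing to expose: hiding it keeps the canvas simple, while surfacing it preserves the control power users expect.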
5 - Identify Feature Improvements
Based on the above, I identified some feature improvements. I detail them here: n8n Feature Improvement Ideas Using External Research.
After this ideation, I wanted to prioritize only a few to think through their execution.
6 - Execution Plan
I struggled to detail the steps from here onwards, because on the job I would rely on colleagues’ feedback at this point.
Here’s my idea for MVP, MLP, and some execution pointers.
My goal with the Minimum Viable Product (MVP) is to put in the minimum possible engineering effort for maximum hypothesis validation.
MVP =
Add internal evaluation APIs (simple pass/fail, token usage, latency) per workflow node; a sketch of the record shape follows this list
Ship beta of versioned prompts and model router node
Enable a manual test-suite interface in the UI (step-by-step mode)
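
As a sketch of what the per-node evaluation record could capture (my own assumption about the shape, not n8n’s actual API):

```typescript
// Hypothetical record shape for the MVP's per-node evaluation API.
interface NodeEvalResult {
  workflowId: string;
  nodeId: string;
  passed: boolean;    // pass/fail against a user-defined check
  tokensUsed: number; // prompt + completion tokens for this node run
  latencyMs: number;  // wall-clock time of the model call
  timestamp: string;  // ISO 8601, so trends can be charted later
}

// Example consumer: flag nodes whose average latency exceeds a threshold.
function slowNodes(results: NodeEvalResult[], thresholdMs: number): string[] {
  const sums = new Map<string, { total: number; count: number }>();
  for (const r of results) {
    const s = sums.get(r.nodeId) ?? { total: 0, count: 0 };
    s.total += r.latencyMs;
    s.count += 1;
    sums.set(r.nodeId, s);
  }
  return [...sums.entries()]
    .filter(([, s]) => s.total / s.count > thresholdMs)
    .map(([nodeId]) => nodeId);
}
```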
My goal for the Minimum Lovable Product (MLP) is to create high retention among the feature’s users.
MLP =
Visual diff for prompt versions (a versioning sketch follows this list)
Smart defaults for model selection
Auto-eval report at the end of each build run
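
And here is a hedged sketch of how versioned prompts with rollback might be stored. `PromptStore` is a hypothetical name; a real implementation would persist to the workflow’s database rather than memory.

```typescript
// Hypothetical append-only store for versioned prompts with rollback.
interface PromptVersion {
  version: number;
  text: string;
  createdAt: Date;
}

class PromptStore {
  private versions: PromptVersion[] = [];

  save(text: string): PromptVersion {
    const v: PromptVersion = {
      version: this.versions.length + 1,
      text,
      createdAt: new Date(),
    };
    this.versions.push(v);
    return v;
  }

  current(): PromptVersion | undefined {
    return this.versions[this.versions.length - 1];
  }

  // Rollback re-saves an older version as the newest entry, keeping
  // history immutable for auditing and visual diffs.
  rollbackTo(version: number): PromptVersion {
    const old = this.versions.find((v) => v.version === version);
    if (!old) throw new Error(`No prompt version ${version}`);
    return this.save(old.text);
  }
}
```

Keeping history append-only makes the MLP’s visual diff straightforward: any two versions can be compared without reconstructing state.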
I suggest these execution tactics:
Dogfood within n8n’s own AI Assistant
Collaborate with top users from the community for feedback
Launch closed beta behind a feature flag
Internally publish eval dashboards for prompts used in live automations
7 - Go-To-Market Plan
Helping your customers discover a product feature is as important as building it. The go-to-market plan needs to get the right information to the right users at the right time in their journey, via the right channel.
So, we start by thinking about where customers will search for solutions to their problem:
Dev platforms like GitHub, Hugging Face, and the LangChain Discord
Prompt engineering communities
Reddit/Hacker News/YouTube for AI tool tutorials
Our GTM strategy for these features then builds on those channels:
Launch “AI Reliability Suite” blog post and demo workflows
Partner with leading AI open-source projects (e.g., LangChain, Ollama)
Create tutorials and templates to showcase eval + router capabilities
Collect user testimonials from community beta testers
Feature it on n8n.io/ai as a new release, highlighting production-grade AI.
8 - Metrics And Counter-Metrics
To measure success, I outlined a few metrics along with counter-metrics to keep the product team honest:
Workflows using AI nodes: Track how many users adopt AI features. The counter-metric is the percentage of those workflows abandoned after one week, which signals whether adoption is sticky (a computation sketch follows this list).
Average latency per AI node: Monitor performance speed. At the same time, watch for a drop in model quality due to overly aggressive optimization.
Prompt changes per workflow: Count how often builders refine prompts. Balance this against the rate of evaluation failures after changes, which can reveal instability.
Model router usage: Measure how often users rely on automatic routing. The counter is the rate of incorrect model usage or manual overrides, showing whether users trust the system.
Community downloads of AI nodes: Track community engagement and interest. Pair this with the percentage of users needing support or reporting issues, which highlights ease of use.
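
To show how a metric/counter-metric pair stays honest in practice, here is a small sketch computing the first pair above. The event schema is an assumption for illustration, not n8n’s actual analytics:

```typescript
// Assumed schema: one row per AI-node workflow, with creation time
// and the time of its most recent execution.
interface AiWorkflow {
  id: string;
  createdAt: Date;
  lastRunAt: Date;
}

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// Metric: count of workflows using AI nodes.
// Counter-metric: share of them abandoned within a week of creation.
function adoptionWithCounter(workflows: AiWorkflow[]) {
  const adopted = workflows.length;
  const abandoned = workflows.filter(
    (w) => w.lastRunAt.getTime() - w.createdAt.getTime() < WEEK_MS
  ).length;
  return {
    adopted,
    abandonedAfterOneWeekPct: adopted ? (100 * abandoned) / adopted : 0,
  };
}
```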
9 - Evaluate Risks And Trade-Offs
Risks:
AI models are fast-changing; building deeply for one provider (e.g., OpenAI) could reduce flexibility in a fragmented market (OpenAI vs. Claude vs. Gemini).
Evaluation and observability add complexity - there is a risk of losing n8n’s visual low-code simplicity.
Cost visibility may overwhelm early users instead of helping.
Trade-offs:
Prioritizing enterprise readiness (eval, metrics) may delay low-code usability features and slow no-code onboarding.
Dogfooding and community co-creation reduce speed but increase trust.
Building a “router” abstraction makes user workflows easier but hides model details that power users care about.
Reflection
This research helped me approach the interviews with far more confidence that I was evaluating the company correctly, putting my best foot forward, and demonstrating my on-the-job skills.