AI Product Management Is SaaS PM Plus Three Extra Jobs

  • Writer: Harshal
  • 3 hours ago
  • 3 min read

Add Observability, Evals, And Research To PM Fundamentals

I worked in AI product management at n8n and FlexAI. I learned that an AI product manager does the SaaS PM fundamental tasks, plus three extra buckets.

These are the 4 buckets of work for an AI PM:

  1. SaaS PM fundamentals

  2. Customize AI observability

  3. Define AI evals

  4. Read AI research

Buckets 2, 3, and 4 are the added responsibilities compared to a software PM.

I spent 80 minutes writing this. You need 2 minutes to read this.

AI PM has additional responsibilities over Fundamentals

1. SaaS PM fundamentals

AI PMs still own the SaaS PM fundamental tasks. Do user research. Find gaps for customers (big or small). Prioritize the right gaps. Deep dive into each gap you plan to solve. These gaps drive your roadmap. Then work with design and engineering to shape a solution (Shape Up) or write and execute a Product Requirements Document (PRD).

Jobs:

  • customer research and problem discovery

  • data analysis, user interviews, and session recordings

  • triage with customer support and sales

  • prioritization (roadmap)

  • product + design + engineering collaboration to define the problem and solution

  • product launch

2. Customize AI observability

The second bucket is customizing observability for AI.

Use traditional tools (Mixpanel, Amplitude, PostHog, Datadog, Sentry). Then add what those tools miss in AI products: the full user context.

See what your users see. For example:

  • If customers build an automation, review the automation they built with your AI copilot.

  • If customers use an AI chatbot in an email platform, review the user messages, AI responses, and the surrounding screen context.

Sometimes this means building (or specifying) a new internal system. This system can look different from existing observability platforms. It may be homegrown.
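A minimal sketch of what such a homegrown trace record might look like. The `AITrace` shape and field names here are my own illustration, not a real schema: the point is that the record pairs the standard analytics event with the full context the user saw, which off-the-shelf tools don't capture.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical trace record for the email-chatbot example above:
# it stores the user message, the AI response, AND the surrounding
# screen context, so a PM can later "see what the user saw".
@dataclass
class AITrace:
    user_id: str
    feature: str          # e.g. "email_chatbot"
    user_message: str     # what the user typed
    ai_response: str      # what the model answered
    screen_context: dict  # UI state around the interaction
    timestamp: str

def record_trace(user_id, feature, user_message, ai_response, screen_context):
    trace = AITrace(user_id, feature, user_message, ai_response,
                    screen_context, datetime.now(timezone.utc).isoformat())
    # In production this line would write to your internal store;
    # here we just serialize the record.
    return json.dumps(asdict(trace))

line = record_trace("u_42", "email_chatbot", "Summarize this thread",
                    "Here is a summary...",
                    {"open_thread_id": "t_9", "folder": "inbox"})
print(line)
```

The design choice worth copying is the `screen_context` field: standard event logging drops it, and it is exactly what you need when reviewing why an AI response felt wrong to a user.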

3. Define AI evals

The third bucket is defining AI evals. What used to be a few bullet points in your PRD (user acceptance criteria) becomes a clear rubric.

Then run tests where an AI model grades outputs against that rubric. (This is often called “model-graded evaluation.”)
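The loop can be sketched in a few lines. `stub_grader` below is a hypothetical stand-in for a real model call (which would take the rubric-plus-answer prompt and return a verdict); the rubric text is my own example, not a prescribed format.

```python
# Example rubric: what used to be PRD acceptance criteria,
# written as explicit pass/fail conditions for a grader model.
RUBRIC = """Score the answer PASS or FAIL.
PASS only if the answer addresses the question,
states no fabricated facts, and stays under 100 words."""

def grade(question, answer, call_model):
    # Build the grading prompt and ask the grader model for a verdict.
    prompt = f"{RUBRIC}\nQuestion: {question}\nAnswer: {answer}\nVerdict:"
    verdict = call_model(prompt).strip().upper()
    return verdict == "PASS"

def stub_grader(prompt):
    # Hypothetical stand-in for a real model API call:
    # this toy version just fails empty answers.
    return "FAIL" if "Answer: \n" in prompt else "PASS"

print(grade("What is 2+2?", "4", stub_grader))  # → True
```

In practice you would swap `stub_grader` for your model provider's API and run `grade` over a batch of logged outputs.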

This work takes iteration. Make the grading consistent. Otherwise the grader marks the same output as a pass in one run and a fail in another.
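One way to catch an inconsistent grader is to grade the same output several times and measure agreement. This is a sketch under my own assumptions, with a deliberately flaky stub grader standing in for a real model call:

```python
import random
from collections import Counter

def consistency(grader, prompt, runs=10):
    # Grade the identical output `runs` times and report the
    # majority verdict plus its agreement rate. A low rate means
    # the rubric is too vague for the grader to apply reliably.
    verdicts = [grader(prompt) for _ in range(runs)]
    top, count = Counter(verdicts).most_common(1)[0]
    return top, count / runs

# Deliberately flaky stub grader (hypothetical): passes ~70% of runs,
# illustrating the same output flipping between PASS and FAIL.
rng = random.Random(0)
flaky_grader = lambda prompt: "PASS" if rng.random() < 0.7 else "FAIL"

verdict, rate = consistency(flaky_grader, "grade this output", runs=20)
print(verdict, rate)
```

If the agreement rate is well below 1.0, tighten the rubric wording before trusting the eval numbers.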

4. Read AI research

The fourth bucket is reading AI research to stay current on the AI space. Most companies expect AI product managers to track new capabilities in commercial AI products and research papers from labs or universities. This habit can help the product team gain a competitive advantage.

For example, should you bet on adding skills or tools to your AI agent? Should you enhance the context by adding web browsing, documentation, or sample code from some repository? Should you try different models, a different context harness, or new benchmarks?

Reasons

This extra work exists because the field is still young. AI product management changes quickly, and the playbook shifts often. Spend time learning what is possible, what breaks, and what may change in the next 6–12 months. Then push that learning into your roadmap and risk decisions.

This extra work also exists because AI products behave differently from traditional software. AI evals and observability do not look like standard metrics or logging. Do not rely only on latency, uptime, and conversion funnels. Define what “good” means for model behavior. Build ways to see what went wrong for users inside the AI experience. Without this, you ship features that feel like a demo, not a dependable product.

How An AI PM Might Spend Their Time

You might wonder: how much time do these extra buckets take?

My observation:

  1. 50%: SaaS PM fundamentals

  2. 20%: Customize AI observability

  3. 20%: Define and iterate on AI evals

  4. 10%: Read AI research

Observability is often front-loaded. Evals may look front-loaded too, but you will need new evals each cycle as the product evolves and reaches new surfaces.

My observed time allocation for AI PMs: they add roughly one extra job on top of the existing one

What Next

This post has one main point: when AI becomes a core part of the product, an AI PM still does the SaaS PM fundamental tasks. The role also adds three buckets: customize AI observability, define AI evals, and read AI research.

In a follow-up post, I’ll share specific tips for software PMs who want to transition into AI PM roles — including skills to build, types of projects to take on, and how to tell your story when you haven’t “owned” an AI product yet.

Once I figure it out, I'll also share how an AI PM's work can be sustainable instead of feeling like 2 jobs at once.
