Product Managers doing their own thing with AI

Most PM Teams Still Don't Have Clear AI Product Workflows

Ken Pulverman

Most B2B SaaS teams already use AI. The problem is that nobody can explain what “good use” looks like, what is banned, and how output is checked. That turns AI into a shadow process.

This is not a minor ops gap. Research suggests AI can raise productivity on some tasks, but results vary by task, worker, and context. Some studies show large gains, and others show slowdowns when the work is complex or the developer is already an expert. You need a clear approach, not vibes. (Brynjolfsson et al., 2025; Dell’Acqua et al., 2023; Peng et al., 2023; Becker et al., 2025)

The fix is a simple system

  • Protect Workflows - Define what AI must never see and where AI must never decide.

  • Augment Workflows - Define where AI speeds up work while the PM stays accountable.

  • Delegate Workflows - Define the narrow cases where AI can draft, triage, or summarize with a required human check.

If you want help installing this, book a working session. If you want the templates and the vendor pack, get the toolkit on Substack.
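One lightweight way to make the three tiers concrete is a shared, version-controlled policy map that PMs and reviewers can both read. The sketch below is illustrative only; the workflow names and tier assignments are placeholders a team would replace with its own task inventory.

```python
# workflow_policy.py - a minimal sketch of a shared Protect/Augment/Delegate map.
# Workflow names and tier assignments are illustrative placeholders.

POLICY = {
    "pricing_decisions":  {"tier": "protect",  "note": "No AI input to final pricing or legal language."},
    "prd_drafting":       {"tier": "augment",  "note": "AI may draft sections; PM reviews against checklist."},
    "research_synthesis": {"tier": "augment",  "note": "AI clusters notes; PM verifies against raw notes."},
    "release_notes":      {"tier": "delegate", "note": "AI drafts from structured change log; human sign-off required."},
    "inbound_triage":     {"tier": "delegate", "note": "AI proposes categories; PM confirms before routing."},
}

def tier_for(workflow: str) -> str:
    """Return the tier for a workflow, defaulting to 'protect' when unknown.

    Defaulting closed means an unclassified task gets the strictest handling
    until someone deliberately adds it to the policy.
    """
    return POLICY.get(workflow, {"tier": "protect"})["tier"]

if __name__ == "__main__":
    print(tier_for("release_notes"))      # delegate
    print(tier_for("new_unmapped_task"))  # protect (fail closed)
```

The detail that matters is the default: anything not yet classified is treated as Protect until a human decides otherwise.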

What “no clear AI approach” looks like (symptoms)

  • Each PM uses different tools. Nobody knows where data went.
  • Prompts live in private notes. Best practices do not spread.
  • AI output gets pasted into tickets and docs with no review trail.
  • Quality swings. Leaders stop trusting product docs.
  • Legal and security find out after something went wrong.
  • The team confuses speed with progress.

Why this goes wrong

Teams treat AI like a personal productivity hack. That fails at scale because AI creates shared risk.

  • Data risk
    Tools can expose sensitive inputs through misuse or flaws. Even reputable providers have had incidents. (OpenAI, 2023)
  • Security risk
    Prompt injection and insecure output handling are common failure modes in LLM apps. (OWASP, 2025)
  • Compliance risk
    Regulation is now explicit in some markets. You need to know which systems fall into which risk category. (European Union, 2024)
  • Business risk
    If AI drafts requirements, pricing copy, or customer answers without checks, you can create real liability. Courts and tribunals have already treated chatbot output as the company’s responsibility. (Mata v. Avianca, 2023; Lifshitz & Hung, 2024)

A research-informed guideline

Start with a task inventory, not a tool decision.

AI impact is uneven. Some work gets faster. Some work gets worse. The “jagged frontier” idea is a useful mental model. Certain subtasks are inside AI’s capability. Others are outside and need human judgment. (Dell’Acqua et al., 2023)

The Protect, Augment, Delegate model

Protect
Hard rules. No exceptions without approval.

  • Never paste secrets, customer data, or unreleased financials into public tools.
  • Ban AI from making final decisions on pricing, legal language, or commitments.
  • Require a disclosure tag in docs when AI was used for drafting.

This aligns with common data protection expectations. (ICO, 2023)
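One way to back the “never paste secrets” rule with tooling is a pre-send check that scans text before it leaves the approved path. The sketch below is a minimal, regex-based illustration; the patterns are examples, not a complete secret detector, and a real deployment would pair it with a maintained secret-scanning library and the approved tool’s own data controls.

```python
import re

# Illustrative patterns only - a real deployment would use the team's own
# data classification rules and a dedicated secret-scanning tool.
BLOCKLIST_PATTERNS = {
    "api_key":   re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "email":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def pre_send_check(prompt: str) -> list[str]:
    """Return the names of patterns found in the prompt; an empty list means OK to send."""
    return [name for name, pattern in BLOCKLIST_PATTERNS.items() if pattern.search(prompt)]

def tag_ai_drafted(doc: str) -> str:
    """Prepend the disclosure tag required when AI was used for drafting."""
    return "[AI-drafted: human review required]\n" + doc

if __name__ == "__main__":
    findings = pre_send_check("Summarize feedback from jane@example.com about pricing")
    if findings:
        print("Blocked before sending, matched:", findings)
```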

Augment
Allowed uses that make PMs better, faster, and more consistent.

  • Summarize research notes into themes, then verify against the raw notes.
  • Draft PRD sections, then run a review checklist before sharing.
  • Turn raw support tickets into problem clusters, then spot check samples.

This is where many teams see real gains, especially for less experienced workers or routine writing. (Brynjolfsson et al., 2025)
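The “then verify” step is what separates augmentation from blind delegation. For the ticket-clustering example above, a minimal sketch might look like the following. It assumes the OpenAI Python SDK and uses an illustrative model name; the important part is the spot check at the end, which pulls raw tickets per cluster for the PM to confirm against the AI’s themes.

```python
import json
import random

from openai import OpenAI  # assumes the OpenAI Python SDK; any approved provider works similarly

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def cluster_tickets(tickets: list[str]) -> dict:
    """Ask the model for problem clusters as JSON. The model name is a placeholder."""
    prompt = (
        "Group these support tickets into problem clusters. "
        'Return JSON: {"clusters": [{"theme": str, "ticket_indexes": [int]}]}.\n\n'
        + "\n".join(f"{i}: {t}" for i, t in enumerate(tickets))
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder - use the team's approved model
        messages=[{"role": "user", "content": prompt}],
    )
    # A real implementation would validate this output; model JSON can be malformed.
    return json.loads(resp.choices[0].message.content)

def spot_check(tickets: list[str], clusters: dict, sample_size: int = 3) -> None:
    """Print a random sample of raw tickets per cluster so the PM can verify each theme."""
    for cluster in clusters["clusters"]:
        indexes = random.sample(cluster["ticket_indexes"],
                                k=min(sample_size, len(cluster["ticket_indexes"])))
        print(f"\nTheme: {cluster['theme']}")
        for i in indexes:
            print(f"  raw ticket {i}: {tickets[i]}")
```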

Delegate
Narrow uses where AI can do the first pass with required human validation.

  • Draft release notes from structured change logs.
  • Create a first cut FAQ from an approved knowledge base.
  • Triage inbound requests into categories for a PM to confirm.

Delegate is safest when inputs are controlled and outputs are constrained. Security frameworks like SAIF emphasize building secure by default systems and controls across the lifecycle. (Google, n.d.)
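For the release-notes example above, the constraints are that the input is already structured and the output carries an explicit review gate. A minimal sketch, assuming a change log that is a list of typed entries; the field names and categories are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ChangeLogEntry:
    kind: str       # "breaking", "feature", or "fix" - illustrative categories
    summary: str
    ticket_id: str

HEADINGS = {"breaking": "Breaking changes", "feature": "Features", "fix": "Fixes"}

def draft_release_notes(entries: list[ChangeLogEntry]) -> str:
    """Produce a first-pass draft from structured entries only.

    The output is explicitly flagged as unreviewed; publishing should be blocked
    until a human replaces the flag with a sign-off line.
    """
    lines = ["[AI-drafted: requires PM sign-off before publishing]", ""]
    for kind, heading in HEADINGS.items():
        section = [e for e in entries if e.kind == kind]
        if section:
            lines.append(f"## {heading}")
            lines += [f"- {e.summary} ({e.ticket_id})" for e in section]
            lines.append("")
    return "\n".join(lines)
```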

The minimum operating system you need

  1. One approved path - One or two approved tools. One approved way to access them. If the default path is hard, shadow AI wins.
  2. A risk lens that matches real frameworks - Use an AI risk framework to organize controls. NIST AI RMF and the GenAI profile are practical starting points. (NIST, 2023; NIST, 2024)
  3. A lightweight management system - If you want a formal standard, ISO has an AI management system standard and risk guidance. (ISO, 2023a; ISO, 2023b)
  4. A security threat model for LLM use - Treat prompt injection, data leakage, and insecure tool use as first class threats. Use OWASP Top 10 as the common language between product and security. (OWASP, 2025)
  5. A human review gate - Make review explicit. “AI drafted” is not a quality claim. It is a risk flag.

A practical review checklist for PM outputs

  • Source check - Can I point to the inputs that justify this claim?
  • Constraint check - Did it follow the product constraints and policy boundaries?
  • Reality check - Did a domain owner sign off on anything legal, financial, or contractual?
  • Customer check - Is this grounded in real user evidence or just plausible text?
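If the checklist lives only in people’s heads, it will be skipped. One option is to encode it as a gate that an artifact must pass before it loses its AI-drafted flag. A minimal sketch; the field names below are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewRecord:
    """Checklist answers for one AI-assisted artifact; field names are illustrative."""
    sources_linked: bool = False        # Source check: inputs that justify each claim
    constraints_followed: bool = False  # Constraint check: product and policy boundaries
    domain_sign_offs: list[str] = field(default_factory=list)  # Reality check: legal/finance owners
    user_evidence_cited: bool = False   # Customer check: grounded in real user evidence

def review_gate(record: ReviewRecord, needs_domain_owner: bool) -> list[str]:
    """Return the list of unmet checks; an empty list means the artifact can ship."""
    gaps = []
    if not record.sources_linked:
        gaps.append("source check")
    if not record.constraints_followed:
        gaps.append("constraint check")
    if needs_domain_owner and not record.domain_sign_offs:
        gaps.append("reality check (domain owner sign-off)")
    if not record.user_evidence_cited:
        gaps.append("customer check")
    return gaps
```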

30-day rollout plan

Days 1 to 5
Inventory current AI use. Capture tools, workflows, and data touched. Identify shadow AI.

Days 6 to 10
Define Protect rules and the approved tool path. Publish a one page policy.

Days 11 to 20
Pick 3 Augment workflows to standardize, for example research synthesis, PRD drafting, and launch comms drafting. Build templates and review checklists.

Days 21 to 30
Pilot with one team. Track speed, quality, and rework. Expand to the next team.

What “good” looks like in 90 days

  • Every PM can explain Protect, Augment, Delegate for their work.
  • Approved tools cover 80 percent of use.
  • Product docs show higher consistency and fewer reversals.
  • Security and legal can answer where data went.
  • Leaders trust product artifacts again.

Call to action

Book a working session to install the workflow system with governance light enough to keep.
Get the Substack toolkit, including templates, a starter prompt library, the RFI pack, and the vendor scorecard.

 

References

Becker, J., et al. (2025). Measuring the impact of early 2025 AI on experienced open source developer productivity. arXiv.
https://arxiv.org/abs/2507.09089

Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at work. The Quarterly Journal of Economics, 140(2), 889–942.
https://doi.org/10.1093/qje/qjae044

Dell’Acqua, F., McFowland III, E., Mollick, E. R., Lifshitz Assaf, H., Kellogg, K. C., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier. Harvard Business School Working Paper 24-013.
https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf

European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence. EUR-Lex.
https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

Google. (n.d.). Secure AI Framework (SAIF).
https://safety.google/intl/en_in/safety/saif/

ICO. (2023). Guidance on AI and data protection. Information Commissioner’s Office.
https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/

ISO. (2023a). ISO/IEC 42001 AI management systems.
https://www.iso.org/standard/42001

ISO. (2023b). ISO/IEC 23894 AI guidance on risk management.
https://www.iso.org/standard/77304.html

Lifshitz, L. R., & Hung, R. (2024). BC tribunal confirms companies remain liable for information provided by AI chatbot. American Bar Association.
https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-february/bc-tribunal-confirms-companies-remain-liable-information-provided-ai-chatbot/

Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).
https://www.law.berkeley.edu/wp-content/uploads/2025/12/Mata-v-Avianca-Inc.pdf

Microsoft. (2022). Microsoft Responsible AI Standard v2 general requirements.
https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Microsoft-Responsible-AI-Standard-General-Requirements.pdf?country=us&culture=en-us

NCSC. (2024). AI and cyber security: What you need to know. National Cyber Security Centre.
https://www.ncsc.gov.uk/guidance/ai-and-cyber-security-what-you-need-to-know

NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0).
https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

NIST. (2024). Artificial Intelligence Risk Management Framework: Generative AI Profile (NIST AI 600-1).
https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf

OpenAI. (2023). March 20 ChatGPT outage.
https://openai.com/index/march-20-chatgpt-outage/

OWASP. (2025). Top 10 for large language model applications.
https://owasp.org/www-project-top-10-for-large-language-model-applications/

Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The impact of AI on developer productivity. arXiv.
https://arxiv.org/abs/2302.06590
