Most B2B SaaS teams already use AI. The problem is that nobody can explain what “good use” looks like, what is banned, and how output is checked. That turns AI into a shadow process.
This is not a minor ops gap. Research suggests AI can raise productivity on some tasks, but results vary by task, worker, and context. Some studies show large gains; others show slowdowns when the work is complex or the developer is already expert. You need a clear approach, not vibes (Brynjolfsson et al., 2025; Dell’Acqua et al., 2023; Peng et al., 2023; Becker et al., 2025).
The fix is a simple system: classify work as Protect, Augment, or Delegate, then wrap it in lightweight review.
If you want help installing this, book a working session. If you want the templates and the vendor pack, grab the toolkit on Substack.
What “no clear AI approach” looks like (symptoms)
Why this goes wrong
Teams treat AI like a personal productivity hack. That fails at scale because AI creates shared risk: one person’s pasted customer data or invented citation becomes the whole team’s exposure.
A research-informed guideline
Start with a task inventory, not a tool decision.
AI impact is uneven. Some work gets faster; some work gets worse. The “jagged frontier” idea is a useful mental model: certain subtasks sit inside AI’s capability, while others sit outside it and need human judgment. (Dell’Acqua et al., 2023)
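To make the inventory step concrete, here is a minimal sketch of what a task-inventory record could look like. The schema, field names, and example rows are my own assumptions for illustration, not a prescribed format:

```python
from dataclasses import dataclass

# Hypothetical schema for one row of an AI task inventory.
# All fields and categories below are illustrative assumptions.
@dataclass
class TaskRecord:
    task: str          # what the work is, e.g. "PRD drafting"
    tool: str          # which AI tool is used today ("unknown" = shadow AI)
    data_touched: str  # "public", "internal", or "customer"
    reviewer: str      # who validates the output ("nobody" = unreviewed)

inventory = [
    TaskRecord("research synthesis", "ChatGPT", "internal", "PM lead"),
    TaskRecord("PRD drafting", "Claude", "internal", "PM lead"),
    TaskRecord("customer email triage", "unknown", "customer", "nobody"),
]

# Flag shadow AI: sensitive data touched with no named reviewer.
shadow = [r.task for r in inventory
          if r.data_touched == "customer" and r.reviewer == "nobody"]
print(shadow)  # ['customer email triage']
```

Even a spreadsheet works; the point is that every task gets a named data-sensitivity level and a named reviewer before any tool decision is made.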
The Protect, Augment, Delegate model
Protect
Hard rules. No exceptions without approval.
This aligns with common data protection expectations. (ICO, 2023)
Augment
Allowed uses that make PMs better, faster, and more consistent.
This is where many teams see real gains, especially for less experienced workers or routine writing. (Brynjolfsson et al., 2025)
Delegate
Narrow uses where AI can do the first pass with required human validation.
Delegate is safest when inputs are controlled and outputs are constrained. Security frameworks like SAIF emphasize building secure-by-default systems and controls across the lifecycle. (Google, n.d.)
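One way to make the three-bucket model operational is a rules-first router that every new task passes through. The specific rules and flag names below are assumptions for demonstration, not policy:

```python
# Illustrative sketch of Protect / Augment / Delegate routing.
# The routing rules are assumptions, not a prescribed policy.
def classify(task: dict) -> str:
    # Protect: hard rules fire first, no exceptions without approval.
    if task["touches_customer_data"] or task["legal_exposure"]:
        return "Protect"
    # Delegate: AI does the first pass, human validation is required,
    # and only when inputs are controlled and outputs are constrained.
    if task["inputs_controlled"] and task["output_constrained"]:
        return "Delegate"
    # Augment: AI assists, the human owns the output.
    return "Augment"

release_notes = {"touches_customer_data": False, "legal_exposure": False,
                 "inputs_controlled": True, "output_constrained": True}
print(classify(release_notes))  # Delegate
```

The ordering matters: Protect rules must short-circuit everything else, so a task that touches customer data can never fall through to an automated first pass.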
The minimum operating system you need
A practical review checklist for PM outputs
30-day rollout plan
Days 1 to 5
Inventory current AI use. Capture tools, workflows, and data touched. Identify shadow AI.
Days 6 to 10
Define Protect rules and the approved tool path. Publish a one page policy.
Days 11 to 20
Pick 3 Augment workflows to standardize, for example research synthesis, PRD drafting, and launch comms drafting. Build templates and review checklists.
Days 21 to 30
Pilot with one team. Track speed, quality, and rework. Expand to the next team.
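Tracking speed, quality, and rework during the pilot can be as simple as a per-task log. The metric names and sample numbers below are hypothetical, for illustration only:

```python
# Hypothetical pilot log: cycle time (hours), whether the output
# passed first review, and rework rounds. Values are invented.
pilot = [
    {"task": "PRD draft",    "hours": 3.0, "passed_review": True,  "rework_rounds": 1},
    {"task": "PRD draft",    "hours": 2.5, "passed_review": True,  "rework_rounds": 0},
    {"task": "launch comms", "hours": 1.5, "passed_review": False, "rework_rounds": 2},
]

avg_hours = sum(r["hours"] for r in pilot) / len(pilot)
pass_rate = sum(r["passed_review"] for r in pilot) / len(pilot)
rework = sum(r["rework_rounds"] for r in pilot)
print(f"{avg_hours:.2f}h avg, {pass_rate:.0%} pass, {rework} rework rounds")
# 2.33h avg, 67% pass, 3 rework rounds
```

Compare these numbers against a pre-pilot baseline for the same workflows; a drop in cycle time means little if the first-review pass rate falls with it.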
What “good” looks like in 90 days
Call to action
Book a working session to install the workflow system and governance light enough to stick.
Get the Substack toolkit including templates, a starter prompt library, the RFI pack, and the vendor scorecard.
References
Becker, J., et al. (2025). Measuring the impact of early 2025 AI on experienced open source developer productivity. arXiv.
https://arxiv.org/abs/2507.09089
Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at work. The Quarterly Journal of Economics, 140(2), 889–942.
https://doi.org/10.1093/qje/qjae044
Dell’Acqua, F., McFowland III, E., Mollick, E. R., Lifshitz Assaf, H., Kellogg, K. C., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier. Harvard Business School Working Paper 24-013.
https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf
European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence. EUR-Lex.
https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
Google. (n.d.). Secure AI Framework (SAIF).
https://safety.google/intl/en_in/safety/saif/
ICO. (2023). Guidance on AI and data protection. Information Commissioner’s Office.
https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
ISO. (2023a). ISO/IEC 42001: AI management systems.
https://www.iso.org/standard/42001
ISO. (2023b). ISO/IEC 23894: AI guidance on risk management.
https://www.iso.org/standard/77304.html
Lifshitz, L. R., & Hung, R. (2024). BC tribunal confirms companies remain liable for information provided by AI chatbot. American Bar Association.
https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-february/bc-tribunal-confirms-companies-remain-liable-information-provided-ai-chatbot/
Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).
https://www.law.berkeley.edu/wp-content/uploads/2025/12/Mata-v-Avianca-Inc.pdf
Microsoft. (2022). Microsoft Responsible AI Standard v2 general requirements.
https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Microsoft-Responsible-AI-Standard-General-Requirements.pdf?country=us&culture=en-us
NCSC. (2024). AI and cyber security: What you need to know. National Cyber Security Centre.
https://www.ncsc.gov.uk/guidance/ai-and-cyber-security-what-you-need-to-know
NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0).
https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
NIST. (2024). Artificial Intelligence Risk Management Framework: Generative AI Profile (NIST AI 600-1).
https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
OpenAI. (2023). March 20 ChatGPT outage.
https://openai.com/index/march-20-chatgpt-outage/
OWASP. (2025). Top 10 for large language model applications.
https://owasp.org/www-project-top-10-for-large-language-model-applications/
Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The impact of AI on developer productivity. arXiv.
https://arxiv.org/abs/2302.06590