<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AI Product Management: A World Beyond AI]]></title><description><![CDATA[✍️ About me & this publication
Building AI inside enterprises or for enterprises? You’re in the right place. All about internal AI Product Management.  This is beyondAI by me, JaserBK. ]]></description><link>https://www.jaserbk.com</link><image><url>https://substackcdn.com/image/fetch/$s_!A2W_!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ddb7ccd-dfe2-4bc4-b814-c504e372f16f_867x867.png</url><title>AI Product Management: A World Beyond AI</title><link>https://www.jaserbk.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 05 May 2026 16:22:30 GMT</lastBuildDate><atom:link href="https://www.jaserbk.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[JaserBK]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[jaserbk@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[jaserbk@substack.com]]></itunes:email><itunes:name><![CDATA[JaserBK]]></itunes:name></itunes:owner><itunes:author><![CDATA[JaserBK]]></itunes:author><googleplay:owner><![CDATA[jaserbk@substack.com]]></googleplay:owner><googleplay:email><![CDATA[jaserbk@substack.com]]></googleplay:email><googleplay:author><![CDATA[JaserBK]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[#55 - AI Strategy Through the Eyes of an AI Product Manager]]></title><description><![CDATA[From building AI products to shaping AI strategy]]></description><link>https://www.jaserbk.com/p/55-ai-strategy-through-the-eyes-of</link><guid isPermaLink="false">https://www.jaserbk.com/p/55-ai-strategy-through-the-eyes-of</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 22 Feb 2026 15:10:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JmBy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png" length="0" 
type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JmBy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JmBy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!JmBy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!JmBy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!JmBy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JmBy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1671918,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/188798239?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JmBy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!JmBy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!JmBy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!JmBy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23cd191d-8817-47c9-b964-dd2ccb23596f_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Over the last ten years, my work has slowly evolved from pure AI Product Management into a role that increasingly lives at the intersection of AI Strategy and AI Product Strategy. The transition did not feel like a deliberate move toward strategy. It felt more like following the problems wherever they led me.</p><p>In the beginning, my world was clear. Understand users. Shape solutions. Deliver value. Improve adoption. Measure outcomes. Like many product managers, I believed the hardest challenges lived inside the product itself. 
If we understood the problem deeply enough and built well enough, success would follow.</p><p>But over time, something kept happening. Good ideas struggled. Strong teams delivered promising AI solutions that never scaled. Technically sound systems disappeared after pilots. And the reasons were rarely technical.</p><p>I slowly realized that many AI problems were not product problems at all.</p><p>They were strategic ones.</p><p>Most of my experience comes from building AI inside large enterprises. Organizations with thousands of employees, deeply interconnected processes, legacy systems, governance obligations, and competing priorities. AI behaves very differently in this environment compared to startups. In startups, the central challenge is usually product&#8211;market fit. In enterprises, the challenge is often something else entirely. The question is less whether AI works, and more whether the organization is capable of letting AI work. I increasingly began to think of this as product&#8211;organization fit.</p><p>At that time, strategy still felt abstract to me. 
I associated it with executive discussions and long planning cycles, slightly detached from the reality of building products. Product work felt tangible. Strategy felt distant.</p><p>It took years to understand that strategy is not removed from execution. Strategy is an attempt to understand reality deeply enough so execution has a chance to work.</p><p>Today, I see strategy as a set of beliefs.</p><p>Strategy is a collection of convictions about how progress actually happens.</p><p>Every organization carries an idea of a future it wants to reach. Strategy lives between today&#8217;s constraints and that future ambition. It tries to answer a deceptively simple question:</p><div class="pullquote"><p>What must be true for us to get there?</p></div><p>Only later did I realize that this understanding closely aligns with how thinkers like Richard Rumelt describe strategy, not as planning activity but as diagnosing reality and choosing coherent action based on that diagnosis.</p><p>Once I began thinking about strategy this way, something unexpected became obvious.</p><p>The best product managers I had worked with were already doing strategy work long before anyone called it strategy.</p><p>Great product managers constantly form beliefs. They observe users and conclude that adoption matters more than feature completeness. They recognize that workflow integration beats technological sophistication. They learn that solving one painful problem creates more value than delivering many impressive capabilities. They understand that timing inside an organization often matters more than innovation itself.</p><p>These are not execution decisions. They are strategic judgments.</p><p>Modern product thinking, shaped by voices such as Marty Cagan and Teresa Torres, increasingly frames discovery itself as decision-making under uncertainty. Product managers constantly test beliefs against reality. Products punish wrong assumptions quickly. Adoption declines. Value disappears. 
Reality corrects you.</p><p>Product management, at its best, is strategy operating under fast feedback loops.</p><p>The transition into AI Strategy did not replace this mindset. It expanded it.</p><p>Instead of only understanding users, I began trying to understand organizations. Instead of asking why a product succeeds, I asked why organizations repeatedly struggle to let good products succeed.</p><p>Strategy increasingly felt less like planning and more like sense-making. Organizational theorists such as Karl Weick describe organizations as systems constantly interpreting reality rather than executing perfectly designed plans. That description resonated deeply with what I observed in enterprise AI environments.</p><p>This distinction becomes especially visible in AI.</p><p>In startup environments, product strategy and company strategy often evolve together because organizations are small and adaptable. In enterprises, these layers separate. Organizational complexity introduces friction long before product quality becomes the deciding factor. This is why AI Strategy becomes necessary even before individual AI products can succeed.</p><p>An AI Product Strategy emerges from discovery. We study how work is actually done, where friction exists, and what outcomes matter. We form beliefs about how a specific AI product creates value. These beliefs translate into product principles, experiments, epics, and roadmaps. The goal is clear: build something people adopt and trust.</p><p>AI Strategy asks a different question entirely:</p><p><em>How must the organization evolve so AI products can succeed repeatedly?</em></p><p>Here, discovery shifts from users to systems. We observe data accessibility, governance structures, delivery models, incentives, organizational maturity, and cultural readiness. 
We form beliefs about how AI value can realistically emerge within that environment.</p><p>One strategic belief that fundamentally changed my perspective was simple:</p><p><strong>AI development must be treated as product development, not as one-time projects.</strong></p><p>While this realization may sound simple and almost obvious to experienced product managers, its implications are far-reaching. Once we understand that many AI challenges are strategic rather than technical, the perspective changes. The question is no longer how to build better models, but how organizations must evolve to let those models create value. Decisions around funding, governance, ownership, and delivery suddenly look different. AI stops being a project to deliver and becomes a capability to nurture. Success moves away from completion toward sustained adoption, and learning becomes continuous rather than episodic. AI product teams therefore cannot disappear after go-live. They must remain with the product, continuously learning, improving, and evolving it instead of moving immediately to the next promising AI use case.</p><p>Another belief follows almost inevitably. If AI products evolve continuously, organizations must continuously discover AI opportunities. Innovation cannot depend on isolated initiatives. It requires structured observation and prioritization. From this belief emerge capabilities such as scalable AI Opportunity Management. Not as temporary programs, but as permanent organizational muscles connecting strategy with product creation.</p><p>This is where strategy becomes beautiful to me.</p><p>Strategy is not about inventing actions. It is about understanding consequences. When beliefs are coherent, organizations begin aligning almost naturally. Structures reinforce direction. Investments gain meaning. Decisions stop competing and start supporting each other.</p><p>At the same time, strategy work is difficult because organizations are complex systems. 
Improving one area often creates pressure elsewhere. Automating a workflow may accelerate one team while overwhelming another. A technically strong AI product can fail because governance was not prepared. A well-articulated AI Strategy can exist without impact if no meaningful products ever emerge.</p><p>Strategy requires living with ambiguity longer than product work usually demands. Feedback loops slow down. Impact becomes indirect. Instead of shipping features, you shape conditions. Instead of solving a single problem, you influence how many problems can be solved in the future.</p><p>Stepping into an AI Product Strategy Lead role therefore did not mean moving away from products. Quite the opposite.</p><p>The role lives exactly between strategy and product execution.</p><p>On one side, it contributes to AI Strategy by helping define organizational beliefs, capabilities, and directions that allow AI to scale responsibly and effectively. On the other side, it stays deliberately close to AI products themselves. Guiding product teams. Challenging assumptions. Supporting AI Product Managers in strengthening their product strategies. Ensuring discovery remains grounded in reality. Translating organizational ambition into adoptable solutions.</p><p>The role constantly translates in both directions.</p><p>Strategy informs products.</p><p>Product reality informs strategy.</p><p>AI Strategy without proximity to products becomes theoretical.</p><p>AI Product Strategy without organizational alignment becomes fragile.</p><p>My work increasingly focuses on connecting both worlds.</p><p>This connects deeply to something personal for me. My LinkedIn profile background carries the sentence:</p><p><strong>Shaping a future where good AI products don&#8217;t fail for the wrong reasons.</strong></p><p>That sentence expresses how I understand strategy today.</p><p>Most AI products do not fail because models are weak. They fail because organizations were not ready. 
Because incentives discouraged adoption. Because discovery was skipped. Because governance arrived too late. Because strategy and product evolved separately instead of together.</p><p>Looking back, I do not see a transition away from product management. I see an expansion of product thinking itself. The same curiosity that once helped understand users now helps understand systems. The same discovery mindset now applies to organizations.</p><p>Seeing AI Strategy through the eyes of an AI Product Manager ultimately means refusing to separate strategy from products. It means staying close enough to real work to remain grounded, while thinking broadly enough to shape the environment in which that work succeeds. Strategy then stops being an abstract exercise and becomes a continuous dialogue between organizational direction and product reality. Every strategic belief must eventually prove itself in adoption, usage, and sustained value creation. And every product insight becomes an input to strategy itself, shaping how the organization learns and evolves.</p><p>In the end, AI Strategy is not about defining a future from a distance. It is about creating the conditions in which good AI products are given a real chance to succeed.</p><p>JBK &#128330;&#65039;</p>]]></content:encoded></item><item><title><![CDATA[#53 - We Are Partners in CrAIme.]]></title><description><![CDATA[Why Internal AI Products Only Succeed When We Build Them Together]]></description><link>https://www.jaserbk.com/p/53-we-are-partners-in-craime</link><guid isPermaLink="false">https://www.jaserbk.com/p/53-we-are-partners-in-craime</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 03 Aug 2025 11:08:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mT7B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><strong>#beyondAI </strong></p><p>When we begin building an AI product inside a company, especially one aimed at solving real problems for real teams, it&#8217;s easy to assume that everyone is already aligned - after all, they brought the idea, or they nodded to ours. But what I&#8217;ve learned again and again is that even when everyone&#8217;s nodding in the same direction, their understanding of what happens next can be worlds apart.</p><p>Some teams believe that their job is done once they&#8217;ve shared the idea. Others believe that saying yes opens the door to unlimited feature requests, as if the AI product team were now an internal agency. Both perspectives are understandable, and both will quietly derail an AI product if we don&#8217;t correct course early.</p><p>The truth is, getting from idea to a mature, usable product is not a handoff. It&#8217;s a journey. A long, effortful one. And the moment someone says &#8220;yes,&#8221; the real work begins. First, we have to understand the problem deeply enough to run a Quick Assessment. If it&#8217;s promising, we move into a Detailed Assessment, which means checking data availability, defining success, shaping the first version, estimating costs, and aligning across different teams and layers of the organization. Even then, we&#8217;re just getting started.</p><p>Once we begin building, we step into an entirely different reality: turning intent into experience, building something that works in practice, with all its edge cases, user behaviors, and change management needs. And then, even after the product goes live, the journey continues. Every new feature idea, every new user group, every seemingly small request must go through some form of structured evaluation. AI products are never really finished. They evolve. And with each evolution, complexity grows. That&#8217;s why it&#8217;s so important to make expectations transparent early on.</p><p>This isn&#8217;t about bureaucracy. It&#8217;s about clarity. 
If we don&#8217;t name the expectations, we leave too much room for misunderstanding. The product team becomes reactive. Stakeholders become frustrated. Strategic focus slips. But if we do make things transparent, if we show what this journey really involves, and who needs to be involved at each stage, we create shared ownership. We give everyone a map, not just a promise.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mT7B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mT7B!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!mT7B!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!mT7B!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!mT7B!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mT7B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png" width="1200" height="1200" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1934943,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/169973517?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mT7B!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!mT7B!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!mT7B!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!mT7B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcaa756c-ce09-4de9-9879-a5f7ab36128c_1200x1200.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>The Reality of the Internal AI Product Journey</strong></h2><p>Building an AI product is not a single event. It&#8217;s not an app you build in a hackathon and hand over. 
In practice, AI product development has many gates, each requiring decisions, commitment, and careful attention.</p><ul><li><p>We start with Problem Discovery, where we try to understand if the problem really needs an AI solution - or if it&#8217;s a process, policy, or visibility issue in disguise.</p></li><li><p>Then comes the Quick Assessment, where we decide whether it&#8217;s worth exploring further based on impact potential, data availability, and feasibility.</p></li><li><p>If it&#8217;s still promising, we enter the Detailed Assessment, where we estimate the cost of development and ownership, build a lightweight prototype, and align with legal, security, and governance partners.</p></li><li><p>And if the product idea passes all this, we enter the 0&#8594;1 phase. The first real attempt to deliver value through a working product.</p></li><li><p>But even when the product is live, it&#8217;s not over. Every new request, every additional user group, every iteration - it all requires us to revisit what we&#8217;ve built and make sure it still holds.</p></li></ul><p>AI products are not static. They are living solutions that grow with the business, and so the people involved must grow with them. Which brings us to a conversation we want to have with every team we build for.</p><h2><strong>A Letter to Our Stakeholders</strong></h2><p><strong>We Are Partners in Crime, Which Turned Out to Be AI</strong></p><p>Dear Colleagues,</p><p>When we agreed to explore an AI solution together, we didn&#8217;t just start a project. We entered a kind of partnership - not the ceremonial kind, but the kind that unfolds through real work, shared problem-solving, and the occasional chaos that comes with doing something new inside a big company. It turns out, our &#8220;<em>crime</em>&#8221; is AI. 
But the story we&#8217;re writing together is really about something more human: <em>solving meaningful problems, with all the constraints and complexity that come with working in the real world.</em></p><p>So before we go any further, we&#8217;d like to be clear with you - not just about what to expect from us, but also what we&#8217;ll need from you. Because AI product development isn&#8217;t a service. It&#8217;s a collaboration. And the success of what we&#8217;re building depends on how well we stay aligned.</p><p><strong>What You Can Expect From Us</strong></p><ul><li><p><em>A structured approach.</em> We won&#8217;t throw tech at your problem. We&#8217;ll take it seriously, understand it deeply, and explore whether AI is the right fit before we commit to building.</p></li><li><p><em>Clear steps.</em> From Quick Assessment to Detailed Assessment, and later into development, we&#8217;ll guide the process and make sure you always know where we are and what&#8217;s coming next.</p></li><li><p><em>A real product mindset.</em> If we agree to build something, it won&#8217;t be a one-off experiment. We aim to build something usable, valuable, and maintainable.</p></li><li><p><em>Challenge and honesty.</em> If your expectations are too high, or the timeline unrealistic, or if we believe the solution you have in mind won&#8217;t work, we&#8217;ll tell you. Respectfully, but clearly. We owe you that.</p></li><li><p><em>A team that listens.</em> We won&#8217;t disappear into our backlog. We&#8217;ll involve you in the right moments, bring you into design and testing, and keep your voice part of the journey.</p></li></ul><p><strong>What We&#8217;ll Expect From You</strong></p><ul><li><p><em>Active participation.</em> The discovery phase needs your time, your business logic, your data, and your real use cases. We can&#8217;t invent those on our own.</p></li><li><p><em>Commitment beyond the idea.</em> Saying yes to an idea is just the start. 
We&#8217;ll need you in refinement, decision-making, and especially during the early product iterations.</p></li><li><p><em>Openness to the unknown.</em> Not everything will go as planned. We&#8217;ll need space to experiment, sometimes fail fast, and adapt. That&#8217;s part of the process, not a sign of failure.</p></li><li><p><em>Shared prioritization.</em> Once live, your team might come with new ideas, changes, or user groups. That&#8217;s good. But each one needs to be assessed, prioritized, and resourced. We&#8217;ll need your help to make smart decisions.</p></li><li><p><em>Executive sponsorship.</em> If you&#8217;re not the decision-maker, we&#8217;ll need your leadership to help align with those who are. This kind of product journey works best when it&#8217;s visible and supported from the top.</p></li></ul><p>This letter isn&#8217;t a contract, and it&#8217;s not a checklist either. It&#8217;s an invitation to approach this like a joint venture. </p><div class="pullquote"><p>We don&#8217;t build AI products for you. We build them with you. </p></div><p>And if we do this right, the result won&#8217;t just be another internal tool. It will be a real capability that solves something meaningful in your world, and keeps delivering long after we&#8217;ve shipped the first version.</p><p>Let&#8217;s stay close. Let&#8217;s be honest with each other. Let&#8217;s keep learning as we go. And if we hit some bumps along the way, good. 
That&#8217;s how we know we&#8217;re doing something that&#8217;s worth it.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!_nV0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff616b28-0bf9-4231-8a3a-56adc3d19461_1200x1200.png" width="1200" height="1200" alt="" loading="lazy"></figure></div><p><strong>Product Maturity: What It Really Means</strong></p><p>One of the most important things we&#8217;ve learned in our AI product work is this: just because an AI product ambition is approved, it doesn&#8217;t mean we don&#8217;t need you anymore. And just because something is live doesn&#8217;t mean it&#8217;s done. If we don&#8217;t set the right expectations around what it means to grow an AI product - not just build one - we end up with solutions that look complete on the surface, but break down when used in practice, or stall the moment new requests emerge.</p><p>Product maturity doesn&#8217;t mean feature-rich. It means: the solution is proven, adopted, stable, and trusted by the business and users.
It means we&#8217;ve not only built the solution but also put the processes in place to operate it, evolve it, and support it responsibly.</p><p>That kind of maturity only comes with time, usage, learning, and adjustment. Not just from the AI team, but from everyone involved.</p><div><hr></div><p><strong>Why New Requests Aren&#8217;t Free</strong></p><p>It&#8217;s completely natural that once something useful is live, more ideas emerge. And in most cases, we welcome those. But it&#8217;s important to be clear: every new request is a new mini-product. Even if it seems like &#8220;<em>just another button</em>&#8221; or &#8220;<em>can we add one more user group,</em>&#8221; each of those changes needs discovery, prioritization, testing, and enablement. And often, they carry downstream implications for governance, user support, and compliance.</p><p>We apply the same mindset to new requests that we used at the beginning:</p><ul><li><p>What&#8217;s the problem we&#8217;re solving?</p></li><li><p>What user behavior do we expect to change?</p></li><li><p>What is the estimated value?</p></li><li><p>What is the cost of ownership?</p></li></ul><p>If we don&#8217;t ask those again, we risk building features that slow us down more than they help. Or worse - that break something we&#8217;ve already made work.</p><div><hr></div><p><strong>The Total Cost of Ownership (TCO)</strong></p><p>There&#8217;s one more thing we&#8217;d like to be honest about. Building an AI product can be relatively fast. Owning one is where the real cost shows up.</p><p>Many internal AI projects seem cheap in the beginning. A couple of developers, a use case, maybe a pre-trained model. 
But the true costs arrive later:</p><ul><li><p>Integrating it into real workflows</p></li><li><p>Handling errors and exceptions</p></li><li><p>Training and re-training users</p></li><li><p>Monitoring for AI quality</p></li><li><p>Meeting security and compliance standards</p></li><li><p>Supporting expansion to more teams</p></li></ul><p>That&#8217;s why we assess so much before building. We&#8217;re not trying to block innovation. We&#8217;re trying to protect our focus and prioritize what&#8217;s worth owning.</p><div><hr></div><h4><strong>Roles Across Phases: Who&#8217;s Involved and How</strong></h4><p>To make this journey work, we can&#8217;t walk alone. Here&#8217;s a rough sketch of how collaboration shifts across phases - and what&#8217;s typically needed from you as our business partners and domain experts along the way:</p><p><strong>1. Problem Discovery</strong></p><ul><li><p><strong>Our role</strong>: Facilitate the discovery process, frame the context, and start mapping how value could be created.</p></li><li><p><strong>Your role</strong>: Clarify the problem, walk us through the current process, and help us understand where the real pain points are.</p></li></ul><p><strong>Your time investment</strong>: Around 3&#8211;4 hours spread over one week.</p><div><hr></div><p><strong>2. Quick Assessment</strong></p><ul><li><p><strong>Our role</strong>: Evaluate the technical feasibility and early fit for AI, based on data, problem framing, and business priority.</p></li><li><p><strong>Your role</strong>: Share your expectations, validate that the problem is worth solving, and connect us to the right domain or technical experts.</p></li></ul><p><strong>Your time investment</strong>: 2&#8211;3 hours, focused and compact.</p><div><hr></div><p><strong>3. 
Detailed Assessment</strong></p><ul><li><p><strong>Our role</strong>: Architect the solution approach, run early prototypes, and estimate effort, value, and risks.</p></li><li><p><strong>Your role</strong>: Provide access to relevant data, co-shape the use case logic, and help us define what success looks like.</p></li></ul><p><strong>Your time investment</strong>: 6&#8211;10 hours, typically spread across two weeks.</p><div><hr></div><p><strong>4. 0&#8594;1 AI Product Development</strong></p><ul><li><p><strong>Our role</strong>: Build the product, test and fix iteratively, and refine based on feedback.</p></li><li><p><strong>Your role</strong>: Be part of ongoing validation, join feedback rounds, test early versions, and help with first-user onboarding.</p></li></ul><p><strong>Your time investment</strong>: Around 2 hours per week, usually for 6&#8211;8 weeks.</p><div><hr></div><p><strong>5. Go-Live and After</strong></p><ul><li><p><strong>Our role</strong>: Ensure the product is stable, monitor its performance, and maintain functionality.</p></li><li><p><strong>Your role</strong>: Support adoption by onboarding users, helping interpret early feedback, and guiding change in your team or department.</p></li></ul><p><strong>Your time investment</strong>: About 1 hour per week, depending on the scale of rollout.</p><div><hr></div><p><strong>6. New Requests</strong></p><ul><li><p><strong>Our role</strong>: Assess technical feasibility, estimate effort, and evaluate alignment with the product roadmap and total cost of ownership.</p></li><li><p><strong>Your role</strong>: Support in reframing the need, explain the user or business problem, help validate the potential impact, and be ready to prioritize.</p></li></ul><p><strong>Your time investment</strong>: Case by case - usually at least 2&#8211;3 hours per new request.</p><div><hr></div><h4><strong>An Invitation</strong></h4><p>AI product development isn&#8217;t just about building smart solutions. 
It&#8217;s about building smart relationships - with each other, with the problem, and with the organization we&#8217;re trying to serve. That takes effort. It takes structure. And above all, it takes clarity. We&#8217;ve seen what happens when clarity is missing. And we&#8217;ve also seen the power of a shared journey, when roles are defined, expectations are aligned, and everyone involved understands what it really takes to bring something new into the world and make it stick.</p><p>If you&#8217;re ready for that kind of partnership, not just for the launch, but for the lifecycle, then we&#8217;re ready to build with you.</p><p>JBK &#128330;&#65039;</p>]]></content:encoded></item><item><title><![CDATA[#52 - One KPI to Rule Them All - Part 2/3]]></title><description><![CDATA[How to Measure Adoption in Internal AI Product Work]]></description><link>https://www.jaserbk.com/p/52-one-kpi-to-rule-them-all-part</link><guid isPermaLink="false">https://www.jaserbk.com/p/52-one-kpi-to-rule-them-all-part</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 27 Jul 2025 12:01:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0YkB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1a1acdf-cd7c-4973-ab9c-26b1609014e1_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>#beyondAI</strong> </p><p>In <a href="https://www.jaserbk.com/p/51-one-kpi-to-rule-them-all-part?r=2swyhe&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">Part 1</a>, I made a claim: </p><div class="pullquote"><p>Adoption is the one KPI that rules all others. </p></div><p>Not because it replaces business metrics, but because it is the only signal available early enough to matter. It tells you whether what you&#8217;ve built is being used. Because without usage, every other success metric stays out of reach.</p><div><hr></div><p><a href="https://www.jaserbk.com/p/51-one-kpi-to-rule-them-all-part">#51 - One KPI To Rule Them All (Part 1/2)</a></p><div><hr></div><p>This belief didn&#8217;t arrive all at once. It came from years of building internal AI products that technically worked, but quietly failed. And still, they were celebrated.</p><p>That never felt right.</p><p>How can we, as a team, sometimes as a department, celebrate something without knowing if it created real value for the people who were supposed to use it?</p><p>Was I wrong? Or is this simply how big companies define success?</p><p>Back when I owned a startup and worked in others, we never celebrated a launch just for launching. We celebrated because now we could finally test whether we had built something real.
Something people wanted. Something people would pay for. A launch was never the end. It was the moment the real work began: </p><ul><li><p>finding product-market fit, </p></li><li><p>solving real user problems, </p></li><li><p>and learning if our solution deserves to grow.</p></li></ul><p>But when I entered the world of large enterprises, I noticed something different.</p><p>Delivery became the final goal.</p><p>&#8220;<em>Brilliant, team. Let&#8217;s move on to the next project</em>.&#8221;</p><p>And honestly, I became allergic to that mindset.</p><p>Because these weren&#8217;t real products. They were internal service deliveries. We built what someone up the chain had asked for, without taking responsibility for whether it actually worked. Without knowing whether it ever made a difference.</p><p>For the last ten years, I&#8217;ve been on a mission to shift that.</p><p>To establish more solid product thinking in internal AI work. To treat these solutions not as technical outputs, but as real products. Not as projects, but as investments in value creation.</p><p>And that shift always brings me back to the same question:</p><p><strong>How can we truly know that what we&#8217;ve built is valuable?</strong></p><p>How can we show, without guesswork or slide decks, that something meaningful is happening?</p><p>For me, the answer begins with user adoption.</p><p>Because in internal AI, we don&#8217;t have all the signals that external products offer. There&#8217;s no revenue line. No churn metric. No market traction to point to. We can&#8217;t always follow the money, but we can follow the behavior.</p><p>And over the years, I&#8217;ve learned this: <br><em>Adoption is the only KPI worth tracking first</em>. <em>It&#8217;s the beginning of everything else.</em></p><p>So in this second part of the series, let&#8217;s go deeper. Let&#8217;s unpack what adoption really means. 
Let&#8217;s look at how to measure it and let&#8217;s explore how it becomes your most practical tool, not just to prove value, but to improve your product, align your stakeholders, and earn the credibility to keep building.</p><p>Because if adoption is the signal that rules them all, we owe it to ourselves to treat it like one.</p><div><hr></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!0YkB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1a1acdf-cd7c-4973-ab9c-26b1609014e1_1200x1200.png" width="1200" height="1200" alt="" loading="lazy"></figure></div><div><hr></div><h2>What Adoption Means for Internal AI Products</h2><p>Adoption is a word we all use. But most of us mean different things when we say it. Sometimes we mean access. Sometimes usage. Sometimes trust. Sometimes, just awareness. In some dashboards, adoption is binary: &#8220;<em>Is it being used or not?</em>&#8221; In some project meetings, it&#8217;s reduced to a launch checklist: &#8220;<em>Did we roll it out to users?</em>&#8221; And sometimes, worst of all, it&#8217;s assumed to be automatic. As if deployment equals adoption.</p><p>It doesn&#8217;t.</p><p>In internal AI product work, adoption is rarely immediate. And it&#8217;s never guaranteed. Not even when the model is strong. Not even when the integration is clean. Not even when the business case is sound.
Because the truth is this:</p><div class="pullquote"><p>Adoption is not a feature of the product. It&#8217;s a behavior.</p></div><p>It lives outside the product team. It lives in the people you&#8217;re trying to help. In how they choose to work, what they&#8217;re trying to get done, and what alternatives they already trust. That&#8217;s why we need to be much more precise about what we mean by it.</p><p>To me, internal AI product adoption means this:</p><blockquote><p>People are using your product voluntarily and consistently, for the purpose it was designed to support &#8212; and sometimes even beyond that.</p></blockquote><p>Let&#8217;s unpack that. And begin with the most important one.</p><p><strong>Voluntarily</strong> means it wasn&#8217;t forced. People chose it because it made sense. Because it helped. Because the alternative was worse, or simply more work.</p><p>In an enterprise setup, it&#8217;s common that an internal tool is introduced and employees are required to use it. Think CRM systems, where there is no alternative. Usage is enforced. Compliance is easy to measure. Adoption, in this case, becomes inevitable.</p><p><em>But for AI products, this rarely holds true.</em></p><p>By nature, AI products are designed to improve workflows, not replace them. They assist, guide, recommend, or automate. But rarely do they become the only way to get a task done. The outcome still matters more than the method. As long as the deliverable arrives, no one questions how it was achieved.</p><p>That&#8217;s what makes internal AI Product Adoption harder to earn.</p><p><strong>Consistently</strong> means the behavior is stable. Not a spike. Not a one-time test. Not a pilot use case that disappears once the sponsor moves on.</p><p>Real adoption creates patterns. Patterns you can observe, learn from, and improve. And consistent usage is what matters most. Because at the end of the day, that is how you prove your product is trusted to do the job it was built for. 
Not once. But every time. It becomes the default. Not just an option.</p><p>So once you&#8217;ve built a product and taken real steps to drive adoption, reaching that point tells you something powerful. It means you&#8217;ve achieved alignment. Between what you built, what users actually need, and how they truly behave.</p><p>That&#8217;s why a technically &#8220;successful&#8221; product can still fail. Because </p><div class="pullquote"><p>AI product adoption doesn&#8217;t happen when you push something into a process. It happens when the process pulls it in.</p></div><p>It&#8217;s not a rollout. It&#8217;s a fit. And guess what, we have it, too. The famous product-market fit. Only here, it has a different name. <strong>Product-organization fit.</strong></p><p>The point where your AI product doesn&#8217;t just exist inside the company. It belongs.</p><p>And fit always emerges through comparison. Users try your product. They compare it, consciously or not, to whatever they were doing before.</p><p><em>Manual steps. Excel hacks. Asking colleagues. Ignoring the problem.</em></p><p>If your product helps more than it hurts, if it feels smoother, faster, safer, or more reliable, they come back. If it doesn&#8217;t, they don&#8217;t. Adoption happens when the alternative is worse.</p><p>And once that&#8217;s true, once you&#8217;ve reached the invisible tipping point where the user stops asking, &#8220;<em>Should I use this?</em>&#8221; and simply starts using it, that&#8217;s when the product becomes real.</p><p>Not because you said so. 
Because they did.</p><div><hr></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!d56j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa487b2aa-6bfd-4b4e-95b2-6387bbe3724b_1200x1200.png" width="1200" height="1200" alt="" loading="lazy"></figure></div><div><hr></div><h2>How to Track AI Adoption in a Company</h2><p>While <strong>adoption rate</strong> is the KPI we care about most, it doesn&#8217;t appear on its own. It has to be constructed. Built from the bottom up. It&#8217;s a composite, derived from smaller, observable metrics that together signal whether adoption is really happening.</p><p>It only makes sense when you know who the product is for and whether the people you had in mind are actually using it.
In other words: before you can calculate adoption, you need two things.<br><br>(1) A clear picture of your <strong>initial user base</strong>.<br>(2) A way to measure <strong>voluntary and consistent usage</strong> across that base.</p><p>Without these two, you can&#8217;t derive a meaningful adoption rate.</p><div><hr></div><h3>01 The Initial User Base</h3><blockquote><p>Know the People You Choose First</p></blockquote><p><em>Adoption rate</em> already tells you that you need a denominator.<br>And here&#8217;s where most teams accidentally sabotage their own success.</p><p>In internal AI product development, you almost always start with a specific user group. Either they approached you and asked for AI help, or you pitched an idea to them and got them on board. This team becomes your initial user base. It might be sales, ops, or marketing from a particular department. But it could just as easily be a niche team from anywhere in the organization.</p><p>Your solution is meant to support their daily work. That&#8217;s the scope. And that&#8217;s the user base you should measure adoption against, at first.</p><p>It&#8217;s not only important to define <em>who</em> your initial users are, but also <em>how many</em> there are.</p><p><strong>For example</strong>, the company may have 100 salespeople in total.<br>But if your product is designed for, and piloted with, just 20 in the B2B segment, <em>stick with 20</em> for now. Don&#8217;t get distracted by the bigger number.</p><p>The reason is twofold:</p><ol><li><p><strong>You can&#8217;t assume fit across the board.</strong><br>Just because a product works for 20 users doesn&#8217;t mean it fits the workflows, needs, or systems of the other 80. Different regions, different setups, different behaviors. 
It&#8217;s a false assumption.</p></li><li><p><strong>You&#8217;ll dilute your metric.</strong><br>If you count all 100 salespeople in your denominator while only 20 were ever involved, your adoption rate looks artificially low and you&#8217;ll find yourself explaining numbers that don&#8217;t reflect reality.</p></li></ol><p>Let&#8217;s make this concrete.</p><p>You build an AI tool for the 20 people in your B2B sales team. After rollout, 18 of them use it consistently. That&#8217;s a 90% adoption rate. Strong signal. Great outcome.</p><p>But a few weeks later, someone tells you there&#8217;s another sales team with 80 people. You add them to your target base, even though they haven&#8217;t seen the tool yet. Now, your adoption rate drops to 18%.</p><p>Technically correct.<br>But totally misleading.</p><p>That&#8217;s why you need to define and track user groups separately. Only expand your base when it makes sense to. Otherwise, you confuse the narrative and weaken the story your metrics are telling.</p><div><hr></div><h3>02 Voluntary Usage</h3><blockquote><p>When People Choose Your Product</p></blockquote><p>Voluntary means people are choosing your product without being forced to.</p><p>And in internal AI scenarios, that&#8217;s almost always the case. Unlike CRM systems or time-tracking tools, no one mandates the use of your AI assistant. People already have their own workarounds. Their spreadsheets. Their templates. Their muscle memory.</p><p>So if they do use your product, it&#8217;s a signal. It means:</p><p>&#9989; Your product makes their job easier.<br>&#9989; Your product is a better alternative.<br>&#9989; They&#8217;ve chosen it over what they had before.</p><p>That&#8217;s worth tracking.</p><p>Here&#8217;s one of the strongest indicators:<br><strong>New users over time, beyond your original scope.</strong></p><p>As described above, most internal AI products begin with a tightly scoped user group, often a single team who co-develop or pilot the idea. 
That&#8217;s your baseline.</p><p>But if the product has real value, you&#8217;ll begin to see interest from outside that circle.<br>Someone asks if they can try it. Someone else shows up in the logs. Suddenly, new names appear, without formal rollout.</p><p>That&#8217;s not just usage. That&#8217;s <em>organic proof of value.</em></p><p>So here are two simple, telling metrics to track:</p><ul><li><p><strong>Unique users / usages per month</strong></p></li><li><p><strong>Unique users / usages per department per month</strong></p></li></ul><p>You don&#8217;t need perfection here. But if those numbers grow, especially from teams not part of your original user group, it&#8217;s a sign that voluntary adoption is unfolding.</p><p>And that matters. </p><div class="pullquote"><p>Nobody voluntarily uses a product that makes their job harder.</p></div><h3>03 Consistent Usage</h3><blockquote><p>When Your Product Becomes a Habit</p></blockquote><p>Now let&#8217;s move to the deeper layer. If someone uses your product once, it hasn&#8217;t been adopted. Even if the demo was a hit and the pilot went well. Real adoption happens when your product becomes <strong>part of how work gets done</strong>.</p><p>That means behavior that&#8217;s repeatable. Predictable. Consistent.</p><p>But consistency is always <em>contextual. </em>Some products support daily tasks. Others support monthly processes. One AI product might be used 40 times a week. Another, once a quarter. 
Both can be adopted, if they match the rhythm of the task they support.</p><p>So before tracking anything, ask yourself:</p><ul><li><p><em>What&#8217;s the expected usage frequency for this product?</em></p></li><li><p><em>Does the current behavior align with that expectation?</em></p></li><li><p><em>Are users coming back, or was it a one-time test?</em></p></li></ul><p>Once you&#8217;ve clarified that, these are the metrics worth watching:</p><ul><li><p><strong>Usage frequency per user</strong></p></li><li><p><strong>Repeat usage across relevant timeframes (weekly, monthly, quarterly)</strong></p></li></ul><p>These show whether your product is truly becoming embedded in how people work or if it&#8217;s just another tool they tried once and left behind. </p><p>These basic signals are already enough to give you a sense of adoption rate. If people are coming back, using it again and again, you&#8217;ve done something right.</p><div><hr></div><h3>04 Calculating Adoption Rate</h3><p>Let&#8217;s try to bring this all together with a working example. We&#8217;ll use what we&#8217;ve learned to create a first draft of an <strong>adoption rate formula</strong>, based on a potentially real internal AI product: the <strong>Tender Assistant</strong>. Let&#8217;s see what adoption really looks like when it&#8217;s measured with care.</p><p>This product is designed to support teams working on public tenders, automating data collection, identifying inconsistencies in documents, and helping prepare responses more efficiently. It wasn&#8217;t meant to replace a mandated process, nor to be rolled out through a mandatory toolchain. 
It was just an enhancement, a GenAI use case meant to improve a high-effort workflow.</p><p>So, how do you track adoption here?</p><div><hr></div><h4>Step 1: Define the Initial User Base</h4><p>We start with one department: the Strategic B2B Sales team, responsible for responding to complex tenders in a specific region.</p><p>There were 12 active users involved in the process: bid managers, solution architects, and legal reviewers. That became our first user group.</p><p>We won&#8217;t include other sales regions yet, nor count adjacent teams. We focus on the 12. This gives us a clear denominator for early adoption tracking.</p><div><hr></div><h4>Step 2: Track Voluntary Usage</h4><p>As mentioned before, no one is forced to use an internal AI product. That&#8217;s the nature of this work. We introduce the product. We support the onboarding. We train the core users. And then, we wait. But we don&#8217;t disappear.</p><p>We stay available and close. We answer questions, fix what&#8217;s broken, and listen to what&#8217;s missing. Because adoption doesn&#8217;t happen in a moment. It unfolds over time. </p><p>In the first few weeks, we start to see the signals.</p><ul><li><p>11 out of 12 users accessed the Tender Assistant at least once.</p></li><li><p>9 out of 12 returned to it more than once.</p></li><li><p>6 out of 12 used it across at least three separate tender processes.</p></li></ul><p>That already tells us something: people are choosing the product, not just trying it, but using it when it counts.</p><p>But then came the stronger sign: <br>In the second month, the Large Deals unit - a completely separate team - reached out and asked for access. They had heard about the tool and wanted to see if it could help them, too. We hadn&#8217;t pitched it. They had asked for it. That&#8217;s not just growth. That&#8217;s <strong>pull</strong>.</p><p>That&#8217;s adoption trying to spread. 
<br>Still, we treat them differently and carefully.</p><p>We don&#8217;t add them to the original adoption rate formula just yet. We start a new onboarding journey, map their specific workflows, and learn what overlaps and what doesn&#8217;t. Some of the features work. Some don&#8217;t. Their usage is still low, but it&#8217;s early, and more importantly, we&#8217;re tracking them in a separate bucket.</p><div><hr></div><h4>Step 3: Track Consistent Usage</h4><p>We look at usage frequency aligned to the actual rhythm of tenders. This team responded to 2&#8211;3 tenders per month, typically with a 1&#8211;2 week turnaround. So we expected the tool to be used at least once per tender.</p><p>What we saw:</p><ul><li><p>7 out of 12 users used the tool during all tenders in the second month.</p></li><li><p>The average usage per user per month rose from 1.3 to 2.7.</p></li><li><p>Usage logs showed task-specific prompts being reused - a good sign of habit formation.</p></li></ul><p>This confirmed that it wasn&#8217;t just being tested. It was becoming part of the process.</p><div><hr></div><h4>Step 4: Build the Adoption Rate Formula</h4><p>We kept it simple to start:</p><p><strong>Adoption Rate = </strong>(Number of consistent, voluntary users in initial group)<strong> / </strong>(Total number of users in initial group)</p><p>In this case:</p><ul><li><p>7 users showed consistent usage (aligned with tender frequency)</p></li><li><p>12 users in initial group</p></li></ul><p>&#8594; <strong>Adoption Rate = 58%</strong></p><p>But that number doesn&#8217;t stand alone. We tracked growth as well.</p><p>After onboarding the Large Deals team (10 new users), we tracked their behavior separately for a month before merging them into the broader adoption metric. This kept our insights clean and our stories honest.</p><p></p><div><hr></div><p>So far, we&#8217;ve talked about how to measure adoption rate. 
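Step 4's formula is simple enough to sanity-check in a few lines of Python. This is a minimal sketch using the Tender Assistant numbers from the example; the function name and the error handling are illustrative assumptions, not part of any real tooling:

```python
def adoption_rate(consistent_users: int, initial_user_base: int) -> float:
    """Share of the *initial* user base using the product voluntarily
    and consistently. The denominator stays fixed to the initial group."""
    if initial_user_base <= 0:
        raise ValueError("initial user base must be positive")
    return consistent_users / initial_user_base

# Tender Assistant example: 7 consistent users out of the 12-person
# initial group.
rate = adoption_rate(consistent_users=7, initial_user_base=12)
print(f"{rate:.0%}")  # -> 58%
```

A newly interested group, like the Large Deals unit in the example, would get its own call with its own denominator, and only be merged into the base once that expansion is deliberate.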
And in this case, we focused on signals from a GenAI use case - the Tender Assistant.</p><p>These metrics are foundational. But depending on the nature of your AI product, they might not be enough. Not every AI solution looks the same. Some are workflow assistants. Others are quiet automation. Some surface insights. Others shape decisions.</p><p>And depending on how your product is used - and by whom - different signals will matter. What counts as &#8220;real usage&#8221; for one AI product might be irrelevant for another.</p><p>That&#8217;s why we need to expand our view.</p><p>And yes - even though I said this would just be a two-part series - there&#8217;s more to unpack.</p><p>In the next article(s), we&#8217;ll explore:</p><ul><li><p><strong>Different AI Products, Different Signals</strong></p></li><li><p><strong>How to Use Adoption to Drive Product Decisions</strong></p></li><li><p><strong>Using Adoption in Stakeholder Conversations</strong></p></li><li><p><strong>Adoption and Long-Term Funding</strong></p></li></ul><p>Hope to see you then.</p><p>JBK &#128330;&#65039;</p><p></p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[#51 - One KPI To Rule Them All - Part 1/3]]></title><description><![CDATA[The One Metric That Unlocks Every Other Sign of AI Value]]></description><link>https://www.jaserbk.com/p/51-one-kpi-to-rule-them-all-part</link><guid isPermaLink="false">https://www.jaserbk.com/p/51-one-kpi-to-rule-them-all-part</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 20 Jul 2025 15:41:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BL5O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e23dd4b-28c8-470b-ab12-18ebe3e8f7f4_1200x1200.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><strong>#beyondAI </strong></p><p>If you build internal AI products, you&#8217;ve probably felt it: the disconnect between all the work you&#8217;re doing and the recognition you&#8217;re not getting. The pitch goes well. The prototype works. The model scores look great. But months later, you&#8217;re still struggling to answer one quiet, persistent question:</p><p><em>&#8220;Did it really change anything?&#8221;</em></p><div><hr></div><p>This series is for those of us who carry that question around. Not just as a reporting requirement, but as a personal weight. Because building AI is no longer the hard part.</p><p>Proving it mattered is where most teams falter. And what makes it harder is that the outside world doesn&#8217;t see the difference. External AI teams get clean metrics: <em>revenue uplift, conversion rates, churn rate</em>. But inside the enterprise, success hides behind foggy processes, political handoffs, and silence. Sometimes your solution is used, but invisible. Sometimes it&#8217;s unused, but still alive in infrastructure. And sometimes it&#8217;s brilliant on paper, but completely irrelevant to how work actually happens.</p><p>In this series, I&#8217;ll explore why internal teams need a different approach to measuring value. 
Because in these enterprise environments, what looks like success can quickly become a mirage.</p><p>And unless we learn to measure what matters &#8212; early, honestly, and with the business in mind &#8212; AI will continue to be a story of missed potential.</p><p>Let&#8217;s change that.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BL5O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e23dd4b-28c8-470b-ab12-18ebe3e8f7f4_1200x1200.png"><img src="https://substackcdn.com/image/fetch/$s_!BL5O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e23dd4b-28c8-470b-ab12-18ebe3e8f7f4_1200x1200.png" width="1200" height="1200" alt="" loading="lazy"></a></figure></div><div><hr></div><h2><strong>The Value Mirage of Internal AI</strong></h2><blockquote><p><em>Why success is harder to prove than most teams expect</em></p></blockquote><p>Internal AI teams are growing. Their ambition is high. Their technical capabilities are maturing. Models are getting stronger. Prototypes come together faster. And the tooling, thanks to foundation models and better infrastructure, is finally catching up to the promise of enterprise AI.</p><p>But despite all this visible progress, one question still lingers at the core of almost every internal AI product conversation:</p><p><em>&#8220;How do we know this actually delivered business value?&#8221;</em></p><p>And often, there&#8217;s no clear answer. Not because people aren&#8217;t trying, but because in the world of internal AI, proving value is uniquely difficult. 
Much harder than it seems from the outside. And often far harder than expected by stakeholders who equate technical performance with business success.</p><p>It&#8217;s not that these products fail outright. In fact, that&#8217;s what makes this so tricky. They work. They run. They score. They process. They sometimes even deploy. But something&#8217;s missing. Something that&#8217;s felt in the silence that follows a well-rehearsed demo, or in the polite nods after a slide titled &#8220;accuracy &gt; 90%.&#8221;</p><p>That silence is the absence of connection between what was built and what the business feels. It&#8217;s the moment when your audience is no longer listening to how it works. They&#8217;re wondering why it matters.</p><p>At the beginning of any internal AI journey, the terrain is foggy. We usually start with a hunch, a pain point, or a well-meant ambition like &#8220;<em>let&#8217;s automate this process</em>&#8221;. But when it comes to identifying the right KPI &#8212; the one that will later prove the solution&#8217;s worth &#8212; there&#8217;s rarely a straight answer.</p><p>And to be honest, that&#8217;s normal. </p><p>You don&#8217;t always know upfront what to measure. You don&#8217;t always know how value will show up. Maybe it&#8217;s cost avoidance. Maybe it&#8217;s faster cycle time. Maybe it&#8217;s fewer escalations, better targeting, or less manual rework. But these outcomes often sit far downstream from the product itself. They depend on behavior change, process integration, and adoption across multiple teams. And that makes them slow, indirect, and politically fragmented.</p><p>So the question of &#8220;<em>what should we measure?</em>&#8221; isn&#8217;t just hard. It can feel paralyzing.</p><p>And yet, most of us are still asked to provide a business case before we build. To define impact. To attach a number. We try our best. We make projections. We pick a few KPIs that might signal value later. 
But the truth is: we don&#8217;t know yet. We can&#8217;t know yet.</p><p>This is what makes internal AI different. In external products, success leaves a trail. Users pay, or they churn. Growth can be tracked. Revenue is visible. There are signals that tell you whether your product has found a market or not.</p><p>But inside the enterprise that trail disappears.</p><p>There is no revenue line for your AI product. No invoice to point to. No CAC or LTV. Just internal teams who may or may not use what you&#8217;ve built, and may or may not tell you when they stop. Your AI might be deployed, but bypassed. Integrated, but distrusted. Impressive, but irrelevant.</p><p>You rarely know the moment your product fails. Because internal failure is quiet. It doesn&#8217;t come with angry customers or public reviews. It comes with workarounds. With shadow spreadsheets. With tools that are &#8220;live&#8221; in infrastructure but dead in behavior. It comes with silence.</p><p>And in that silence, a deeper risk starts to grow: <strong>credibility loss</strong>.</p><p>Because the business remembers the investment. They remember the pitch. They remember the promise. But if no one can confidently say what changed, the narrative starts to shift. &#8220;AI&#8221; becomes something that consumes resources, not something that creates value. Sponsors lose interest. Teams lose momentum. And the next idea becomes harder to fund. Not because it&#8217;s worse, but because trust has eroded.</p><p>This is the real danger. Not technical failure, but the slow erosion of confidence in internal AI as a whole.</p><p>We&#8217;ve all seen it. Maybe we&#8217;ve even built it. That internal AI product ambition that did everything right on the surface &#8212; accurate predictions, beautiful dashboards, flawless deployment &#8212; but ultimately delivered nothing of value. Not because the product was broken. But because no one used it. 
Or because the way it was used never translated into measurable change. Or because the people it was supposed to help never changed how they worked.</p><p>These aren&#8217;t edge cases. They&#8217;re common. And they&#8217;re exhausting. Because the effort is real. The intentions are good. But the outcome? The outcome vanishes. And it leaves behind a question that haunts many internal AI teams more than they&#8217;d like to admit:</p><p>&#8220;<em>Did we build something valuable, or just something impressive?</em>&#8221;</p><p>But I think we can overcome that struggle with just one KPI. At least at the beginning. I&#8217;d like to describe my thoughts with an analogy from Tolkien&#8217;s Lord of the Rings:</p><div class="pullquote"><p>One KPI to <strong>rule</strong> them all,  </p><p>One KPI to <strong>find</strong> them,</p><p>One KPI to <strong>bring</strong> them all,</p><p>and in the outcome <strong>bind</strong> them.</p></div><p>Let me explain why.</p><p></p><h2><strong>One KPI to Rule Them All</strong></h2><blockquote><p><em>The insight that reshaped how I measure internal AI success</em></p></blockquote><p>For a long time, I searched for the one perfect metric. The one that would finally tie our internal AI products to real business value.</p><p>But the difficulty was this: internal AI products touch multiple business processes and workflows. Some of those generate revenue. Others protect it. Some AI products are meant to avoid costs by preventing additional headcount. Others reduce costs by cutting down on external spend, like consultant fees. And some act as enablers - helping the business grow revenue, indirectly. In some cases, all three apply.</p><p>So I tried cost savings metrics. I tried process efficiency metrics. I tried risk reduction and effort elimination. 
I even tried composite scorecards with weighted proxies.</p><p>And sometimes, those metrics helped tell the story.</p><p>But only on slides, in steering committees, or in follow-up emails.</p><p>Too often, they were fragile signals. Too slow to emerge. Too easy to challenge. Too detached from real behavior.</p><p>That&#8217;s when I realized something both practical and deeply grounding:</p><div class="pullquote"><p>Before you can prove value for the business, you must first prove value for the end users.</p><p>And the best signal is usage.</p></div><p>Used voluntarily. Used consistently. Used by the right people, in the right moments. Because in internal AI work, nothing else matters without adoption.</p><p>That was the shift for me. That was the moment the fog began to lift.</p><p>So when I say adoption is &#8220;<strong>One KPI to rule them all</strong>,&#8221; what I mean is this:</p><p>Adoption is the only early signal that has the power to connect &#8212; or rule over &#8212; every other KPI you&#8217;ll eventually need.</p><ul><li><p>Model performance metrics like precision, recall, and latency</p></li><li><p>Business metrics like time saved or compliance risk reduced</p></li><li><p>Outcome metrics like reduced handling time, increased conversion, or fewer escalations</p></li></ul><p>They &#8212; the KPIs you&#8217;ll later be asked to report on &#8212; are all governed by adoption.</p><p>If no one uses your AI product, none of those metrics matter. They become disconnected ideas in your head, or worse, misleading decorations on a dashboard.</p><p>So yes, <em>them</em> refers to the entire family of <em>downstream</em> KPIs.</p><p>And adoption is what makes them possible. It gives them form. It gives them context.</p><p>It&#8217;s the keystone.</p><p>But adoption isn&#8217;t just a positive signal. It&#8217;s not just the green light that tells you your product is alive. 
Adoption is also your first diagnostic tool.</p><p>It&#8217;s the earliest and most reliable sign that something might be wrong. If adoption drops. If it never starts. If it spikes in one team but not another.</p><p>You don&#8217;t need to speculate. You investigate.</p><p>And that investigation leads to real insights:</p><ul><li><p>Does the user journey feel intuitive?</p></li><li><p>Are people skipping steps or overriding AI decisions?</p></li><li><p>Is the integration too shallow, or too disruptive?</p></li><li><p>Are teams using it in unintended ways that reveal new value?</p></li><li><p>Or is the AI not accurate enough?</p></li></ul><p>In this way, adoption becomes both proof of momentum and a system of early warnings. It helps you see what&#8217;s working. And it helps you find what isn&#8217;t &#8212; before failure becomes political or irreversible.</p><p>I&#8217;ve seen technically strong AI products die quietly because nobody changed how they worked. And I&#8217;ve seen modest solutions take off because adoption came early and the team had the humility to listen, iterate, and respond.</p><p>That&#8217;s when I began to treat adoption rate not as an afterthought, but as the one KPI that governs all the others. It&#8217;s the only one that&#8217;s visible from the beginning.</p><p>And it&#8217;s the only one that unlocks the rest.</p><p><strong>No, adoption won&#8217;t tell you everything</strong>. It won&#8217;t quantify ROI or certify value to the CFO. But it will do something even more essential. It will show you if what you&#8217;ve built is real enough to be used &#8212; and alive enough to be improved.</p><p>That&#8217;s why, in internal AI product work, adoption is not just a signal. 
<em>It&#8217;s the signal that rules them all</em>.</p><p></p><h2><strong>One KPI to Find Them</strong></h2><blockquote><p><em>Why adoption reveals the business value you were looking for all along</em></p></blockquote><p>When we begin building an internal AI product, we always start with purpose. We don&#8217;t build blindly. We listen to the pain points. We work with users. We try to understand the operational logic behind the problem. And at the same time, we aim to connect that user problem to something bigger &#8212; to <strong>business outcomes</strong>.</p><ul><li><p>Will this solution help us reduce cost?</p></li><li><p>Will it eliminate waste or manual rework?</p></li><li><p>Could it avoid future risks?</p></li><li><p>Or will it enable revenue, directly or indirectly, by improving decisions, speed, or scale?</p></li></ul><p>We don&#8217;t ignore these questions. In fact, we often write them into the product brief or the business case.</p><p>But let&#8217;s be honest &#8212; defining the exact metrics and logic to measure them, especially upfront, is hard. Sometimes frustratingly so.</p><p>Because even when we&#8217;re clear on what the AI solution should help achieve, we&#8217;re rarely clear on how that impact will show up in the data.</p><p>And even if we define a KPI with a stakeholder early on, it often turns out to be:</p><ul><li><p>Too far downstream</p></li><li><p>Owned by another team</p></li><li><p>Mixed with dozens of other influencing factors</p></li><li><p>Or worse, tracked in a report that nobody actually trusts</p></li></ul><p>So we make our best guess. We write down the metrics we think will prove value later.</p><p>But more often than we&#8217;d like to admit, those guesses don&#8217;t hold. 
And we realize, three months in, that the KPI we picked either can&#8217;t be measured cleanly or doesn&#8217;t reflect the real outcome we&#8217;re driving.</p><p>This is where adoption changes everything.</p><p>Because once people start using your AI product &#8212; really using it, in live processes, under real conditions &#8212; suddenly the fog begins to lift.</p><p>You see how the product is being used. You learn which teams engage with it naturally and which ones don&#8217;t. You observe where trust builds and where friction still exists.</p><p>And most importantly, you start to see which outcomes are actually being influenced &#8212; and how.</p><p>The business KPIs you couldn&#8217;t quite measure before now start surfacing. Not yet through your own analysis, but through the people using the product every day.</p><ul><li><p>A sales team might say, &#8220;<em>We&#8217;re closing leads faster now.</em>&#8221;</p></li><li><p>A customer service leader might report, &#8220;<em>Escalation rates have dropped.</em>&#8221;</p></li><li><p>A compliance officer might notice, &#8220;<em>We&#8217;re catching issues earlier.</em>&#8221;</p></li></ul><p>These signals don&#8217;t come from the AI team. They come from the business. And that makes them powerful.</p><p>So when I say &#8220;<strong>One KPI to find them</strong>,&#8221; I mean this:</p><p>Adoption helps you find the true business KPIs &#8212; <em>them</em> &#8212; that are affected by your product.</p><p>Not in theory or projection. But through lived usage.</p><p>Because once a product is adopted:</p><ul><li><p>It becomes observable</p></li><li><p>It enters real workflows</p></li><li><p>It creates data you didn&#8217;t have before</p></li><li><p>It starts conversations you couldn&#8217;t have earlier</p></li></ul><p>And that&#8217;s when the right metrics begin to emerge. Not as guesses, but as patterns. 
Not as assumptions, but as evidence.</p><p>It&#8217;s humbling, really.</p><p>Because it reminds us that no matter how sharp our thinking, we don&#8217;t control all the value. Some of it is outside our reach.</p><ul><li><p>It&#8217;s embedded in business metrics we don&#8217;t own.</p></li><li><p>It&#8217;s locked in processes we can&#8217;t fully observe.</p></li><li><p>It&#8217;s shaped by stakeholder behaviors we don&#8217;t manage.</p></li></ul><p>But once our product is being used, those metrics start to show up.</p><p>The business begins to bring them forward. Suddenly, people don&#8217;t just ask you what the product is doing. They start telling you what it&#8217;s changing. That&#8217;s when you know your product is real. </p><p>So yes, adoption doesn&#8217;t just rule the other KPIs. It helps you find them. It gives you access to the only thing that ever reveals business value for internal AI: <strong>the lived behavior of people who trust what you&#8217;ve built.</strong></p><p></p><h2><strong>One KPI to Bring Them All</strong></h2><blockquote><p><em>How adoption becomes the gravitational pull that turns an AI product into a business asset</em></p></blockquote><p>There&#8217;s a moment in the life of a successful internal AI product when you start to notice a shift. It no longer feels like you&#8217;re pushing. It no longer feels like you&#8217;re convincing people to try it, chasing usage reports, or writing follow-up messages just to keep the spark alive. Instead, things begin to pull.</p><ul><li><p>A new team reaches out.</p></li><li><p>A stakeholder from another business unit asks if they can join the next pilot.</p></li><li><p>Someone you&#8217;ve never met references your product in a planning meeting.</p></li></ul><p>What started as a focused product now has momentum. And that momentum did not come from technical excellence alone. 
It came from adoption through relevance.</p><p>So when I say &#8220;<strong>One KPI to bring them all</strong>,&#8221; I still mean the business KPIs we&#8217;ve been trying to measure from the start.</p><ul><li><p>Cost reduction.</p></li><li><p>Cost avoidance.</p></li><li><p>Direct or indirect revenue enablement.</p></li></ul><p><em>Them</em> &#8212; the KPIs we struggled to define at the beginning &#8212; finally start showing signs of life once the product is used.</p><p>But something else happens, too. Adoption does not just bring the metrics into motion. It brings in the people behind the metrics. The business owners, the adjacent teams, the leaders and enablers who start building on top of what you&#8217;ve created.</p><p>It becomes something people talk about. Something that touches other systems, other teams, other goals. Adoption pulls it all together. <strong>It is the gravitational force that begins to draw the organization in.</strong></p><p>And that pull is what activates your KPIs in a way that dashboards never could.</p><p>You start seeing shifts in:</p><ul><li><p>Cycle times</p></li><li><p>Manual workarounds</p></li><li><p>Time-to-resolution</p></li><li><p>Uplift in conversion or sales readiness</p></li><li><p>Effort allocation across roles</p></li><li><p>Forecast accuracy</p></li><li><p>Process completion rates</p></li></ul><p>But now, it is not you reporting these shifts. The business starts noticing them. </p><ul><li><p>Finance might begin modeling cost avoidance based on changes in headcount planning.</p></li><li><p>Operations might share how throughput has increased with no additional staff.</p></li><li><p>Risk or Legal might recognize that an early-warning AI system is now embedded in their compliance checks.</p></li></ul><p>These conversations do not happen when a product is in development. They do not even happen when it is just launched. They happen when adoption is real.</p><h3>Adoption brings something else that most metrics cannot offer. 
</h3><p>Alignment.</p><p>When the product is used, everyone around it starts working differently. Enablement makes sense. Feedback becomes targeted. Governance becomes active, not theoretical.</p><p>Executives stop asking why you built it &#8212; and start asking what more it could do.</p><p>Your AI product stops being an initiative. It starts becoming infrastructure.</p><p><strong>And here is the truth I&#8217;ve learned:</strong></p><p>All the metrics in the world are meaningless until people care. And people do not care until they use it. And once they use it &#8212; when it actually helps them &#8212; they start becoming part of the story. They bring others in. They speak for the product. They make your business case stronger than any model ever could. And they make transparent the outcomes that only they control and have access to.</p><p></p><h2><strong>And in the Outcome Bind Them</strong></h2><blockquote><p><em>How adoption transforms usage into proof, and AI products into trusted outcomes</em></p></blockquote><p>By the time adoption is established &#8212; when people are using your AI product not out of obligation or curiosity, but because it genuinely fits how they work &#8212; something important starts to shift. Not suddenly. Not dramatically. But steadily, and unmistakably.</p><p>The product begins to create more than activity. It begins to create results.</p><p>And yet, those results &#8212; the ones we aimed for in the beginning &#8212; are rarely immediate. They do not appear in clean, self-contained dashboards. They do not arrive in tidy before-and-after comparisons. Instead, they begin to surface in the rhythm of the business. In meetings. In feedback loops. In operational metrics that slowly start to move.</p><p>This is the moment when everything we hoped to measure &#8212; those elusive business KPIs we tried to define at the start &#8212; begins to settle into form. And those KPIs do not just emerge. 
They become bound to the product itself.</p><p>That is what I mean when I say: &#8220;<strong>And in the outcome bind them</strong>&#8221;. Until this point, many of those KPIs felt abstract.</p><ul><li><p>Cost avoidance.</p></li><li><p>Cycle time reduction.</p></li><li><p>Conversion uplift.</p></li><li><p>Revenue growth.</p></li></ul><p>We mentioned them in our business case. We tried to estimate them. But we also knew &#8212; quietly &#8212; that measuring them would be hard. That they lived downstream, in systems we did not control, owned by people we were not sure would pay attention.</p><p>But once adoption takes hold, once the product is used in daily work, something subtle but powerful changes. Those same people begin to see the impact for themselves. Not because we told them to, but because it shows up in their reality.</p><ul><li><p>A team lead starts saying, &#8220;<em>We do not have to double-check these entries anymore.</em>&#8221;</p></li><li><p>A controller notes, &#8220;<em>We are spending less time on reconciliations.</em>&#8221;</p></li><li><p>A compliance owner quietly shares, &#8220;<em>We have reduced our response time without adding staff.</em>&#8221;</p></li></ul><p>These are not claims. They are experiences. And in that moment, the KPIs we sought to measure begin to belong to them. That is the binding.</p><p><strong>Because adoption alone is not the outcome.</strong> Adoption is the start of a pattern &#8212; one that allows us to observe, learn, and begin making connections we could not make before.</p><p>Now, we are no longer speculating. We are seeing relationships between usage and impact. We are finding leading indicators. We are discovering how certain behaviors, when supported by the AI product, correlate with improved business performance.</p><blockquote><p>And crucially &#8212; we are no longer the only ones doing this work. The business starts participating. 
They bring their own data, their own stories, their own versions of the value narrative. And suddenly, we are not measuring in isolation anymore. We are measuring together. This is the moment internal AI products move from experimental to essential. <strong>Not because everything has been quantified. But because the product is no longer defended by the product team &#8212; it is spoken for by the business.</strong></p></blockquote><p>That is when outcomes become stable, when metrics become trusted, when sponsorship becomes continuous.</p><p>And it all begins because adoption has created enough usage to make impact visible. Enough trust to make collaboration possible. Enough real-world relevance to make measurement credible.</p><p><em>So yes, adoption rules the KPIs. It finds them. It brings them into motion. But most importantly, it binds them to outcomes &#8212; to the kinds of results that teams can feel, leaders can report, and businesses can build on.</em></p><p></p><h2><strong>Final Reflection</strong></h2><p>When I look back at the internal AI products that truly made a difference, it was never the technical elegance that convinced the business. Not the architecture. Not the pilot results. It was because we found ways to make the product useful. Genuinely useful. For real people, in real moments of their work.</p><p>That usefulness led to something rare: <strong>engagement</strong>.</p><p>Participation in user acceptance tests went up. Feedback became honest. And ultimately, adoption took hold.</p><p>Adoption is where it all starts. And this beginning is not easy.</p><p>But once you understand that you do not need to focus on any other metric first, it becomes much easier to allocate your precious resources to the right tasks. You stop chasing hypothetical KPIs and start building something real.</p><p>Because without adoption, all other KPIs remain out of reach. They stay theoretical and fragile. 
They stay disconnected from the truth of how people work.</p><div><hr></div><p>This article was all about that one insight: <em>Why adoption rate is the only metric that matters in the beginning. </em></p><p>In the second part of this series, I will go deeper into the how. How to define adoption. How to track it. And how to use it as a compass to guide product decisions, stakeholder alignment, and even long-term funding.</p><p>Hope to see you there.</p><p>JBK &#128330;&#65039;</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#50 - The Illusion of AI Quick Wins (Part 2/2)]]></title><description><![CDATA[How to Avoid Building AI Products That Aren&#8217;t Worth Owning]]></description><link>https://www.jaserbk.com/p/50-the-illusion-of-ai-quick-wins</link><guid isPermaLink="false">https://www.jaserbk.com/p/50-the-illusion-of-ai-quick-wins</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 13 Jul 2025 16:16:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BKmF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p><strong>#beyondAI</strong></p><blockquote><p>In my last article, I shared why so many AI quick wins turn into long-term burdens. Today, let&#8217;s talk about how to avoid that illusion &#8212; and how to ensure we build only what&#8217;s truly worth owning.</p><p>&#9989; Why Most Quick Wins Aren&#8217;t Wins at All</p><p>&#9989; How to Avoid the Illusion</p><p>&#9989; The Service Provider Perspective</p><p>&#9989; Closing Thoughts</p></blockquote><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;0d17b650-d2f1-4d17-95e5-881977f4ee1c&quot;,&quot;caption&quot;:&quot;#beyondAI&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;#49 - The Illusion of AI Quick Wins (Part 1/2)&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:169499282,&quot;name&quot;:&quot;JaserBK&quot;,&quot;bio&quot;:&quot;I think, talk, and write about AI Product Management for Enterprises, with a focus on helping aspiring AI Product Managers.\n\nLet&#8217;s master the art and science of AI Product Management together 
&#128330;&#65039;&#127757;&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3af0ce6-7255-4034-88b9-5a1192f49e57_3059x4589.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-07-06T09:51:50.731Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Ykkf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.jaserbk.com/p/the-illusion-of-ai-quick-wins-part&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:167634476,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;AI Product Management: A World Beyond AI&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!A2W_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ddb7ccd-dfe2-4bc4-b814-c504e372f16f_867x867.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive the latest AIPM articles.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2><strong>Why Most &#8220;Quick Wins&#8221; Aren&#8217;t Wins at All</strong></h2><p>It&#8217;s easy to understand why the idea of AI &#8220;quick wins&#8221; has captured the imagination of so many organizations, especially in an era where every company feels the pressure to demonstrate technological innovation and where leadership eagerly seeks visible proof points to justify investments in artificial intelligence.</p><p>The notion that a small team, armed with modern APIs and a few clever prompts, can deliver something that appears intelligent, valuable, and even transformative within a matter of days or weeks is not merely appealing &#8212; it&#8217;s intoxicating, because it suggests that the barriers to impact have all but vanished.</p><p>Yet the sobering reality, which emerges time and again in the aftermath of these rapid development sprints, is that most of these so-called quick wins are not wins at all, at least not when measured against the full economic equation required to sustain an AI product in a live business environment.</p><p>For while it is true that the cost and speed of initial development have fallen dramatically, this reduction has done little to eliminate the far more stubborn costs that arise once a solution is expected to perform reliably, safely, and in alignment with business objectives <strong>over time</strong>.</p><p>Indeed, it is precisely because prototypes can be built so quickly and with such dazzling technical fluency that organizations often rush ahead, mistaking technical feasibility for product viability, and 
underestimating the layers of complexity that lie between a working demo and a sustainable, value-generating product.</p><p>One of the most insidious illusions of AI quick wins is that the moment a model produces correct outputs in controlled scenarios, it is ready for real-world deployment, when in truth, the journey from demo to product is where the true costs &#8212; and risks &#8212; begin to accumulate.</p><p><strong>Consider the governance overhead alone:</strong> the legal teams who must review every possible output to ensure regulatory compliance, the security audits required to protect sensitive data, the need for explainability if the AI&#8217;s decisions affect customers or employees, and the documentation that must be created to satisfy auditors and internal stakeholders alike.</p><p>Layered atop these governance concerns is the relentless evolution of business processes themselves, because no matter how elegant an AI model may be, the realities of enterprise life dictate that business rules change, systems are upgraded, organizational priorities shift, and what was true yesterday might be obsolete tomorrow &#8212; all of which demand ongoing adjustments, retraining, and validation work that quietly consume time, budget, and human attention.</p><p><strong>Then there is the human side of AI, which is so often overlooked in the rush to build:</strong> the time and effort required to train users, to earn their trust, to manage their expectations, and to support them when the AI inevitably produces an output that confuses, disappoints, or outright fails to meet the nuanced needs of their real-world tasks.</p><p>Even the most straightforward tools, once released into production, attract a steady stream of enhancement requests, support tickets, edge cases, and new use scenarios that were never foreseen during the initial build, each demanding attention, prioritization, and &#8212; ultimately &#8212; additional cost.</p><p>And so what begins as a deceptively 
affordable technical exercise becomes, over time, an ongoing drain on organizational resources, often far exceeding any value the initial prototype seemed poised to deliver.</p><p><em>This is the essence of why most quick wins are not wins at all:</em> because they are evaluated only through the narrow lens of build effort, without a sober analysis of the cost of ownership and the total economic impact required to sustain them as real products.</p><p>An AI solution that costs &#8364;10,000 to build but &#8364;150,000 annually to maintain, while delivering only &#8364;50,000 of value, is not a win &#8212; it is a liability masquerading as innovation.</p><p>The true discipline of AI Product Management, therefore, lies not in celebrating how quickly something can be built, but in developing the discernment to know which solutions are worth owning, and in having the courage to say no to ideas that, while technically feasible, cannot deliver sustainable, economic value once all costs are accounted for.</p><p>Until we embed this discipline into our processes, we risk chasing the illusion of quick wins, filling our roadmaps with projects that impress in demos but quietly erode resources, distract teams, and ultimately fail to justify their existence in the harsh light of economic reality.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BKmF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BKmF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png 424w, 
https://substackcdn.com/image/fetch/$s_!BKmF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!BKmF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!BKmF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BKmF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png" width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1584314,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/167635124?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!BKmF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!BKmF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!BKmF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!BKmF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15d0f019-3a9b-47f1-9bf5-ec472cc7365d_1200x1200.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h2><strong>How to Avoid the Illusion</strong></h2><p>It&#8217;s one thing to recognize the illusion of AI quick wins; it&#8217;s quite another to develop the discipline and the practical methods required to avoid falling into that trap, especially in organizations eager to proclaim their leadership in AI innovation and where the pressure to deliver visible results can sometimes override sober assessment of long-term sustainability.</p><p>Yet avoiding this illusion is not merely a matter of caution &#8212; it is a fundamental responsibility for anyone tasked with building AI products, because the costs of getting it wrong are not confined to technical rework but extend into wasted resources, eroded trust among stakeholders, and the opportunity cost of having pursued initiatives that were never destined to generate meaningful returns.</p><p>So how, then, does one navigate the seductive pull of building quick, impressive prototypes while protecting the organization from the hidden liabilities of ownership?</p><h4><strong>Separate the Cost of Building from the Cost of Owning</strong></h4><p>The first and perhaps most crucial step is to consciously and systematically separate the cost of building from the cost of owning, treating them as two entirely distinct phases of the product&#8217;s economic life cycle.</p><p>It is no longer sufficient to ask only how many weeks it will take a developer to connect to an LLM API or to produce a functioning prototype; we must equally ask how many people will be required to maintain it, how frequently the model or prompts will need to be updated, and how many systems it must 
integrate with &#8212; each of which contributes directly to the total cost of ownership.</p><p>This separation forces product teams and stakeholders alike to look beyond the glamour of the initial demo and to confront the practical realities of sustaining a product once it enters the complex, ever-changing environment of real-world operations.</p><h4><strong>Ask the Right Questions Early</strong></h4><p>Avoiding the illusion begins with asking better questions &#8212; questions designed not merely to confirm technical feasibility but to expose the true shape of the ownership burden that will inevitably follow any AI solution.</p><p>Questions such as:</p><ul><li><p>How dynamic is the problem domain? Do rules, policies, or user needs change frequently?</p></li><li><p>How many users will this serve, and how diverse are their roles and expectations?</p></li><li><p>Will this solution need to integrate into other systems, and if so, how tightly?</p></li><li><p>Does the solution handle personal or regulated data, triggering privacy or compliance requirements?</p></li><li><p>How critical is this solution to business operations, and what is the risk if it fails?</p></li><li><p>Are there existing teams prepared to own and support this product long-term?</p></li></ul><p>Each of these questions serves as a signal for potential costs that may not be visible in the initial build estimate but will certainly materialize once the product is live.</p><h4><strong>Keep ROI Front and Center</strong></h4><p>Yet even as we strive to separate costs and estimate them with as much realism as possible, there remains one final, indispensable lens through which every AI initiative must ultimately be viewed: the lens of return on investment.</p><p>For no matter how elegantly we may build, or how rigorously we may estimate the costs of owning an AI product, these efforts are meaningful only insofar as they allow us to judge whether the value created by the product will, in the end, exceed the 
total costs required to build and sustain it.</p><p>Having a clear ROI mindset is not a constraint but a strategic compass, one that empowers product leaders and organizations to make deliberate choices about which AI ambitions are truly worth pursuing.</p><p>When we force ourselves to ask &#8212; even at the earliest stages &#8212; how much economic benefit a solution might realistically deliver, and when we set that potential benefit against both the known costs of building and the less visible but equally real costs of ownership, we transform decision-making from an exercise in technological enthusiasm into a discipline of informed, economically grounded choices.</p><p>It is through this lens of ROI that we gain the courage and clarity to prioritize not merely what we can build, but what is genuinely worth owning &#8212; and it is this discipline that will distinguish the fleeting illusions of AI quick wins from the sustainable successes that endure and create true business value.</p><h4><strong>Align Technical and Product Perspectives</strong></h4><p>One of the most significant risks in AI development is the disconnect between technical teams, who are often eager to demonstrate what is possible, and product leaders, who are responsible for ensuring that solutions translate into sustainable value.</p><p>To avoid the illusion of quick wins, these two groups must collaborate closely from the outset, jointly evaluating both the technical feasibility and the economic sustainability of any proposed initiative.</p><p>Technical teams must be transparent about the assumptions and hidden complexities in their solutions, while product leaders must challenge optimistic timelines and push for clarity around governance, integration, and support costs.</p><p>It is in this partnership &#8212; between those who build and those who own &#8212; that the best defenses against the illusion are forged.</p><h4><strong>The Service Provider Perspective: Shifting, Not Erasing, Ownership 
Costs</strong></h4><p>As we grapple with the challenge of distinguishing between the cost of building and the cost of owning AI products, it&#8217;s worth pausing to consider a business model that seems, at first glance, to escape the burden of ownership altogether: the model of the service provider.</p><p>There exists a substantial segment of the technology landscape composed of companies whose business is not to own products themselves, but to build solutions on behalf of others, delivering precisely what has been specified, collecting their fees, and moving on to the next project.</p><p>For these service providers, the economic calculus appears refreshingly simple: scope is defined, requirements are gathered, code is written, the solution is delivered, and payment is rendered. In this model, the cost of ownership &#8212; with all its complexities and potential liabilities &#8212; seemingly vanishes from the service provider&#8217;s concerns, because their financial and operational obligations end the moment the solution is handed over.</p><p>Yet this absence of ownership costs in the service provider&#8217;s books does not mean those costs disappear from the world. They are merely shifted &#8212; often in full &#8212; onto the shoulders of the client who commissioned the work.</p><p>For the buying company, the reality remains unchanged: every AI solution brought into production becomes part of a living ecosystem that must be integrated, supported, governed, and maintained over time. The same challenges apply, whether the code was written by internal developers or by an external partner.</p><p>Data pipelines still require monitoring and updates. Models still drift and demand retraining. Compliance audits still loom, demanding documentation and explainability. Users still generate tickets, seek enhancements, and raise concerns when outputs fail to align with their expectations. 
And when the inevitable changes in business processes arrive, someone must be there to adjust the solution so that it continues to deliver value rather than becoming a liability.</p><p>In many ways, the illusion of quick wins can be even more seductive in service-driven contexts, because clients may mistakenly believe that by outsourcing the build, they have also outsourced the burden of ownership.</p><p>But the truth is unavoidable: the cost of ownership can be transferred, but it cannot be eliminated.</p><p>It is a debt that must always be paid &#8212; either by the builder or by the buyer &#8212; and wise organizations recognize that even when they choose to partner with service providers, they must plan and budget not only for the cost of building, but for the far greater and longer-lasting cost of owning the solution once it comes home.</p><p>Thus, whether acting as a product company, a service provider, or a client engaging external partners, the fundamental discipline remains the same: to distinguish the immediate thrill of building from the ongoing responsibility of ownership, and to make choices that ensure the solutions we deploy are not merely technically impressive, but economically sustainable.</p><h4><strong>A Discipline, Not a Constraint</strong></h4><p>Avoiding the illusion of AI quick wins is not about stifling innovation or rejecting the incredible possibilities that modern AI unlocks; rather, it is about practicing a discipline that ensures the solutions we choose to build are not only technologically possible but economically sustainable.</p><p>It is about remembering that in the realm of AI products, the real measure of success is not how quickly something can be built, but whether it can endure, scale, and deliver value over time without consuming more resources than it is worth.</p><p>And it is about recognizing that in the end, the true quick win is not the speed of development, but the wisdom to build only what is worth 
owning.</p><h2>Closing Thoughts</h2><p>In the race to stake a claim in the future of artificial intelligence, it is understandable &#8212; and perhaps inevitable &#8212; that we find ourselves captivated by the apparent ease and speed with which AI solutions can now be built, fueled by the astonishing capabilities of modern language models and the growing arsenal of accessible developer tools that promise to transform even modest ideas into impressive technical demonstrations.</p><p>Yet amidst this newfound agility, there lies a fundamental truth that every AI product leader must carry forward: <strong>an AI solution that can be built quickly is not necessarily an AI product worth owning.</strong></p><p>For while it is undeniably exhilarating to create something that works, that speaks fluently, that classifies data or forecasts outcomes with mathematical precision, the real test of success lies not in the initial build, but in the far less visible domain of sustained operation, governance, and economic viability.</p><p>It lies in the long, patient work of ensuring that what we deploy today remains relevant, accurate, and trustworthy tomorrow &#8212; that it continues to deliver value without quietly accumulating costs that outweigh any benefits it was meant to produce.</p><p>This is the quiet discipline at the heart of AI Product Management: the insistence on looking beyond the glittering surface of technical feasibility to examine the deeper, more demanding questions of adoption, integration, governance, and total cost of ownership.</p><p>It is the willingness to say no to ideas that cannot justify their presence in a real-world business environment, no matter how technically brilliant they may appear in a controlled demo.</p><p>It is the courage to protect an organization&#8217;s focus and resources, to ensure that the AI products we choose to build are those that can not only be delivered swiftly but sustained wisely.</p><p>Because ultimately, the role of AI 
product professionals is not merely to prove what is possible, but to guide organizations toward building only those solutions that stand the test of time, that create more value than they consume, and that contribute meaningfully to the strategic objectives we exist to serve.</p><p>Now, my question is: </p><p><strong>How many real AI Quick Wins have you ever seen? </strong></p><p>JBK &#128330;&#65039;</p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[#49 - The Illusion of AI Quick Wins (Part 1/2)]]></title><description><![CDATA[Cheap to build. 
Expensive to own.]]></description><link>https://www.jaserbk.com/p/the-illusion-of-ai-quick-wins-part</link><guid isPermaLink="false">https://www.jaserbk.com/p/the-illusion-of-ai-quick-wins-part</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 06 Jul 2025 09:51:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Ykkf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>#beyondAI</strong></p><p>Nowadays, every developer feels empowered to build their own AI solution.</p><p>At least a certain type of AI solution, the kind that doesn&#8217;t require training a model from scratch.</p><p>With the rise of powerful LLM providers, building AI has become remarkably accessible. You browse the provider&#8217;s docs, write a few lines of code, and within hours you&#8217;ve built something that looks like magic.</p><p>The cost of building AI has never been lower.</p><div class="pullquote"><p>But just because an AI solution works technically doesn&#8217;t mean it&#8217;s an AI product.</p></div><p>An AI product is an economic proposition. It&#8217;s not just code that runs, it&#8217;s a solution that solves a real problem for a specific user group, earns adoption, and generates enough value to justify its costs. Ultimately, it&#8217;s about building something that makes more money than it costs to sustain.</p><div><hr></div><p>And this is where many builders get blindsided.</p><p>People forget that real AI products carry two types of costs:</p><ul><li><p>The cost to <strong>build</strong> them.</p></li><li><p>And the cost to <strong>own</strong> them.</p></li></ul><p>And once you understand this, you realize that the majority of AI use cases we see today would never make it past the idea stage if we honestly accounted for the true costs of ownership.</p><p>That&#8217;s what my article today is about: </p><div class="pullquote"><p><em>The illusion of AI quick wins. Part 1 - The Problem.</em></p></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ykkf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ykkf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!Ykkf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png 848w, 
https://substackcdn.com/image/fetch/$s_!Ykkf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!Ykkf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ykkf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png" width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1099285,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/167634476?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ykkf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png 424w, 
https://substackcdn.com/image/fetch/$s_!Ykkf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!Ykkf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!Ykkf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6faed6c8-b6b9-419f-b2ae-e59d9b0306a0_1200x1200.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h2><strong>The Rise of &#8220;Cheap to Build&#8221; AI</strong></h2><p>We&#8217;re living through a remarkable moment in technology history, one in which the barrier to building AI solutions has dropped so dramatically that it&#8217;s now possible for nearly any developer, regardless of their background in machine learning, to assemble something that looks intelligent in a matter of hours.</p><p>The reason for this sudden accessibility lies, of course, in the emergence of large language models and the thriving ecosystem of APIs and developer tools that surround them, making it feasible to build conversational agents, text analyzers, summarization tools, or countless other applications without ever training a single model from scratch.</p><p>Where once deploying AI required deep knowledge of algorithms, the painstaking preparation of training data, and expensive computational resources, it now often requires little more than reading API documentation, crafting a few prompts, and wiring the output into an existing user interface or backend service.</p><p>This shift has created an intoxicating sense of speed and empowerment among developers, because for the first time, the dream of embedding intelligence into digital products feels tangible, immediate, and relatively inexpensive to prototype.</p><p>Yet this very ease has become a double-edged sword, because while the act of building has become democratized and dramatically cheaper, it can foster a dangerous illusion: <em>that the low cost and speed of initial development somehow translate into a sustainable, low-cost product over the long term.</em></p><p>One can build a working demo in a hackathon, showcase it internally, and impress stakeholders with the apparent sophistication of natural language understanding or smart decision-making &#8212; and in doing so, create the impression that the 
AI problem is &#8220;solved&#8221; simply because the technical proof of concept runs without errors.</p><p>However, the reality that lurks beneath the surface is that the cost of writing code and hooking into an AI API is often the smallest fraction of what it takes to turn an AI solution into a real product &#8212; a product that not only functions reliably but integrates into workflows, complies with governance requirements, adapts to shifting business needs, and continues to deliver economic value year after year.</p><p>The temptation to believe in AI &#8220;quick wins&#8221; stems from this new reality: that building a prototype has become so deceptively easy, it masks the far greater costs and complexities involved in truly owning and operating an AI product over time.</p><p>And unless we consciously separate the cost of building from the cost of owning, we risk filling our backlogs and our organizations with solutions that look brilliant on the surface but quietly drain resources, erode trust, and fail to deliver a sustainable return on investment.</p><h2><strong>Why an AI Solution &#8800; an AI Product</strong></h2><p>It&#8217;s a subtle but critical distinction &#8212; one that often gets overlooked in the current rush to showcase technological capability &#8212; that an AI solution, impressive though it may be from a purely technical standpoint, is not by default an AI product, because a product is defined not merely by its existence but by its ability to consistently solve a problem for real users, in a way that is sustainable, adopted, and ultimately economically viable.</p><div><hr></div><p><strong>Related Article</strong></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;c73631e7-e869-49f6-89e8-2b5559c89406&quot;,&quot;caption&quot;:&quot;#beyondAI&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Path to AI 
Product&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:169499282,&quot;name&quot;:&quot;JaserBK&quot;,&quot;bio&quot;:&quot;I think, talk, and write about AI Product Management for Enterprises, with a focus on helping aspiring AI Product Managers.\n\nLet&#8217;s master the art and science of AI Product Management together &#128330;&#65039;&#127757;&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3af0ce6-7255-4034-88b9-5a1192f49e57_3059x4589.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-07-28T12:30:53.825Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Tln7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a5df01-d314-4db6-97c9-2b2844daaa1d_1200x1200.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.jaserbk.com/p/the-path-to-ai-product&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:147089662,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:5,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;AI Product Management: A World Beyond AI&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!A2W_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ddb7ccd-dfe2-4bc4-b814-c504e372f16f_867x867.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>While an AI solution might be a clever script, a working demo, or a functional prototype that answers questions, generates text, or classifies data with uncanny accuracy, it remains fundamentally an internal artefact until it crosses the far more challenging threshold of delivering value to a defined group of users 
who choose, repeatedly and willingly, to integrate it into their daily tasks.</p><p>An AI product, in contrast, exists within an ecosystem of human expectations, business objectives, and operational realities; it must earn trust, fit seamlessly into workflows, comply with often stringent governance requirements, and deliver an experience robust enough that users are not only willing to try it once, but to depend on it over time, perhaps even to the point of paying for it &#8212; either directly or through its contribution to broader business performance.</p><p>This transition from &#8220;solution&#8221; to &#8220;product&#8221; represents the true crucible of AI product management, because it is here that the technical marvel of AI collides with the stubborn complexities of human behavior, regulatory constraints, and the shifting sands of organizational priorities.</p><p>Too often, teams celebrate the technical feasibility of an AI initiative as though that alone were sufficient proof of its value, pointing to a working chatbot, an elegant classification model, or an automated report as evidence that the problem has been solved &#8212; when in reality, these artifacts are little more than prototypes until they can prove that users actually want to adopt them, that they can survive in production environments, and that the economics of ongoing operation make sense when weighed against the benefits they bring.</p><p>I have seen, time and again, solutions that worked beautifully in a controlled testing environment but fell apart in the real world, not because the underlying AI was flawed, but because the product as a whole lacked the infrastructure, the support mechanisms, and the organizational alignment necessary to transform a clever idea into a sustainable asset.</p><p>Consider, for instance, a chatbot designed to answer policy questions within a large enterprise. 
While the technical implementation might be straightforward &#8212; wiring an LLM API to a document database, perhaps &#8212; the true challenge arises when policies change, when users begin asking nuanced or politically sensitive questions, or when legal and compliance teams intervene to scrutinize every possible hallucination or misinterpretation that the model might produce.</p><p>Or imagine a forecasting model built to predict customer churn, which dazzles stakeholders with its precision during a pilot phase, only to collapse under the weight of integrating into live systems, dealing with data refreshes, and explaining predictions in terms that business users can trust and act upon.</p><p>The difference between an AI solution and an AI product, therefore, is not just a matter of technical sophistication, but a question of economic sustainability and operational maturity &#8212; the capacity to deliver value continuously, safely, and in a manner that justifies both the initial investment and the ongoing cost of ownership.</p><p>This is why, in the discipline of AI Product Management, we must always look beyond the seductive glow of working demos, and insist on asking the harder questions: </p><ul><li><p>Who will use this? </p></li><li><p>Will they truly adopt it? </p></li><li><p>How often will it need to change? </p></li><li><p>What governance or compliance hurdles must it clear? 
</p></li><li><p>And above all &#8212; will it generate more value than it costs to build and maintain?</p></li></ul><p>Because in the end, it is not the elegance of our code, nor the cleverness of our models, that defines success in AI, but our ability to build products that persist, scale, and deliver a return on the resources invested in them &#8212; products that serve real needs, in the real world.</p><div><hr></div><p><strong>Related article</strong></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;7f67ec77-5915-4d33-9caf-b884fe60de33&quot;,&quot;caption&quot;:&quot;In AI products, it&#8217;s dangerously easy to pass every technical test &#8212; and still fail the user.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why AI Evaluations Have Never Been Optional for AI Product Managers&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:169499282,&quot;name&quot;:&quot;JaserBK&quot;,&quot;bio&quot;:&quot;I think, talk, and write about AI Product Management for Enterprises, with a focus on helping aspiring AI Product Managers.\n\nLet&#8217;s master the art and science of AI Product Management together 
&#128330;&#65039;&#127757;&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3af0ce6-7255-4034-88b9-5a1192f49e57_3059x4589.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-04-27T11:18:19.436Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!DsaM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.jaserbk.com/p/why-ai-evaluations-have-never-been&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:162249308,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:1,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;AI Product Management: A World Beyond AI&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!A2W_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ddb7ccd-dfe2-4bc4-b814-c504e372f16f_867x867.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><h2>The Two Costs of Real AI Products</h2><p>When we speak of AI products, it&#8217;s tempting to focus almost entirely on the exhilarating act of building &#8212; on the prototypes, the architecture diagrams, the proof-of-concepts that light up demo days and reassure stakeholders that progress is being made.</p><p>And yet, the true story of an AI product is told not merely in the cost and time it takes to build it, but in the often invisible, relentless costs that follow long after the first lines of code are written, costs that determine whether the product becomes a sustainable asset or an expensive curiosity that fades from 
memory once the initial excitement has worn off.</p><div class="pullquote"><p>Every real AI product carries with it two fundamental categories of cost: </p><p>the cost of building, and equally &#8212; if not more importantly &#8212; the cost of owning.</p></div><h4><strong>The Cost of Building</strong></h4><p>The cost of building encompasses all the one-time efforts that go into transforming an idea into a functioning prototype or a first release.</p><p>It includes the hours spent by developers exploring data sources, selecting algorithms, testing prompts, designing user interfaces, and navigating the labyrinth of system integrations necessary to embed AI into existing business processes.</p><p>It covers the technical work of setting up pipelines, the collaboration sessions between product managers and data scientists to frame the problem correctly, and the sometimes intense periods of iteration required to move from a promising proof-of-concept to a solution robust enough to be demoed to stakeholders.</p><p>In many organizations today, thanks to the availability of powerful pre-trained models and API-driven services, this initial cost of building has plummeted, allowing teams to produce impressive prototypes at a fraction of what it would have cost only a few years ago.</p><p>And this, paradoxically, is precisely where the illusion of quick wins begins &#8212; because it creates the impression that the bulk of the work is done once a model produces credible outputs or an LLM responds with human-like fluency.</p><p>The truth, however, is that while building has become easier, it remains only the first, often smallest, part of the journey.</p><h4><strong>The Cost of Owning</strong></h4><p>It is in the cost of owning that the real weight of AI product development reveals itself &#8212; the weight that so often remains hidden during the euphoric days of building, only to emerge as an increasingly heavy burden in the months and years that follow.</p><p>Owning an AI 
product means maintaining not just the technical components &#8212; the models, the code, the integrations &#8212; but the entire ecosystem required to keep the product relevant, accurate, and safe in the face of continual change.</p><p>It means monitoring model performance to detect drift, updating prompts as business rules evolve, retraining models when new data becomes available, and ensuring that the AI&#8217;s outputs remain consistent with shifting regulatory requirements and legal standards.</p><p>It involves integrating the AI into production systems in a way that remains resilient even as upstream or downstream systems change, and preparing for the reality that what works perfectly in a lab environment may encounter unexpected edge cases or operational challenges in real-world conditions.</p><p>The cost of owning also includes the human side of AI: supporting users as they learn to trust and adopt new systems, providing documentation and training materials, handling requests for enhancements or bug fixes, and dealing with the inevitable questions and complaints that arise when AI makes errors or delivers results that users don&#8217;t fully understand.</p><p>Moreover, ownership carries with it the burden of governance &#8212; the processes of security reviews, legal assessments, and risk management, all of which are non-negotiable in enterprise environments, particularly when AI is involved in decisions that might affect customers, employees, or regulated business activities.</p><p>These costs of ownership are neither optional nor trivial. 
They are the ongoing price we pay for transforming clever prototypes into real products &#8212; products that not only work once but keep working, safely and reliably, over time.</p><p>This is why the notion of AI quick wins can be so dangerously seductive: because it blinds us to the reality that while the cost of building has indeed fallen, the cost of ownership has remained stubbornly high, and in many cases, has even increased as AI systems become more complex, regulated, and deeply integrated into the heart of business operations.</p><p>Until we account for both sides of the ledger &#8212; the cost of building and the cost of owning &#8212; we cannot truly judge whether an AI initiative is a quick win or a long-term liability in disguise.</p><h2><strong>Examples of the Ownership Trap</strong></h2><p>To truly appreciate why so many AI solutions, though cheap to build, become expensive to own, we need only look at the real-world examples that emerge time and again in enterprises attempting to harness the promise of artificial intelligence.</p><p>These are not failures of technology per se, for the algorithms often perform precisely as designed; rather, they are cautionary tales about what happens when the seductive ease of building blinds us to the relentless realities of ownership.</p><h4><strong>Example 1: The Chatbot That Kept Growing</strong></h4><p>Consider the seemingly innocuous decision to deploy a chatbot designed to help employees navigate internal policies, an initiative that, on paper, appeared to be a perfect quick win.</p><p>The technical work was modest: a few calls to an LLM API, some prompt engineering to ensure the bot referenced the correct documents, and a lightweight web interface for employees to submit questions.</p><p>Within weeks, the prototype was working well enough to be demoed to leadership, and its creators rightly felt a surge of pride &#8212; for here was an AI solution that could answer policy questions quickly and reduce the 
load on human support teams.</p><p>Yet, as soon as the chatbot went live, a different reality unfolded.</p><p>Employees, delighted by the initial utility, began asking increasingly complex questions that blended policy interpretation with subtle organizational politics &#8212; queries the model was never designed to handle and which introduced significant risks if answered incorrectly.</p><p>Meanwhile, the legal department intervened, demanding rigorous controls to ensure no confidential or outdated information was served, triggering a new wave of compliance reviews, prompt adjustments, and the need for an auditable log of every interaction.</p><p>Worse still, business units outside the original scope began requesting versions of the chatbot tailored to their own specialized policies, fragmenting the development effort and multiplying the maintenance burden.</p><p>What began as a small, low-cost experiment had now evolved into an ongoing product with legal risks, governance overhead, and a growing queue of change requests &#8212; a perfect example of how low initial build costs can mask the true cost of ownership.</p><h4><strong>Example 2: The Forecasting Model That Couldn&#8217;t Survive the Real World</strong></h4><p>Another example arises from the widespread enthusiasm for predictive modeling, particularly models designed to forecast critical business outcomes such as customer churn.</p><p>In one enterprise, a team built a sophisticated churn prediction model using historical customer data, leveraging advanced machine learning techniques that achieved impressive accuracy during testing.</p><p>The prototype dazzled stakeholders, who were eager to deploy it as a tool for proactive retention strategies.</p><p>However, as soon as the model was moved toward production, its creators discovered that the very data pipelines feeding it were prone to frequent schema changes, driven by evolving business definitions and new marketing initiatives.</p><p>Each time upstream 
systems changed, the model broke, requiring urgent intervention from data engineers and data scientists to re-map features and re-run validations.</p><p>Moreover, business users, once enthusiastic, began demanding clear explanations for why certain customers were flagged as high churn risks &#8212; explanations the model was ill-prepared to provide, especially under tight timelines.</p><p>What had seemed like a technical triumph quickly transformed into a fragile solution requiring constant care, communication, and firefighting &#8212; its ownership costs far exceeding the initial estimates.</p><h4><strong>Example 3: The One-Team Tool That Became Everyone&#8217;s Problem</strong></h4><p>A final example comes from a small tool built by a team to automate the categorization of customer feedback into topics for analysis.</p><p>Originally conceived as a simple internal solution, the AI model used text classification to tag feedback into a handful of business categories, helping one analytics team speed up their reporting.</p><p>Initially, the build was straightforward: the team fine-tuned an existing model, connected it to their feedback database, and produced a simple dashboard.</p><p>But success quickly brought attention.</p><p>Other departments, seeing the tool&#8217;s usefulness, requested their own categories, more languages, integration into enterprise reporting systems, and compliance reviews for customer data privacy.</p><p>Each request seemed small on its own &#8212; a new tag here, another language there &#8212; but together they transformed a low-maintenance script into an enterprise-grade product requiring dedicated ownership, funding, and continuous upgrades.</p><p>What started as a clever side project became a sprawling responsibility nobody had planned to sustain, draining resources that could have been focused on higher-value initiatives.</p><h4><strong>The Pattern Across All Examples</strong></h4><p>In each of these stories, the initial build was fast, 
inexpensive, and technologically feasible.</p><p>But the unseen costs &#8212; governance, integration complexities, user support, evolving requirements, and compliance obligations &#8212; turned these &#8220;quick wins&#8221; into enduring commitments, often without delivering proportional economic value.</p><p>This is the essence of the ownership trap: the seductive belief that because we can build AI solutions quickly and cheaply, they will naturally become sustainable products &#8212; when in reality, ownership costs often dwarf the initial effort and can transform even the most promising initiatives into long-term liabilities.</p><p>Understanding this trap is not simply a technical concern but a core responsibility of AI Product Management, because only by acknowledging and planning for the full cost of ownership can we ensure that the solutions we build become products that survive, scale, and create lasting value.</p><div><hr></div><blockquote><p>These examples reveal the hidden dangers behind so-called AI quick wins. But if building has become cheap, and owning remains expensive, how can we avoid falling into the same trap? That&#8217;s what I&#8217;ll explore in the next article.</p><p><em>The illusion of AI quick wins. Part 2 - The Solution.</em></p></blockquote><p></p><p>JBK &#128330;&#65039;</p><p></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new articles.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#48 - The Best AI Products Expect Errors]]></title><description><![CDATA[Designing AI Products Ready for Mistakes]]></description><link>https://www.jaserbk.com/p/the-best-ai-products-expect-errors</link><guid isPermaLink="false">https://www.jaserbk.com/p/the-best-ai-products-expect-errors</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 29 Jun 2025 12:30:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!YKw6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>#beyondAI</strong></p><p>Some years ago, I would never have expected that almost everyone would understand what I mean when I talk about the cost of wrong AI.</p><p>These days, nearly everyone has paid that cost, at least with their nerves, while interacting with an AI. We&#8217;ve all faced frustrating moments with the most common AI tools: chatbots that give nonsense answers, recommendation engines suggesting irrelevant content, or voice assistants misunderstanding simple commands.</p><p>But there&#8217;s a cost that goes far beyond personal annoyance, frustration, or headaches. This deeper cost is felt by businesses that integrate AI into their processes and workflows.</p><p>With AI products, there&#8217;s a critical dimension that sits above all the usual measures of product success: the quality of the AI&#8217;s output.</p><p>An AI product might address a user&#8217;s pain point beautifully. It might have a sleek, intuitive interface and a high-performing AI model. But ultimately, the product&#8217;s value stands or falls on how reliable, accurate, and appropriate the AI&#8217;s output is. And no matter how well the model performs, it is never perfect.</p><p>Whether you&#8217;re working with predictive models or generative systems, the AI model is the beating heart of your solution. 
Its outputs define the product&#8217;s quality in ways that are far more volatile and impactful than most traditional software features.</p><p>This is why AI product success hinges first and foremost on the quality of the outputs your model generates, assuming of course that the problem you&#8217;re solving is genuinely valuable to a specific user group.</p><p>Long before UI polish, feature richness, or clever pricing strategies, the key question remains:</p><p><em>Can users trust what the AI produces?</em></p><p>Because when AI goes wrong, the cost to the business can quickly exceed all the times the AI was right and beneficial.</p><p>That&#8217;s why every AI Product Manager needs to learn how to measure, monitor, and improve AI output quality.</p><p><strong>This is what today&#8217;s article is about: the cost of wrong AI.</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YKw6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YKw6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!YKw6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!YKw6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png 1272w, 
https://substackcdn.com/image/fetch/$s_!YKw6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YKw6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png" width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2014588,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/167054377?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YKw6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!YKw6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png 848w, 
https://substackcdn.com/image/fetch/$s_!YKw6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!YKw6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F795b1539-e1bd-4ed6-bbf2-617396a21259_1200x1200.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p></p><h3>All I Know: Not All AI Is Measured Alike &#8212; and That&#8217;s Enough for Now</h3><p>Not all AI is created equal, and 
neither are its outputs.</p><p>A predictive model might forecast a sales figure, classify a customer&#8217;s sentiment, or flag a suspicious transaction. Its output is usually structured, numeric, or categorical&#8212;something you can measure directly against the truth. Metrics like accuracy, precision, recall, F1 scores, or ROC curves are well-established and relatively straightforward to track.</p><p>Generative AI, however, operates in a completely different arena. Its outputs are creative, open-ended, and often subjective. A large language model might draft marketing copy, summarize a report, or generate code. An image model might produce new artwork or product mockups. In these cases, the &#8220;correctness&#8221; of the output isn&#8217;t always a simple yes-or-no answer. Instead, it sits on a spectrum. The quality of these outputs can depend on style, tone, factual accuracy, coherence, relevance, and even subtle nuances like empathy or humor.</p><p>Because of these differences, the way we assess output performance varies dramatically across AI types:</p><ul><li><p>For predictive AI, we measure how close the output is to a known ground truth.</p></li><li><p>For generative AI, we often have to define what &#8220;<em>good</em>&#8221; looks like for our specific use case, and then find ways to evaluate it&#8212;whether through human assessment, automated checks, or user feedback signals.</p></li></ul><p>As AI Product Managers, we need to be fluent in these differences. And by fluency, I mean first understanding what type of AI the solution requires, and second, knowing how its performance can be measured.</p><p><strong>Let me also clarify one important point.</strong> Even though I&#8217;ve been in the AI product management field for over ten years, I haven&#8217;t worked with every AI type out there. I&#8217;m not fluent in measuring the performance of all possible AI solutions. 
My experience is mainly in building classical prediction models that generate insights, such as churn prediction and sales forecasting, and in natural language processing, including large language models, which fall under NLP.</p><p>But the fact that I&#8217;m aware of the nuances and differences among AI types, and that each requires its own methodology for evaluating performance, helps me make informed decisions about where to focus my next chapters of learning. It also prepares me for tackling new AI product challenges that might require different solution types in the future.</p><p><strong>That&#8217;s the purpose of this article:</strong> to give you greater awareness of these distinctions, so you can navigate the cost and complexity of managing AI products with more confidence.</p><h3><strong>What Is the Cost of Wrong AI?</strong></h3><p>When people talk about &#8220;AI errors,&#8221; they often imagine a chatbot saying something silly or a model predicting a slightly off number. But in the context of internal AI products, the cost of wrong AI is much deeper and more far-reaching.</p><p>As you might already know, I mainly write about AI product management in the context of building AI products within and for enterprises. In this world, I have built my own <strong>latticework of mental models.</strong> One of those mental models tells me that regardless of where I want to implement AI in a company, it always touches on one of two core types of processes: those that generate revenue or those that protect revenue.</p><p>I have yet to encounter a truly separate third category.</p><p>So, in this very simplified world of enterprise processes, it quickly becomes apparent that even a single change within these processes either reduces or increases an outcome. And it&#8217;s here that the real cost of wrong AI can often be quantified.</p><p>Let&#8217;s break it down.</p><h4>1. 
The Cost Of Wrong AI in Revenue-Generating Processes</h4><p>Imagine an AI model used in a sales forecasting process. Its job is to predict how much revenue each product line will generate next quarter. If that model consistently overestimates demand:</p><ul><li><p>The business might overproduce inventory, tying up capital unnecessarily.</p></li><li><p>Sales teams might push the wrong products, missing actual market demand.</p></li><li><p>Marketing budgets could be allocated to lower-impact campaigns.</p></li></ul><p>And the result? Missed revenue targets, higher operational costs, and reduced trust in the analytics or product teams who deployed the model. At least, if anyone ever discovers that it&#8217;s your solution causing the problem. But that&#8217;s a different topic altogether. :)</p><h4>2. The Cost Of Wrong AI in Revenue-Protecting Processes</h4><p>Consider fraud detection&#8212;a classic example of a revenue protection process. An AI model might analyze transactions to flag suspicious behavior. If the model generates too many false positives:</p><ul><li><p>Legitimate customer transactions get blocked.</p></li><li><p>Call centers become overwhelmed with complaints.</p></li><li><p>Customers lose trust and might take their business elsewhere.</p></li></ul><p>I think the point should be clear now. </p><h4><strong>A Final Example: LLM-Based Tender Assistant</strong></h4><p>Let&#8217;s take one more example&#8212;this time from the world of generative AI. Imagine you&#8217;re building an internal LLM-based Tender Assistant.</p><p>The goal is to help a tender management team quickly analyze and summarize large, complex tender documents from potential partners or clients. On paper, this sounds like the perfect productivity boost. 
But here&#8217;s where things can go wrong:</p><ul><li><p>The LLM might hallucinate facts, inserting details about tender requirements that don&#8217;t exist in the original documents.</p></li><li><p>Important legal or financial clauses might be omitted or misinterpreted in the summary.</p></li><li><p>The assistant might phrase recommendations too confidently, making users trust outputs without verifying them.</p></li></ul><p>In a tender process, mistakes like these can be costly:</p><ul><li><p>Teams could base their bid strategies on incorrect information.</p></li><li><p>The company might miss critical compliance requirements.</p></li><li><p>Misunderstandings could damage relationships with potential clients or partners.</p></li></ul><p>Even if the AI only makes small errors, the cost of cleaning up the mess&#8212;through manual document reviews, legal checks, and rework&#8212;can wipe out any productivity gains the solution promised. Worse still, if decision-makers lose trust in the assistant, adoption drops, and the entire investment risks becoming shelfware.</p><p>This is exactly why the cost of wrong AI goes far beyond just technical performance. In internal enterprise products, it&#8217;s about operational disruption, financial risks, and the delicate trust between business teams and the technology they rely on.</p><h4>The Hidden Costs Behind These Examples</h4><p>Across all these examples, there&#8217;s a common theme:</p><ul><li><p>Errors don&#8217;t just produce slightly &#8220;off&#8221; numbers&#8212;they ripple through processes, triggering downstream costs that can far exceed any initial savings promised by AI.</p></li><li><p>Fixing mistakes often means manual rework, retraining models, and eroding stakeholder trust, which can slow future AI adoption.</p></li></ul><p>This is why, in internal enterprise environments, the cost of wrong AI is rarely just technical. 
It&#8217;s operational, financial, and political.</p><p>Understanding where your AI product sits in this landscape&#8212;and what processes it touches&#8212;is the first step in quantifying the true cost of errors.</p><h3><strong>You Cannot Avoid Wrong AI, But You Can Mitigate the Risk</strong></h3><p>You might already have understood: There is no such thing as a perfect AI. It simply isn&#8217;t possible.</p><p>We use machine learning algorithms for problems where ordinary algorithms fail to deliver a proper answer within a reasonable amount of time. These problems are often so complex that you can&#8217;t simply dictate rules for how to handle every single case. There are simply too many variations, exceptions, and edge cases.</p><p>Machine learning algorithms, instead of relying on predefined rules written by humans, try to make sense of data and discover as many patterns and rules as possible on their own. But this also comes at a cost.</p><p>The cost is that we will inevitably get answers with some degree of error. And this degree of error is something we, as AI product teams, need to keep in mind at every moment.</p><p>The most successful AI products are those that incorporate strategies to cope with these errors.</p><h4><strong>How to Build AI Products Ready for Mistakes</strong></h4><p>So, how do you build AI products that stay successful despite inevitable errors?</p><p><strong>1. Know Where Errors Matter Most</strong></p><p>Not every mistake is equally significant. Some errors are merely annoying, while others can trigger real financial, legal, or reputational damage. 
As an AI Product Manager, your first job is to figure out where errors in your AI system would cause the biggest harm so you can prioritize mitigation efforts where it matters most.</p><p>&#9989; Predictive AI:</p><ul><li><p>Critical when predictions directly drive business actions, like fraud detection, credit scoring, or forecasting.</p></li><li><p>Errors here can have measurable financial or regulatory consequences.</p></li></ul><p>&#9989; Generative AI:</p><ul><li><p>Equally important but different in nature. Mistakes often mean hallucinations, factual inaccuracies, or off-brand content.</p></li><li><p>E.g. a chatbot offering incorrect legal advice, or an image model generating inappropriate visuals.</p></li></ul><p><strong>2. Keep Humans in the Loop</strong></p><p>AI alone isn&#8217;t enough, especially in high-risk situations. Successful AI products are designed so humans can step in to review, correct, or override AI outputs where necessary. This not only prevents costly mistakes but also builds trust with users who know they&#8217;re not entirely at the mercy of the machine.</p><p>&#9989; Predictive AI:</p><ul><li><p>Less common in high-volume, low-risk predictions but critical in high-stakes use cases.</p></li><li><p>E.g. financial approvals, medical diagnoses, security alerts.</p></li></ul><p>&#9989; Generative AI:</p><ul><li><p>Essential because generative outputs can be unpredictable and subjective.</p></li><li><p>E.g. humans reviewing marketing copy, legal summaries, or code before release.</p></li></ul><p><strong>3. Monitor Performance Continuously</strong></p><p>AI isn&#8217;t static. Models degrade over time as real-world data shifts or new business challenges emerge. Successful AI products have monitoring systems in place to catch drops in performance early, so issues can be fixed before they cause significant harm.</p><p>&#9989; Predictive AI:</p><ul><li><p>Standard practice. Retrain models regularly as underlying data changes.</p></li><li><p>E.g. 
changes in customer behavior affecting churn models.</p></li></ul><p>&#9989; Generative AI:</p><ul><li><p>Also critical, but more complex.</p><ul><li><p>Track hallucination rates.</p></li><li><p>Monitor factual accuracy.</p></li><li><p>Watch for toxic or biased outputs.</p></li></ul></li><li><p>Tools like automated evals and red-teaming are increasingly used to help.</p></li></ul><p><strong>4. Educate Your Users</strong></p><p>A critical part of any AI product&#8217;s success is teaching users what the system can and can&#8217;t do, how to interpret its outputs, and when to be cautious. </p><p>&#9989; Predictive AI:</p><ul><li><p>Users need to understand that predictions are probabilities, not certainties.</p></li><li><p>Helps avoid poor decisions based on overconfidence in model outputs.</p></li></ul><p>&#9989; Generative AI:</p><ul><li><p>Absolutely crucial. Generative outputs can appear impressively fluent yet be entirely wrong.</p></li><li><p>Users should treat outputs as drafts rather than final truth, and know when to verify information.</p></li></ul><p><strong>5. Design Escape Routes</strong></p><p>When AI goes wrong, users need a way out. Successful AI products include features that let users easily reverse decisions, escalate problems, or switch back to manual processes. Designing for graceful failure prevents frustration and loss of trust.</p><p>&#9989; Predictive AI:</p><ul><li><p>Important for high-stakes decisions. Allow manual overrides or alternative workflows.</p></li><li><p>E.g. letting a human analyst confirm a flagged fraud alert.</p></li></ul><p>&#9989; Generative AI:</p><ul><li><p>Absolutely essential. Users must be able to reject, edit, or regenerate content.</p></li><li><p>E.g. a &#8220;Regenerate&#8221; button for a chatbot answer, or clear disclaimers on sensitive outputs.</p></li></ul><p><strong>6. Quantify Risk and Communicate Transparently</strong></p><p>Finally, successful AI product management means being honest about risk. 
Don&#8217;t hide limitations or pretend your AI is perfect. Instead, quantify how often errors occur, what kinds of harm they might cause, and how you&#8217;re reducing those risks. Transparency builds trust and helps stakeholders make informed decisions about using AI.</p><p>&#9989; Predictive AI:</p><ul><li><p>Often well-established practice. Stakeholders expect error rates and performance metrics.</p></li><li><p>E.g. ROC curves, precision-recall trade-offs.</p></li></ul><p>&#9989; Generative AI:</p><ul><li><p>Needs extra emphasis because errors are less predictable and often subjective.</p></li><li><p>Stakeholders must understand risks like hallucinations, bias, and tone issues, and the cost of mitigating them.</p></li></ul><h3><strong>Applying the Strategies: The Tender Assistant Example</strong></h3><p>Let&#8217;s make this real. Let&#8217;s take the LLM-based Tender Assistant from the enterprise example above. Its job is to analyze large, complex tender documents and produce useful outputs such as:</p><ul><li><p>Summaries of lengthy legal or technical requirements</p></li><li><p>Lists of critical compliance obligations</p></li><li><p>Suggested draft responses for tender submissions</p></li><li><p>Risk highlights based on tender clauses</p></li></ul><p>On paper, it sounds like a dream tool for efficiency. But here&#8217;s where wrong AI can become costly &#8212; and how each of our strategies helps manage the risk.</p><p><strong>1. 
Know Where Errors Matter Most</strong></p><p>The first step is to pinpoint exactly where mistakes from the Tender Assistant would hurt the business most.</p><p>Challenges in the Tender Assistant:</p><ul><li><p>Summaries might omit crucial requirements, leading to non-compliant bids.</p></li><li><p>The AI could hallucinate requirements that don&#8217;t exist in the documents.</p></li><li><p>Drafted responses might contradict company policy or misstate legal positions.</p></li></ul><p>Applying the Strategy:</p><ul><li><p>Map the tender workflow and identify critical outputs where errors would have legal, financial, or reputational consequences.</p></li><li><p>Prioritize rigorous checks for those outputs, rather than treating every output equally.</p></li></ul><p><strong>2. Keep Humans in the Loop</strong></p><p>No AI model should independently drive high-stakes decisions in tender processes.</p><p>Challenges in the Tender Assistant:</p><ul><li><p>Tender content often involves legal, financial, and commercial nuances the AI might not fully grasp.</p></li><li><p>Users might wrongly assume AI outputs are legally vetted.</p></li></ul><p>Applying the Strategy:</p><ul><li><p>Design the product so all AI outputs are clearly marked as drafts.</p></li><li><p>Require human review and approval before finalizing summaries or tender responses.</p></li><li><p>Provide confidence scores or flags for sections the AI is uncertain about.</p></li></ul><p><strong>3. 
Monitor Performance Continuously</strong></p><p>LLMs can degrade in quality over time as business language, legal standards, or tender formats evolve.</p><p>Challenges in the Tender Assistant:</p><ul><li><p>The model might perform well initially but start hallucinating or omitting details as document styles change.</p></li><li><p>Undetected errors could slip into production workflows.</p></li></ul><p>Applying the Strategy:</p><ul><li><p>Establish routine evaluations on fresh tender documents to check:</p><ul><li><p>Hallucination rates</p></li><li><p>Omission of key clauses</p></li><li><p>Consistency in legal or technical terminology</p></li></ul></li><li><p>Encourage users to report errors and feed these back into model refinement.</p></li></ul><p><strong>4. Educate Your Users</strong></p><p>A Tender Assistant seems intelligent and authoritative&#8212;but users must remember that LLMs can be confidently wrong.</p><p>Challenges in the Tender Assistant:</p><ul><li><p>Users might trust AI outputs without verifying them, especially under deadline pressure.</p></li><li><p>Teams may assume the AI has legal or commercial authority.</p></li></ul><p>Applying the Strategy:</p><ul><li><p>Train users on:</p><ul><li><p>AI&#8217;s limitations</p></li><li><p>How to spot potential hallucinations</p></li><li><p>The need to treat outputs as drafts, not final answers</p></li></ul></li><li><p>Provide clear disclaimers on every AI-generated summary or recommendation.</p></li></ul><p><strong>5. 
Design Escape Routes</strong></p><p>Users need a way to handle errors gracefully instead of getting stuck with flawed outputs.</p><p>Challenges in the Tender Assistant:</p><ul><li><p>Users may waste time editing unusable outputs instead of starting from scratch.</p></li><li><p>Errors might silently propagate if there&#8217;s no easy way to escalate issues.</p></li></ul><p>Applying the Strategy:</p><ul><li><p>Provide:</p><ul><li><p>&#8220;Regenerate&#8221; buttons for new attempts.</p></li><li><p>Clear feedback channels to flag problematic outputs.</p></li><li><p>Options to revert to manual workflows when outputs are unreliable.</p></li></ul></li><li><p>Make it simple to trace back outputs to specific document sections for quick verification.</p></li></ul><p><strong>6. Quantify Risk and Communicate Transparently</strong></p><p>Stakeholders must understand that while the Tender Assistant can save time, it&#8217;s not infallible.</p><p>Challenges in the Tender Assistant:</p><ul><li><p>Business leaders may overestimate the AI&#8217;s capabilities and push for higher automation than is safe.</p></li><li><p>Legal teams might worry about liability if outputs are used without checks.</p></li></ul><p>Applying the Strategy:</p><ul><li><p>Quantify:</p><ul><li><p>Average error rates in summaries</p></li><li><p>Frequency of hallucinations</p></li><li><p>Time saved versus risk exposure</p></li></ul></li><li><p>Communicate trade-offs clearly:</p><ul><li><p>&#8220;Using the Tender Assistant saves 60% of drafting time but requires mandatory human review to avoid compliance risks.&#8221;</p></li></ul></li><li><p>Be honest about what the AI can and cannot guarantee.</p></li></ul><h3><strong>Final Thoughts</strong></h3><p>The Tender Assistant example makes one thing clear: it takes significant effort to build an AI product that truly serves users&#8217; needs while managing the risks of wrong AI outputs in a timely and appropriate way.</p><p>I&#8217;ve become very cautious about which 
AI product ambitions are worth pursuing. Too often, we don&#8217;t fully see the hidden costs and side effects at the beginning. What looks like a potential million-euro opportunity can quickly require millions to build, fine-tune, and maintain. In the end, there might not be much value left on the bottom line&#8212;especially if the original business case was based on the wrong assumptions about efficiency gains.</p><p>If too much human oversight is required to validate AI outputs, that effort needs to be factored into the business case from the very start. Otherwise, we risk building products that look impressive but fail to deliver meaningful returns.</p><p>At the end of the day, a deliberate assessment, involving the right experts at the right time&#8212;and crucially, at the very beginning of each initiative&#8212;is absolutely essential.</p><p>Ultimately, the best AI products expect errors. And that&#8217;s exactly why they succeed.</p><p>JBK &#128330;&#65039; </p><p></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#47 - AI Projects vs. 
AI Products]]></title><description><![CDATA[If you're serious about business value, you need AI Product Managers, not Project Managers.]]></description><link>https://www.jaserbk.com/p/ai-projects-vs-ai-products</link><guid isPermaLink="false">https://www.jaserbk.com/p/ai-projects-vs-ai-products</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 22 Jun 2025 09:01:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2uF9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><strong>#beyondAI</strong></p><p>An enterprise decides to &#8220;<em>do something with AI</em>.&#8221; A promising use case surfaces. A team is assembled. Budgets are allocated. Timelines are drawn. And someone asks, <em>&#8220;Who&#8217;s going to manage this project?&#8221;</em></p><p>That question, on its own, isn&#8217;t wrong. </p><p>AI initiatives, like any other initiative, require direction and effective delivery. But too often, we try to solve the delivery challenge before we&#8217;ve truly understood the problem. 
We treat AI initiatives as if we already know what needs to be built, and we assume that whatever gets delivered will automatically be adopted once it&#8217;s rolled out.</p><p>But if we&#8217;re being honest, many of those promising AI use cases are framed by external consultants eager to sell their services, or by managers who may understand the business context but don&#8217;t have the time to sit down with real end users. They rarely investigate what needs to be solved first or how a solution should evolve with user feedback. <strong>The thinking starts and ends with solutions, not with adoption.</strong></p><p>And when you think in solutions, then yes, execution becomes your focus.</p><p>But the reality is this: building AI for internal use doesn&#8217;t fail because teams can&#8217;t execute. It fails, just like many technologies before it, because users don&#8217;t adopt what&#8217;s been built.</p><p>And when I talk about adoption, I don&#8217;t mean rollout. I mean real usage. I mean users solving their problems with your product, so that the business starts seeing real value in return.</p><p>Now, if you can answer the following question with a confident yes, feel free to skip the rest of this article:</p><div class="pullquote"><p><em>&#8220;Do you really know that what you&#8217;re building will be adopted?&#8221;</em></p></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2uF9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!2uF9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!2uF9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!2uF9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!2uF9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2uF9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png" width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2050032,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/166462012?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2uF9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!2uF9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!2uF9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!2uF9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19c2ead2-72a8-4d22-b496-94657353ea14_1200x1200.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h3><strong>Why Project Thinking Doesn&#8217;t Work for Internal AI Initiatives</strong></h3><p>I see a project as a clearly defined mandate: deliver something based on predefined requirements, with a set start and end date, and within a fixed budget and resource plan.</p><p><em>That&#8217;s a project.</em></p><p>And yes, building AI systems can be handled <em>as if</em> they were projects. Someone defines what needs to be delivered, what resources are available, and by when it should be done. Then the execution begins.</p><p>But we also need to ask: <em>what is the deeper motivation behind delivering AI systems inside companies?</em></p><p>The main ambition &#8212; and it&#8217;s the core promise that&#8217;s fueled the AI hype &#8212; is simple. AI is expected to reduce or avoid cost, and ideally increase revenue. It&#8217;s positioned as a technology that can offer insights and generate content at scale, enabling companies to operate far more effectively and efficiently. So even though we may phrase AI use cases as technical solutions, they come with very real business expectations.</p><p>And here&#8217;s the tension: if the expectation is business value, then simply delivering an AI system is not enough. You may have executed successfully, but still failed to meet the expectation.</p><p>Project Management helps you deliver something. But it doesn&#8217;t guarantee that what you deliver will generate value. 
So the real question is: <em>why are so many companies still treating AI as if it were just another IT project?</em></p><p>If we take the business ambition seriously &#8212; if we believe that AI is meant to deliver measurable value &#8212; then we need to stop thinking in terms of <strong>projects</strong>. We need to think in terms of <strong>products</strong>. And just by using the word &#8220;<em>product</em>,&#8221; we already imply a responsibility toward sustainable, economic value creation. Projects, in contrast, are primarily measured by efficiency: <em>did we deliver on time and within scope?</em></p><p>Those are two entirely different goal systems.</p><p>When we treat internal AI solutions as projects, we imply that success is predictable. That if we define the scope clearly and execute well, value will follow. But if that were true, we&#8217;d all be billionaires by now. We would just need to execute AI projects efficiently, and the money would come.</p><p>Of course, reality tells us something else.</p><p>Startups are the perfect example. Some set up their entire organization, from team to product to operations, and still fail. They execute well, and yet they miss the target. Which proves something important: value is not created through flawless project execution. It is created through product-market fit, adoption, and constant iteration.</p><p>And the same is true for AI in companies.</p><p>Every internal AI initiative carries an economic ambition. But we can execute flawlessly and still fail, and many companies are now seeing exactly that. Even those who claim high adoption rates often admit they don&#8217;t see any impact on their bottom line.</p><p>That&#8217;s the real red flag.</p><p>Because if adoption is high, but the financial benefit is missing, something is off. And what that usually means is this: the solution may be valuable to users, but it is not valuable to the business.</p><p>This isn&#8217;t rare. 
In fact, it often traces back to the earliest phase of the initiative, where decisions are made about where to invest time and resources. Too often, those decisions aren&#8217;t grounded in a clear understanding of how value will actually show up in business terms.</p><p>So, if we want to increase the chances of creating value &#8212; not just for end users, but for the business as well &#8212; we need to change the way we approach AI initiatives. We need the right frameworks, the right mindsets, and the right operating models.</p><p>And in my experience, no successful internal AI initiative was ever truly a project. Some were labeled as such, but if you look closely, the ones that succeeded didn&#8217;t follow classical project patterns. They did everything necessary to ensure real impact, even if that meant going far beyond the original project scope.</p><p>The uncomfortable truth comes when those teams, having built something that works and delivers value, realize they&#8217;ve just taken the first step. Because once a &#8220;<em>project</em>&#8221; is delivered, the real work begins. Suddenly, it&#8217;s no longer a one-off project. It&#8217;s an ongoing responsibility.</p><div class="pullquote"><p><em>What do we call an ongoing project with no real end? </em></p><p>I&#8217;d like to call it a product ambition.</p></div><p>A product, in the best case, runs an indefinite number of projects &#8212; each designed to keep it relevant, effective, and trusted. And once you see AI solutions through that lens, the need to build a proper product organization becomes clear. Classical project delivery setups simply don&#8217;t support these ambitions.</p><p>In the worst case, companies make AI investments based on the assumption that they&#8217;ll pay a one-time delivery cost, without considering the long-term operational or maintenance costs. That&#8217;s how bad investment decisions are made. 
That&#8217;s how ROI expectations go unmet.</p><p>So yes, I&#8217;m not a believer in AI projects.</p><p>They&#8217;ve never delivered on the promises the AI industry continues to make.</p><p>Those promises require a deep shift: <strong>a real product operating model, a product-driven mindset, and people who know how to manage all of this in one cohesive effort.</strong></p><p>Only then can we build AI solutions that do more than just function.<br>They deliver value &#8212; and they continue to do so.</p><div><hr></div><h3><strong>AI Project Management Still Matters</strong></h3><p>Now, just to be clear: I am not arguing that project management is useless in AI initiatives. Far from it.</p><p>Project management still matters &#8212; especially once the fog begins to clear.</p><p>Every product ambition, once it reaches a certain level of maturity, needs structure. It needs clear priorities, reliable timelines, and accountability. It needs someone to drive progress, manage complexity, align stakeholders, and make sure that what&#8217;s been decided actually gets delivered. In this sense, project management brings discipline to the chaos. It&#8217;s the engine room that ensures the ship keeps moving.</p><p>And if you&#8217;re a Product Manager, especially in the AI space, you&#8217;ll quickly realize this: <strong>at some point, you will need to act like a Project Manager</strong> (and in agile contexts they might be called a Product Owner). Not because your title says so, but because the product demands it.</p><p>Once you&#8217;ve shaped the problem space, validated the key assumptions, and defined what success looks like, the focus shifts. You move from learning to delivering. And when that happens, it&#8217;s not strategic vision that gets the product over the line. 
It&#8217;s clear execution.</p><p>That&#8217;s where the strengths of project management come in.<br>And the best AI Product Managers know how to borrow from that skillset when the time is right.</p><p>You need to plan without becoming rigid.<br>You need to track progress without turning into a micromanager.<br>You need to manage scope and expectations while still keeping your eye on long-term value.</p><p>It&#8217;s a balancing act &#8212; and the more mature your product becomes, the more of that balance is required.</p><p><em>But here&#8217;s the key difference:</em> as a Product Manager, you never fully <em>become</em> a Project Manager. You carry a different mandate. You are still responsible for the direction, for making sure the team is solving the right problem in the right way, for protecting the product from drifting into mere output.</p><p>So, yes, you adopt project management skills.<br>You just don&#8217;t lose your product mindset in the process.</p><p>In fact, I&#8217;d argue:</p><div class="pullquote"><p>The strongest AI Product Managers are those who can switch seamlessly between strategy and execution.</p></div><p>They know when to step back and reframe the problem, and when to lean in and drive delivery. 
They don&#8217;t draw a hard line between &#8220;thinking&#8221; and &#8220;doing.&#8221; They understand that impact requires both.</p><p>This is especially important in internal AI product work, where the boundaries between roles often blur, and where the absence of clear product ownership makes it easy to fall into delivery for delivery&#8217;s sake.</p><p>So, rather than dismiss project management, let&#8217;s integrate its strengths.</p><p>Because once your strategy is in place and your product has a clear path forward, <strong>execution becomes the strategy</strong>.<br>And if you don&#8217;t own that execution, someone else will &#8212; and they might not be carrying the product&#8217;s intent the way you do.</p><div><hr></div><h3><strong>What We Really Need to Acknowledge</strong></h3><p>If there&#8217;s one thing I wish more companies would admit out loud, it&#8217;s this:</p><p><strong>AI is not just a technical initiative. It&#8217;s a product journey.</strong></p><p>And as long as we treat it like a project, with fixed timelines, one-time delivery expectations, and predefined scopes, we will continue to fall short of its potential. Not because the teams aren&#8217;t capable, but because the framing is wrong from the start.</p><p>AI products live at the intersection of uncertainty, complexity, and change. They don&#8217;t just automate tasks. They reshape how people work. They challenge established processes. They require trust, behavior change, and new ways of measuring success. And that&#8217;s exactly why they need product thinking at the core.</p><p>The uncomfortable truth is that AI success doesn&#8217;t come from good project management alone. 
It comes from having people who know how to <strong>manage assumptions, not just resources</strong>.<br>People who understand how to balance strategic ambiguity with operational clarity.<br>People who can translate potential into progress, even when no one is quite sure what &#8220;done&#8221; really looks like.</p><p>And that means building your AI initiatives around Product Managers, not just Project Managers.<br>It means giving AI efforts the same treatment you&#8217;d give any real product: a clear strategy, ownership across the lifecycle, and an operating model designed for continuous discovery and delivery.</p><p>Because </p><div class="pullquote"><p>if you want AI to be more than a prototype or a solution  that no one uses, you need to treat it like something that lives. Something that evolves. Something that will never be &#8220;<em>finished</em>.&#8221;</p></div><p>And that, by definition, can&#8217;t be a project.</p><p>JBK &#128330;&#65039;</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#46 - The First Language of AI Products Is Not AI]]></title><description><![CDATA[Why AI Product Managers must master product thinking before they master technology]]></description><link>https://www.jaserbk.com/p/the-first-language-of-ai-products</link><guid isPermaLink="false">https://www.jaserbk.com/p/the-first-language-of-ai-products</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 15 Jun 2025 09:11:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5fTa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p>Product thinking is the critical skill. AI expertise is the optimizing one. </p></div><p>You need both, but only one determines whether what you build will matter.</p><p>There&#8217;s a subtle tension at the heart of modern AI product management, the kind you only begin to sense when a seemingly brilliant solution fails to land. 
It&#8217;s the tension between what AI makes possible and what users actually need, between technological potential and human value.</p><p>In many organizations, especially those racing to &#8220;unlock&#8221; AI, the excitement around the technology overshadows the deeper work of understanding real problems, testing what matters, and designing for adoption. There&#8217;s an unspoken belief that mastering architecture, benchmarks, and models will somehow lead to a successful product. But this is rarely the case.</p><p>The inconvenient truth is that no matter how deeply you understand AI, or how advanced your model performance is, if you don&#8217;t understand product, you won&#8217;t make AI work in the real world.</p><p>This isn&#8217;t a personal failure. It&#8217;s a recurring pattern. We&#8217;ve seen it before across technological waves&#8212;mobile, cloud, APIs, data science. Each time, technical skills surged, but adoption stalled, not because the tools weren&#8217;t powerful, but because the work of integrating them into actual behaviors, decisions, and workflows was never done.</p><p>AI adds a particular challenge. It feels like intelligence. 
It creates the illusion that it will naturally solve problems, because it appears to understand them. But intelligence is not understanding. And model performance is not product-market fit.</p><p>Building an AI product is not the same as using AI in a product. The second is about integrating AI into an already established product, usually as a feature that supports or enhances something that already works. The first still has to prove itself. It must solve a real problem with AI as the core of the solution, not as an addition. That makes it more complex, more fragile, and more dependent on the quality of decisions made long before the first model is ever deployed.</p><p>And those decisions begin in product.</p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5fTa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5fTa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!5fTa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!5fTa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png 1272w, 
https://substackcdn.com/image/fetch/$s_!5fTa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5fTa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png" width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2052731,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/165984838?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5fTa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!5fTa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png 848w, 
https://substackcdn.com/image/fetch/$s_!5fTa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!5fTa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7c3d20b-a77d-4dfe-a03e-954630012326_1200x1200.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h3><strong>What does it mean to speak the language of product?</strong></h3><p>It means starting with what&#8217;s broken, not 
with what&#8217;s impressive. It means identifying real user friction, not just chasing interesting use cases. It means defining what success looks like before deciding what kind of model might support it. It also means building for change, not just for performance. And not to be forgotten, it means building something others would be willing to pay for, or something that creates meaningful value when used internally across the company.</p><p>Product thinking teaches you to look at systems over time, not just point solutions. It demands that you test assumptions, reframe features, and integrate feedback into the foundation of what you build. It helps you create value that doesn&#8217;t just show well in a demo, but actually holds up in real-world use.</p><p>AI, for all its sophistication, doesn&#8217;t know what matters to your users. It doesn&#8217;t know whether people will trust probabilistic outcomes, whether your solution fits a real decision path, or whether adoption will quietly vanish after week two. AI doesn&#8217;t care if the user goes back to Excel.</p><p>That&#8217;s why product thinking must come first.</p><h3><strong>If I Had to Choose, I&#8217;d Choose Product</strong></h3><p>When it comes to AI Product Management, you can separate the skills &#8212; but you can&#8217;t separate their impact. In an ideal world, your AI PM brings both product intuition and technical depth. But if I had to choose just one, I&#8217;d always pick strong product thinking over semi-technical AI experience. Every. Single. Time.</p><p>Because technology does not define the outcome. Product thinking does.</p><p>I&#8217;ve seen technically skilled PMs struggle to frame a clear problem, align stakeholders, or prioritize for adoption. I&#8217;ve also seen product-driven PMs with minimal AI experience deliver far more impact, simply because they asked better questions, focused on value, and brought the right people in when it mattered. The truth is, AI can be learned. 
Judgment can be learned too. But knowing which one matters more &#8212; and when &#8212; is where most people get it wrong.</p><p>That doesn&#8217;t mean AI knowledge isn&#8217;t important. It is. But AI is a tool, and knowing how to build it is only part of the story. Product thinking is what tells you whether the problem is worth solving, and whether the solution will actually stick.</p><div class="pullquote"><p>For Product Managers, product is the critical skill. AI is the optimizing one.</p></div><p>Without product thinking, you risk solving the wrong problem. Without AI knowledge, you risk solving the right problem too slowly or with unnecessary complexity. But if you have both, you make better decisions earlier. You shape discovery with technical realism. You speak your engineers&#8217; language. You avoid building for novelty and aim for scale.</p><p>You stop treating AI like magic. You treat it like leverage. And you stop asking, &#8220;What can the model do?&#8221; and start asking, &#8220;What is the pain we need to address?&#8221;</p><p>Because in the end, great AI products are not built by those who know the technology best. They&#8217;re built by those who know when and why to use it.</p><p>And that begins with something far older than machine learning.</p><p><strong>Lead with practical wisdom, not theoretical knowledge</strong></p><p>The ancient Greeks made a distinction between two forms of knowledge that still feels surprisingly relevant today. Episteme referred to theoretical knowledge &#8212; knowing facts, concepts, and systems. Phronesis meant practical wisdom &#8212; knowing how to act, how to decide, and how to navigate uncertainty in specific, real-world contexts.</p><p>AI skills belong to the world of episteme. They are grounded in logic, in structured knowledge, in models that can be measured and improved. Product thinking, on the other hand, lives in the world of phronesis. 
It requires you to make sense of ambiguity, to work with incomplete information, and to guide people and ideas toward value in an unpredictable environment.</p><div class="pullquote"><p>This is why product management, especially in the context of AI, is not just a function or a skillset. It is a philosophical discipline. It is the ongoing practice of deciding what matters, for whom, and why. It is the responsibility of shaping not only what gets built, but also how and to what end.</p></div><p>AI Product Management requires both kinds of knowledge. But it only works when guided by phronesis. When practical wisdom leads, and theoretical knowledge supports.</p><p>Either way, if you create value for the company by building things that add value for users, you have done your job.</p><p>JBK &#128330;&#65039; </p><div><hr></div><p><strong>&#129517; Continue your journey in AI Product Management</strong></p><p>If you&#8217;re serious about building the right foundation for AI Product Management, here are some essential reads from the blog:</p><ul><li><p><a href="https://www.jaserbk.com/p/before-the-ai-product-theres-belief">Before the AI Product, There&#8217;s Belief</a><br>A personal reflection on why belief and trust are the invisible prerequisites of every internal AI product.</p></li><li><p><a href="https://www.jaserbk.com/p/why-ai-evaluations-have-never-been?r=2swyhe&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">Why AI Evaluations Have Never Been Optional for AI Product Managers</a><br>A guide on how to treat evaluation as part of product thinking, not just as technical due diligence.</p></li><li><p><a href="https://www.jaserbk.com/p/most-ai-teams-ship-confidently-into?r=2swyhe&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">Most AI Teams Ship Confidently Into the Void &#8211; Prototyping as Discovery</a><br>Why prototyping should be a tool for learning, not validation, in internal AI 
development.</p></li><li><p><a href="https://www.jaserbk.com/p/a-curriculum-for-ai-product-management">A Curriculum for AI Product Management</a><br>A comprehensive roadmap of skills, mindsets, and methods for internal AI Product Managers.</p></li><li><p><a href="https://www.jaserbk.com/p/too-technical-to-succeed">Too Technical to Succeed? &#8211; The Peer I Was Advising Was Me</a><br>An honest look at how over-relying on technical depth can derail product decisions.</p></li><li><p><a href="https://www.jaserbk.com/p/the-path-to-ai-product">The Path to AI Product</a><br>A conceptual journey from AI use case to a real, adopted AI product&#8212;with language and framing that stick.</p><div><hr></div></li></ul><p><strong>&#128257; Looking for another sharp voice in AI Product Management?</strong></p><p>I highly recommend <a href="https://aipmguru.substack.com/">AIPM Guru</a> by <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Shaili Guru&quot;,&quot;id&quot;:21946940,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d3d5e58-d233-4dba-8d33-4d8e615c1955_1080x1080.jpeg&quot;,&quot;uuid&quot;:&quot;818bf2ff-5778-4285-af24-88a0818d17b1&quot;}" data-component-name="MentionToDOM"></span>. With hands-on experience leading internal AI product development at large organizations like Amazon, Shaili brings a grounded perspective that balances product discipline with technical depth. 
She writes about stakeholder alignment, AI lifecycle management, model evaluation, and how to frame AI work in product terms&#8212;topics often ignored in overly technical discourse.</p><p>Here are a few of her must-reads:</p><ul><li><p><a href="https://aipmguru.substack.com/p/the-future-of-ai-product-management">The Future of AI Product Management</a><br>A look at how multi-agent systems, adaptive models, and human-machine collaboration will reshape the role.</p></li><li><p><a href="https://aipmguru.substack.com/p/ai-basics-what-it-actually-is-and">AI Basics: What It Actually Is (And Isn&#8217;t)</a><br>A crisp explanation of what AI truly is, and how PMs can stay clear-eyed about its use.</p></li><li><p><a href="https://open.substack.com/pub/aipmguru/p/can-you-prove-your-ai-moat?r=2swyhe&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">Can You Prove Your AI Moat?</a><br>A thought-provoking piece on measuring the defensibility of AI features in product contexts.</p></li><li><p><a href="https://open.substack.com/pub/aipmguru/p/from-crisp-dm-to-crisp-gen-ai-what?r=2swyhe&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">CRISP-ML: A Framework for AI Product Managers</a><br>Her 8-part series laying out a comprehensive, repeatable framework for managing AI product delivery from start to finish.</p></li></ul><p>Follow her at <a href="https://aipmguru.substack.com/">aipmguru.substack.com</a> if you want to sharpen your practice and think more strategically about your role as an AI PM.</p><p></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#45 - Still Using Scrum for AI Product? That’s Your First Problem.]]></title><description><![CDATA[It&#8217;s time to stop copying delivery models and start building your own.]]></description><link>https://www.jaserbk.com/p/still-using-scrum-for-ai-product</link><guid isPermaLink="false">https://www.jaserbk.com/p/still-using-scrum-for-ai-product</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 01 Jun 2025 10:38:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Rmho!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>#beyondAI</strong></p><p>We love structure. We crave it, especially in large organizations. It gives us something to hold onto in the chaos of delivery deadlines, shifting priorities, and internal expectations. And when we set out to build AI products, we often reach for what looks familiar: a framework, a template, a delivery method someone else has already tested. Scrum. SAFe. The Spotify model. All tried-and-true in some context, all battle-tested, but not in our context. 
And that&#8217;s the point.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Rmho!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Rmho!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!Rmho!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!Rmho!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Rmho!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Rmho!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png" width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2055092,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/164924520?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Rmho!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!Rmho!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png 848w, 
https://substackcdn.com/image/fetch/$s_!Rmho!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!Rmho!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2afed60f-d444-4cb4-a584-016fd0eebbdc_1200x1200.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong>Why Predefined Structures Don&#8217;t Work</strong></h2><p>When we import a predefined delivery structure into our
teams, we&#8217;re implicitly making a dangerous assumption: that our product context is just like someone else&#8217;s. That their constraints are our constraints. That their user reality is our user reality.</p><p>But internal AI products rarely start with clarity. They start with ambiguity. We don&#8217;t just have a vague problem. We often don&#8217;t know if there is a real problem yet. We&#8217;re not scaling a known solution. We&#8217;re trying to discover whether something is even worth solving. And that discovery isn&#8217;t just about the product. It&#8217;s also about the delivery itself.</p><p>How can we assume that any rigid delivery structure will fit a process that is, by definition, still unfolding?</p><h2><strong>Different AI Products Need Different Ways of Working</strong></h2><p>Even within the same organization, AI product teams are solving wildly different problems under wildly different conditions. One team might be building a foundational NLP model from scratch, experimenting with new architectures and custom datasets, optimizing for training efficiency, and managing GPU infrastructure. Another team might be wrapping a pre-trained foundation model with internal data, plugging it into an existing workflow, and focusing more on prompt engineering, integration logic, and user testing.</p><p>Both are AI products. Both exist under the same company umbrella. But their delivery reality couldn&#8217;t be more different.</p><p>Now add in other variations: Some AI products are meant to automate existing processes. Others aim to augment decision-making. Some are critical infrastructure. Others are lightweight experiments. Some require explainability and compliance from day one. 
Others can live as internal alpha tools for months.</p><p>And yet, we often try to apply the same delivery expectations &#8212; the same sprint cadence, the same roadmap templates, the same governance checkpoints &#8212; across all of them.</p><p>It&#8217;s like assuming that just because two products both use AI, they should be built the same way. But would we expect the same delivery approach for building a backend API and designing a customer-facing app? Probably not. So why do we treat all AI initiatives as if they share the same DNA?</p><p>The truth is: the nature of the problem, the data, the user type, and the maturity level all shape how a team needs to work. And so should our expectations of delivery, structure, and success criteria.</p><p>That&#8217;s why rigid delivery setups fail. Not because they&#8217;re bad, but because they assume a level of uniformity that simply doesn&#8217;t exist.</p><h3><strong>Product Discovery and Delivery Shape Each Other</strong></h3><p>Here&#8217;s a more realistic truth: we discover the problem and the way we work at the same time.</p><p>With every iteration of learning &#8212; from stakeholder interviews, user feedback, model behavior, or data friction &#8212; we not only understand the problem better, but also discover how we need to talk about the problem, how fast we can move, what kind of skills we need, who needs to be involved, and what kind of delivery rhythm fits the reality we&#8217;re in.</p><p>Our delivery structure is not something we apply to the work. It&#8217;s something that emerges with the work.</p><p>This doesn&#8217;t mean we allow anarchy. But it also doesn&#8217;t mean strict control is the answer. The goal isn&#8217;t to copy structure. 
It&#8217;s to design for emergence with just enough clarity to align, and just enough freedom to adapt.</p><h2><strong>Governance Demands Flexibility, Not Uniformity</strong></h2><p>Some argue that internal teams need structure because they operate within a governed environment &#8212; and they&#8217;re right. Enterprise AI product teams often navigate a web of rules: legal, compliance, data privacy, security, ethics, model monitoring. And those requirements aren&#8217;t optional.</p><p>But here&#8217;s the catch: governance is not one thing. It varies, sometimes dramatically. One company&#8217;s governance framework is light-touch and decentralized. Another&#8217;s requires seven-step approval chains and model risk committees. In some places, governance differs per business unit. In others, it&#8217;s even more granular, with each division applying its own criteria for validation, sign-off, or integration.</p><p>This means that even if two teams are building similar AI products, their delivery realities will diverge. Not because the product demands it. But because governance does.</p><p>So we&#8217;re not just dealing with product diversity. We&#8217;re dealing with governance diversity, and that reinforces the need for custom delivery setups, tailored to both the product ambition and the regulatory environment it lives in.</p><p>The mistake is thinking that a rigid delivery model will simplify governance. In reality, it often causes friction, because the model wasn&#8217;t designed to speak the language of that specific governance body or to deliver the specific artifacts required for that context.</p><p>And the truth is: you can meet governance expectations without enforcing delivery uniformity. 
You can build in traceability, risk assessments, ethical reviews, model explainability, and deployment guardrails without forcing teams to plan, iterate, or communicate according to a model that was never theirs to begin with.</p><p>Governance doesn&#8217;t require rigidity. It requires accountability. And accountability works best when teams are given the space to define how they meet the right outcomes, not forced into a structure that doesn&#8217;t reflect the complexity they&#8217;re actually working with.</p><p>And while we&#8217;re here, it&#8217;s worth stating the obvious: we should always look for ways to streamline governance so that it enables, rather than blocks, time to market. That&#8217;s another discussion entirely. But one worth having.</p><p>What matters most at the start is this: it&#8217;s better to begin with bad governance than to wait for perfect governance to exist. Because speed matters. And learning beats delay every time.</p><h2><strong>Use Frameworks as Inspiration, Not Instructions</strong></h2><p>This doesn&#8217;t mean we shouldn&#8217;t learn from frameworks like Scrum, SAFe, or the Spotify model. But we need to treat them as inspiration, not instruction.</p><p>Take Scrum, for example. It says: <em>deliver in small iterations and stay close to your users. Build a cross-functional team that owns the full delivery loop. </em>These are healthy principles. But what if you&#8217;re building an internal AI service that feeds into five other teams? Or your users are data governance officers, not end users with a UI?</p><p>You can&#8217;t just run sprints and expect value to emerge. In these settings, understanding the problem landscape and aligning across stakeholder groups takes more time and flexibility than Scrum rituals allow.</p><p>Then there&#8217;s SAFe &#8212; the Scaled Agile Framework. It promotes alignment through Program Increments and a central Agile Release Train. 
It defines strong role clarity between Product Management, System Architects, and Business Owners. This can work well (to be proven) in large-scale, regulated industries with lots of dependencies. But for internal AI teams exploring whether a GenAI assistant can even solve something meaningful? You don&#8217;t need a Release Train. You need a small crew, fast feedback, and the ability to ditch the project if the hypothesis breaks. SAFe is built for predictability and scale, not for discovery under uncertainty.</p><p>And then there&#8217;s the Spotify model. Famous for its Squads, Tribes, Chapters, and Guilds &#8212; often admired for its emphasis on team autonomy and cultural coherence. But here&#8217;s the thing: even Spotify itself has said that what became known as &#8220;the Spotify model&#8221; was just a snapshot, not a blueprint. Henrik Kniberg &#8212; who helped describe the model &#8212; later clarified that <em>The Spotify model doesn&#8217;t even exist. It was a snapshot in time of how we worked.</em></p><p>Ironically, the model became more famous outside of Spotify than inside. Many companies adopted the vocabulary &#8212; Squads, Tribes, Guilds &#8212; but not the thinking that shaped it: constant adaptation, context over control, culture before structure.</p><p>The danger isn&#8217;t in using ideas from these frameworks. The danger is copying the form without the function.</p><p>Scrum says: <em>ship fast</em>.</p><p>SAFe says: <em>align big.</em></p><p>Spotify said: <em>empower the team.</em></p><p>All good thoughts. 
But none of them should be assumed to fit by default.</p><h2><strong>You&#8217;re Discovering the Product and the Process</strong></h2><p>This is the heart of it.</p><p>When we build AI products internally, we&#8217;re discovering two things at once: the problem we should solve, and the way of working that lets us solve it effectively in this environment.</p><p>And just as our understanding of the problem changes, our delivery setup must change too.</p><p>Maybe we start with fast prototyping. Then we realize we need more data governance. Maybe we start with a team of two, then bring in security and operations. Maybe we start with async feedback, then shift to daily touchpoints once we hit real complexity.</p><p>There&#8217;s no shame in that. It&#8217;s not chaos. It&#8217;s adaptation.</p><p>Even once we&#8217;ve found a delivery rhythm that works &#8212; one that fits our team size, product scope, data needs, and governance landscape &#8212; we should assume it will need to change again.</p><p>Teams change. Tools evolve. The organization reorganizes. AI capabilities shift. And with them, the way we work must shift, too.</p><p>That&#8217;s not a sign of failure. It&#8217;s a sign of life.</p><p>We wouldn&#8217;t build every product the same way. So why should we deliver them the same way?</p><p>The smartest thing we can do for our teams isn&#8217;t to give them a rigid framework. It&#8217;s to give them the freedom to discover their way of working with our guidance &#8212; just as they&#8217;re discovering the product itself with our feedback.</p><p>Not in anarchy. Not in chaos. 
But in conscious evolution, grounded in context.</p><p>JBK &#128330;&#65039; </p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[#44 - Your AI Product Doesn’t Need a Data Scientist]]></title><description><![CDATA[And when your AI Product is nothing without one]]></description><link>https://www.jaserbk.com/p/your-ai-product-doesnt-need-a-data</link><guid isPermaLink="false">https://www.jaserbk.com/p/your-ai-product-doesnt-need-a-data</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 25 May 2025 14:16:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6_Qu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This article is the first in a series that explores the different roles needed to build successful internal AI products. We will look at what each role contributes, what they do not, and when they truly belong on the core team. In AI product development, assembling the wrong team is not just inefficient. 
It is expensive, misleading, and one of the most common reasons AI initiatives fall short. And it happens more often than we like to admit.</p><p>Especially in corporate settings, where roles are staffed because someone is available, or because someone said, &#8220;AI needs a Data Scientist.&#8221; But that is the wrong lens. AI does not necessarily need a Data Scientist. A specific type of AI product does. And unless AI Product Managers can tell the difference, we will either over-engineer simple use cases or under-staff the hard ones. That is why it is essential to understand what each role really brings, and when they are critical to delivery, to trust, or to both.</p><p>Let&#8217;s start with one of the most misunderstood ones: the Data Scientist.</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6_Qu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6_Qu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!6_Qu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!6_Qu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!6_Qu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6_Qu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png" 
width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2086716,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/164403207?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6_Qu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!6_Qu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!6_Qu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!6_Qu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eefc4b-3af1-4f98-af8d-24d1c652ac6e_1200x1200.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong>The general role of a Data Scientist: What they&#8217;re trained to do</strong></h2><p>A Data Scientist&#8217;s job is not just to &#8220;do AI&#8221;. It is to make sense of data &#8212; often messy, often incomplete, often overwhelming &#8212; and turn it into something you can act on. That might mean discovering new insights from customer behavior. It might mean building a model that predicts what users will do next. 
Or it might mean designing experiments to understand why a system behaves the way it does.</p><p>Their toolkit includes:</p><ul><li><p>Statistical modeling and hypothesis testing</p></li><li><p>Exploratory data analysis (EDA)</p></li><li><p>Feature engineering and variable selection</p></li><li><p>Machine learning algorithm development and tuning</p></li><li><p>Model evaluation and validation</p></li><li><p>Interpretability and fairness techniques</p></li></ul><p>They combine math, code, and critical thinking to move from data to decision logic. But that does not mean every team building with AI needs them.</p><h3><strong>What a Data Scientist actually contributes to internal AI products</strong></h3><p>Inside a company, building AI products is not just a technical challenge. It is a business challenge shaped by organizational complexity. Data is spread across silos. Expectations are vague. Timelines are tight. And trust is everything. In this context, the Data Scientist brings a set of capabilities no other role quite covers.</p><p><strong>1. They bridge messy data and model-ready inputs</strong></p><p>Internal data is rarely clean. It comes from legacy systems, manual processes, and diverse domains. A Data Scientist does not just take what&#8217;s given. They question it. They ask:</p><ul><li><p><em>What does this column really mean?</em></p></li><li><p><em>Is this bias or signal?</em></p></li><li><p><em>Are we modeling the right problem or just what is easiest to compute?</em></p></li></ul><p>Their ability to structure, clean, and translate that data is foundational. Because if you do not start from solid ground, no AI model, no matter how sophisticated, will deliver the impact you are looking for.</p><p><strong>2. They build models tailored to internal realities</strong></p><p>Most off-the-shelf models are trained on public datasets. But internal problems often require internal logic. 
Whether it is scoring leads based on sales history, predicting churn for B2B accounts, or classifying support tickets with enterprise-level nuance, these are not problems you solve with generic APIs. Data Scientists build models that reflect:</p><ul><li><p>Internal processes</p></li><li><p>Business rules</p></li><li><p>Customer behaviors specific to your organization</p></li></ul><p><strong>3. They protect your product&#8217;s credibility</strong></p><p>An AI solution can fail even if it works &#8212; if no one trusts it. That is where a Data Scientist makes a quiet but critical difference. They:</p><ul><li><p>Test for fairness</p></li><li><p>Quantify uncertainty</p></li><li><p>Simulate how the model behaves under different conditions</p></li><li><p>Help stakeholders interpret the model&#8217;s decisions</p></li></ul><p>In internal environments, where decisions affect teams, budgets, and customers, trust is essential. Without explainability, adoption stalls.</p><p><strong>4. They turn AI discovery into AI strategy</strong></p><p>Even before you have a clear product, you often have data. And before you know what to build, you need to understand what is happening beneath the surface. A Data Scientist helps you explore patterns, test early hypotheses, and generate strategic questions like:</p><ul><li><p><em>Which customers are most affected by late delivery?</em></p></li><li><p><em>What behaviors lead to high support ticket volume?</em></p></li><li><p><em>Is there a predictable signal before a churn event?</em></p></li></ul><p>They do not just help you build the product. They help you define what should be built.</p><h2><strong>When you need a Data Scientist on your core team</strong></h2><p>You do not need a Data Scientist for every AI initiative. But when you do, no other role can replace them. 
Here is when their presence goes from nice to have to essential.</p><h3><strong>You&#8217;re developing your own model</strong></h3><p>If your AI product involves training a custom model &#8212; for churn prediction, time-series forecasting, NLP classification, or anything similar &#8212; you need a Data Scientist. Even if you are fine-tuning an existing one, you will need their help with:</p><ul><li><p>Feature engineering</p></li><li><p>Hyperparameter tuning</p></li><li><p>Model validation and selection</p></li><li><p>Performance benchmarking</p></li></ul><p>Without this expertise, you are making educated guesses. And guessing with business-critical data is risky.</p><h3><strong>Your data is complex or proprietary</strong></h3><p>Internal data is rarely plug-and-play. It is fragmented, shaped by human behavior, and full of edge cases. A Data Scientist can:</p><ul><li><p>Identify biases or data gaps</p></li><li><p>Select the right features and formats</p></li><li><p>Handle imbalance or missing values</p></li><li><p>Engineer variables that reflect actual business logic</p></li></ul><p>This becomes essential when internal systems, roles, and processes do not follow clean structures.</p><h3><strong>You&#8217;re building products for insight, not just automation</strong></h3><p>Many internal AI products are not built to make decisions. They are built to uncover insights. These are discovery or sense-making use cases, and they need Data Scientists to make the results meaningful.</p><p>Examples include:</p><ul><li><p>Customer segmentation models</p></li><li><p>Behavioral clustering (e.g., app usage or sales rep activity)</p></li><li><p>Root cause analysis of KPI trends</p></li><li><p>Pattern or trend detection in network data</p></li><li><p>Journey mapping based on events or actions</p></li></ul><p>Without a Data Scientist, these products remain shallow. 
With one, they generate value across teams.</p><h3><strong>Your product must be explainable and auditable</strong></h3><p>In regulated industries, or anywhere that decisions must be reviewed or audited, transparency is non-negotiable. A Data Scientist ensures:</p><ul><li><p>Traceable logic and clear decision paths</p></li><li><p>Documentation of how predictions are made</p></li><li><p>Use of techniques like SHAP or LIME</p></li><li><p>Monitoring for fairness, accuracy, and drift</p></li></ul><p>This is not just for compliance. It builds confidence.</p><h3><strong>You&#8217;re in a discovery-heavy phase</strong></h3><p>Some products begin not with specifications, but with questions. In this case, a Data Scientist is the person who helps test feasibility and shape hypotheses. They can:</p><ul><li><p>Analyze opportunity areas</p></li><li><p>Simulate outcomes</p></li><li><p>Estimate model performance</p></li><li><p>Clarify whether the use case is worth building</p></li></ul><p>They help turn ambiguity into informed direction.</p><h2><strong>When you don&#8217;t need a Data Scientist on your core team</strong></h2><p>Not every machine learning&#8211;powered product requires a Data Scientist from the start. In some cases, their core skill set &#8212; model development, data exploration, statistical reasoning &#8212; may not be critical to delivering value early on. Here&#8217;s when that&#8217;s likely to be the case.</p><h3><strong>You are using foundation models via API, without custom training</strong></h3><p>If your AI product relies on foundation models (like GPT, Claude, or Gemini) for capabilities such as summarization, semantic search, classification, or generation, and you are not fine-tuning or training on your own data, then a Data Scientist is not immediately necessary. 
What you are doing is orchestration and application, not model innovation.</p><p>You&#8217;ll likely need:</p><ul><li><p>A Prompt Engineer (if this is even a role) to structure interactions</p></li><li><p>An AI Engineer to handle retrieval or context enrichment</p></li><li><p>A Designer to build usable workflows on top of the model</p></li></ul><p>Until you reach a point where you need to analyze performance or fine-tune with internal data, a Data Scientist would have little to contribute.</p><h3><strong>You are using pre-trained models for narrow ML tasks</strong></h3><p>Some internal products embed pre-trained models that perform a very specific ML task &#8212; like image classification, sentiment analysis, or language detection. If these models perform well enough and do not require retraining, you can build valuable products around them without needing a Data Scientist.</p><p>Examples:</p><ul><li><p>Using an email sentiment classifier to route internal tickets</p></li><li><p>Applying an OCR model to extract structured data from documents</p></li><li><p>Leveraging a pre-trained keyword extractor to tag customer interactions</p></li></ul><p>The model is already built. Your work is around productization, not modeling, much like the foundation model case above.</p><h3><strong>You are still validating the problem and don&#8217;t need model development yet</strong></h3><p>If your AI product is still in the early discovery phase, and your main question is whether the problem is real, valuable, and solvable with ML, then you may not need a Data Scientist immediately, provided you can derive the answer without heavy data analysis. What you then need first is:</p><ul><li><p>A clear problem framing</p></li><li><p>An understanding of available data sources</p></li><li><p>A simple prototype to test workflows or model fit</p></li></ul><p>You might work with an AI Engineer or generalist to explore feasibility using basic, non-ML models or prebuilt tools. 
A Data Scientist can come in once the product vision matures.</p><h3><strong>You need model orchestration, not model creation</strong></h3><p>Many internal AI products rely on combining multiple ML components &#8212; retrieval, embedding search, pre-trained classification &#8212; but do not involve building or training new models. The complexity lies in gluing pieces together, not in discovering patterns.</p><p>Common examples:</p><ul><li><p>Retrieval-augmented generation (RAG) for internal knowledge assistants</p></li><li><p>Multi-step workflows using off-the-shelf models</p></li><li><p>Semantic search powered by vector embeddings</p></li></ul><p>These products still use ML, but they are integration-heavy, not data science&#8211;driven.</p><h3><strong>Your evaluation focus is on business metrics, not model performance</strong></h3><p>If your current goal is to test whether the product drives business value or user adoption, rather than tuning model performance, a Data Scientist is not the most urgent role. You may be testing:</p><ul><li><p>Whether users trust the AI assistant</p></li><li><p>Whether recommendations improve outcomes</p></li><li><p>Whether response time or accuracy meets baseline needs</p></li></ul><p>Until model quality becomes the limiting factor, other roles will move the product forward more effectively.</p><h2>What I&#8217;ve Learned From Building Internal AI Products</h2><p>We often debate roles. But the more honest question is: <em>What does the product need to succeed?</em></p><p>The title &#8220;Data Scientist&#8221; may sound generic. But their work is anything but. They are not just model builders. They are pattern finders, uncertainty reducers, and sometimes the only person in the room who understands whether your assumptions are statistically sound. When the product needs that, they are the right person. 
When it doesn&#8217;t, they&#8217;re not.</p><p>The goal isn&#8217;t to staff roles based on trends.</p><p>The goal is to solve the right problem with the right capabilities.</p><h3><strong>Most AI Products Today Don&#8217;t Need a Data Scientist&#8212;And That&#8217;s Not an Insult</strong></h3><p>We are in a wave of AI product launches. Internally. Externally. Everywhere. But most of them share a pattern. They are:</p><ul><li><p>Wrappers around foundation models</p></li><li><p>RAG-based assistants powered by vector search</p></li><li><p>Applications using off-the-shelf models for classification or summarization</p></li></ul><p>They are products built on top of pre-trained intelligence, not new intelligence developed from scratch. Which means they succeed or fail based on:</p><ul><li><p>Prompt design</p></li><li><p>Workflow orchestration</p></li><li><p>User experience</p></li><li><p>Data integration and context</p></li></ul><p>In this context, a Data Scientist is often not the limiting factor. The real blockers are adoption, alignment, or usability. So no&#8212;most AI products today don&#8217;t need a Data Scientist. 
They need strong engineers, great designers, and clear product thinking.</p><p>But that&#8217;s not the end of the story.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_fQH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_fQH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!_fQH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!_fQH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!_fQH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_fQH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png" width="1200" height="1200" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2040500,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/164403207?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_fQH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!_fQH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!_fQH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!_fQH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F231ca992-11ef-4633-806e-4e1c4d1b97b1_1200x1200.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3><strong>The Hidden Strength of the Misunderstood Role</strong></h3><p>For years, Data Scientists were treated as the &#8220;unicorns&#8221; of AI. Everyone wanted one. Few knew what they were actually supposed to do. 
Expectations were sky-high: build the model, explain it, deploy it, make it scalable, make it usable, make it compliant&#8212;and do it alone.</p><p>As a result, many Data Scientists developed much broader skills than their job title suggests.</p><p>You&#8217;ll often find Data Scientists who:</p><ul><li><p>Write production-grade code</p></li><li><p>Design evaluation pipelines</p></li><li><p>Build dashboards and reporting layers</p></li><li><p>Tune prompts and experiment with LLM-based architectures</p></li><li><p>Manage experiments or run early-stage product discovery</p></li></ul><p>So before we decide whether a Data Scientist belongs on the team, we should ask a more nuanced question: <em>Can this person cover part of what we need, even if their title says otherwise?</em></p><p>Not every company has all the roles they need on paper. But sometimes, they already have someone who can fill the gap&#8212;quietly, competently, and creatively.</p><p>We rarely get to staff ideal teams.</p><p>But with the right awareness and flexibility, we can still build the right products.</p><p>JBK &#128330;&#65039;</p><p></p><div><hr></div><p>If you found this useful or want to share your approach to building internal AI teams, let&#8217;s talk. I&#8217;d love to hear what you&#8217;ve tried, what worked, and what didn&#8217;t.</p>]]></content:encoded></item><item><title><![CDATA[#43 - The Curriculum I Wish I Had as an Internal AI Product Manager ]]></title><description><![CDATA[And why I believe we should build it together]]></description><link>https://www.jaserbk.com/p/the-curriculum-i-wish-i-had-as-an</link><guid isPermaLink="false">https://www.jaserbk.com/p/the-curriculum-i-wish-i-had-as-an</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 18 May 2025 07:52:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JwAf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>#beyondAI</strong> </p><p>A year ago, I wrote about why we need a curated learning repository for AI Product Managers. 
Today, I want to show you how we can build it together.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;8a45b41b-586b-456d-b920-8cc680796ae9&quot;,&quot;caption&quot;:&quot;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why We Need a Curated Learning Repository for AI Product Managers&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:169499282,&quot;name&quot;:&quot;JaserBK&quot;,&quot;bio&quot;:&quot;I think, talk, and write about AI Product Management for Enterprises, with a focus on helping aspiring AI Product Managers.\n\nLet&#8217;s master the art and science of AI Product Management together &#128330;&#65039;&#127757;&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3af0ce6-7255-4034-88b9-5a1192f49e57_3059x4589.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-04-25T10:30:57.415Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F026519c6-8324-4d79-ad0e-8a305c215369_1200x1200.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.jaserbk.com/p/why-we-need-a-curated-learning-repository&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:143889272,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:9,&quot;comment_count&quot;:7,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;AI Product Management: A World Beyond 
AI&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ddb7ccd-dfe2-4bc4-b814-c504e372f16f_867x867.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>Over the past year, I&#8217;ve written dozens of articles on internal AI Product Management and read at least twice as many from others doing the same. Topics range from shadow governance to MVP value measurement, from cross-functional buy-in to the traps of over-promising AI capabilities.</p><p>But here's the challenge: <em>all that wisdom is scattered.</em></p><p>Some of it sits in blog posts. <br>Some in LinkedIn threads. <br>Some in decks you&#8217;ll never see.</p><p>What&#8217;s missing is a place that pulls this knowledge together. Not into a theoretical textbook, but into a learning path grounded in practice.</p><p>So I created a structure for it. A curriculum.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JwAf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JwAf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!JwAf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!JwAf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!JwAf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!JwAf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png" width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2051594,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/163836496?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JwAf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!JwAf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!JwAf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!JwAf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9efe592-5f4d-40bd-8ec1-572ec8799872_1200x1200.png 1456w" 
sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h3>The Curriculum I Wish Had Existed</h3><p>A few months ago, I published <strong>A Curriculum for Internal AI Product Management</strong>. It&#8217;s designed like an M.Sc.-level program, but with one big difference: it&#8217;s for internal AI Product Managers. The ones navigating the complexity of building and scaling AI solutions inside large organizations. Not on a greenfield. Not in startups. 
But deep inside orgs with legacy tech, process silos, conflicting goals, and real impact on the line.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;9dba7c78-e610-4452-8e72-e7abf8453c07&quot;,&quot;caption&quot;:&quot;Listen to this AI-generated podcast discussing the curriculum. Maybe you get even more inspired and intrigued to go through the entire article!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;A Curriculum for AI Product Management&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:169499282,&quot;name&quot;:&quot;JaserBK&quot;,&quot;bio&quot;:&quot;I think, talk, and write about AI Product Management for Enterprises, with a focus on helping aspiring AI Product Managers.\n\nLet&#8217;s master the art and science of AI Product Management together &#128330;&#65039;&#127757;&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3af0ce6-7255-4034-88b9-5a1192f49e57_3059x4589.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-10-20T09:42:32.212Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ce1616c-fb6f-411a-8341-eda080ef5810_1200x1200.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.jaserbk.com/p/a-curriculum-for-ai-product-management&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:150467982,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:6,&quot;comment_count&quot;:3,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;AI Product Management: A World Beyond 
AI&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ddb7ccd-dfe2-4bc4-b814-c504e372f16f_867x867.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>The curriculum covers topics like problem discovery in internal teams, AI solution prototyping and rollout, adoption metrics beyond usage, compliance, explainability, governance, and collaboration with data science, legal, and business units.</p><p>It&#8217;s a map, not a manual. And like every good map, it becomes more useful the more people contribute to it.</p><h3>Why AI PMs Should Lead This</h3><p>AI Product Managers are in a unique position to lead this kind of learning effort because they sit at the intersection of multiple disciplines. They work across engineering, data science, business, legal, operations, and compliance, often within the same week, sometimes within the same meeting.</p><p>That vantage point gives them an unusually broad perspective. Unlike specialists who go deep in one field, AI PMs constantly connect dots between teams, technologies, processes, and real-world problems. They translate between mindsets, resolve contradictions, and shape solutions that are not only technically sound but also usable, trustworthy, and aligned with business needs.</p><p>This role forces them to see what&#8217;s missing. It forces them to ask the uncomfortable questions others avoid. It also gives them visibility into what knowledge gaps slow down progress, where alignment breaks down, and what kinds of insights actually move things forward.</p><p>So when it comes to building a curated learning path, one that reflects the realities of shipping AI inside complex organizations, AI PMs are not just participants. 
They&#8217;re the ones best positioned to lead.</p><p>They know what&#8217;s essential, what&#8217;s overhyped, and what truly helps teams deliver impact.</p><h3>How You Can Contribute</h3><p>I&#8217;m now opening up this curriculum to contributions from anyone working in or around internal AI product delivery.</p><p>There are three easy ways to get involved:</p><h4>1. Match your content to a course</h4><p>Each class in the curriculum includes a short description. Based on that, you can send me your own articles, pieces you&#8217;ve read and recommend, talks, tutorials, or even short courses. Just let me know which class your contribution fits best. If it aligns, I&#8217;ll link it directly from the curriculum and credit your work. This way, learners not only follow a structured path but also benefit from real-world perspectives.</p><h4>2. Suggest a new course</h4><p>This curriculum reflects how I see internal AI Product Management, but it&#8217;s not set in stone. If you think there&#8217;s a course missing, whether it&#8217;s a topic you&#8217;ve struggled with, something you wish you had learned earlier, or an area you teach often, let me know. If it&#8217;s helpful for internal AI PMs, I&#8217;m open to adding it.</p><h4>3. Propose an elective</h4><p>The curriculum includes elective modules for areas like new technologies, frameworks, or methods that are still evolving. If you&#8217;ve developed expertise in something like enterprise AI agents, prompt ops, data-centric evaluation, or domain-specific architectures, you can propose a new elective. You can even co-create it with me.</p><h3>How to Contact Me</h3><p>I regularly share my thoughts on <a href="http://www.linkedin.com/in/jaserbk">LinkedIn</a> and on <a href="https://www.jaserbk.com/">Substack</a>; that&#8217;s where you&#8217;ll find my articles, reflections, and frameworks on internal AI Product Management. 
If you want to contribute to the curriculum or share a relevant piece of work, the best way to reach me is <strong>via the comments</strong> under my posts or articles.</p><p>I'm not very active in DMs. Not because I don&#8217;t value the input, but because there&#8217;s simply too much to respond to one-on-one. Everything I&#8217;m building here happens next to my full-time role as a Lead AI Product Manager and Strategist, and as a Co-Founder of the AI Center of Excellence at Vodafone. And beyond work, there&#8217;s also a personal life I care deeply about and try to protect.</p><p>So if you want to send me a suggestion - whether it&#8217;s an article, a new course idea, or a contribution to one of the electives - please <strong>comment under my latest LinkedIn post or Substack article, no matter which one</strong>. That&#8217;s where I&#8217;m most likely to see it and respond. Even though notifications are off, I check the comments regularly because they&#8217;re part of my publishing routine.</p><p>I know it&#8217;s not the most convenient setup. But it helps me stay focused - both in my work and outside of it.</p><p>Maybe one day I&#8217;ll find a better way. Let&#8217;s see.</p><h3><strong>PDF Download: A Visual Overview of the Curriculum</strong></h3><p>To make things easier to explore and share, I&#8217;ve also created a <strong>PDF version of the curriculum</strong>. It includes:</p><ul><li><p>An overview of all semesters and course modules</p></li><li><p>Core and elective topics</p></li><li><p>A clear structure that reflects the internal AI PM journey</p></li></ul><p>I&#8217;ll be sharing it on LinkedIn soon, so feel free to download, share, or reference it as you explore the curriculum or think about contributing.</p><h3>Help Spread the Word</h3><p>If you believe in the value of this curriculum and the idea of a curated learning path for internal AI PMs, here&#8217;s how you can help it grow:</p><p><strong>1. 
Share the article.</strong> Share it with your network, especially with people working in AI, product, data, or digital transformation roles inside large organizations.</p><p><strong>2. Tag someone.</strong> Know someone who&#8217;s written something brilliant? Tag them. One mention can bring in a whole new module.</p><p><strong>3. Leave a comment.</strong> Even a short note saying &#8220;this is needed&#8221; helps others take notice. And if you disagree with something or have ideas for improvement, even better.</p><p><strong>4. Add your voice.</strong> Send me your article, video, course, or story and tell me which class it fits. Or propose a new one. I&#8217;ll take care of linking it where it belongs.</p><p>The more we co-create this, the more useful it becomes. Not just for learners, but for all of us trying to make AI work where it matters most.</p><p>JBK &#128330;&#65039;</p><p></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#42 - How to Iterate an AI Product — From 0.1 to 1.0 and Beyond]]></title><description><![CDATA[Value Step Iteration &#8212; a method for guiding AI product iteration by supporting high-effort steps in expert workflows, one step at a time.]]></description><link>https://www.jaserbk.com/p/how-to-iterate-an-ai-product-from</link><guid isPermaLink="false">https://www.jaserbk.com/p/how-to-iterate-an-ai-product-from</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 11 May 2025 12:05:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qmr-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png" length="0" type="image/png"/><content:encoded><![CDATA[<p></p><div class="pullquote"><p>Building AI products isn&#8217;t hard anymore. <br>That is, if we confuse <em>AI products</em> with <em>AI systems</em>.</p></div><p><strong>#beyondAI</strong></p><p>Not long ago, the primary difficulty in AI was the model and the entire system around it. Designing, training, and deploying machine learning models at scale required specialized infrastructure and niche expertise. But with the rise of GenAI, foundation models not only introduced new capabilities; they also shifted much of that difficulty away from internal product teams. 
Today, the most advanced language, vision, and reasoning capabilities can be accessed through a simple API, maintained by a handful of vendors who have taken on the challenge of scaling intelligence as a service.</p><p>So yes, one could argue that building something with AI has never been easier. But building something with AI that truly deserves to be called a product is still as hard as ever. A real product is something that earns repeated usage, delivers sustainable value, and justifies long-term investment. And none of these vendors handle the parts that matter most to your company. They don't uncover real user pain points. They don't manage internal politics. And they don't ensure that AI fits into business-critical workflows where trust must be earned, not assumed.</p><p>A real product is not a demo, not a prototype, and not a PoC. It is something that makes a measurable difference. It is something people use regularly because it is better than the alternative. It improves outcomes in a repeatable way. And over time, it either generates revenue, reduces cost, or creates undeniable internal efficiency that changes how the organization operates. That bar hasn&#8217;t changed. 
And neither has the difficulty of reaching it.</p><blockquote><p>We may no longer need to build an AI system ourselves. But we still need to build the AI product. And that part is still hard to get right.</p></blockquote><p><em>That is why knowing how to iterate carefully, intentionally, and always anchored in value is not just helpful for an AI Product Manager. It is foundational.</em></p><h3><strong>This Article Is About Internal AI Products</strong></h3><p>The focus here is on <em>internal AI products</em>. These are the kinds of tools that live inside enterprises, embedded in business workflows, and designed to support teams like <em>sales</em>, <em>HR</em>, <em>customer operations</em>, <em>finance</em>, or <em>IT</em>. In these environments, success is rarely measured by user growth or market share. It&#8217;s measured by adoption, time saved, process compliance, or impact on key business outcomes.</p><p>While many of the principles in this article, especially those around <em>value-based iteration</em>, also apply to external AI products, the nature of iteration is different when the users are customers. External products are shaped by market dynamics, monetization strategies, and competitive positioning. <strong>Internal AI products, on the other hand, grow within an existing system</strong>. You&#8217;re not launching into a blank canvas. 
You&#8217;re building into legacy workflows, informal shortcuts, and organizational expectations that were never designed with your product in mind.</p><p>That&#8217;s where this approach begins: with the reality of building AI products in the messy middle of real businesses.</p><div><hr></div><h5>Related Article</h5><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;fc5ef5c4-761c-4cb3-8114-777e960cd2b0&quot;,&quot;caption&quot;:&quot;#beyondAI&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;How to Prioritize as an AI Product Manager&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:169499282,&quot;name&quot;:&quot;JaserBK&quot;,&quot;bio&quot;:&quot;I think, talk, and write about AI Product Management for Enterprises, with a focus on helping aspiring AI Product Managers.\n\nLet&#8217;s master the art and science of AI Product Management together &#128330;&#65039;&#127757;&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3af0ce6-7255-4034-88b9-5a1192f49e57_3059x4589.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-09-08T10:16:38.927Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88466fa7-66c2-461d-a80b-4c27cbaae76f_1200x1200.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.jaserbk.com/p/how-to-prioritize-as-an-ai-product&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:148637252,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;AI Product Management: A World Beyond 
AI&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ddb7ccd-dfe2-4bc4-b814-c504e372f16f_867x867.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><h2><strong>How Internal AI Products Grow: The Value Step Iteration Method</strong></h2><p>This approach isn&#8217;t an entirely new idea. It builds on established product thinking, using the same principles you&#8217;ll find in <em>Lean Startup</em>, <em>Outcome-Driven Innovation</em>, or <em>Continuous Discovery</em>. What makes it worth calling out here is that it&#8217;s <strong>essential</strong> in AI product development, especially for internal use.</p><p>Internal products operate in highly entangled environments. The product doesn&#8217;t enter a greenfield. It enters a system that&#8217;s already full. It has to coexist with legacy systems, awkward handoffs, informal workarounds, and teams who already have a way of getting things done.</p><p>Even if that way is inefficient, it&#8217;s familiar. <br>And that familiarity has power.</p><p>Growing into that environment means the product must start by fitting into a small, real task. It needs to be narrow enough that it doesn&#8217;t disrupt the system, but valuable enough that people notice the improvement. Once it&#8217;s proven there, it can expand. But not before. Internal AI products don&#8217;t earn adoption through excitement or novelty. They earn it through consistent, grounded usefulness. That means every part of the product must make sense in the context it&#8217;s entering.</p><p><strong>That&#8217;s why each feature, each iteration, and each technical decision must be scoped as a value hypothesis. </strong></p><p>This changes how you prioritize, how you scope, and how you release. 
</p><p>You don&#8217;t build more unless the last iteration has been adopted. <br>You don&#8217;t expand scope unless what you&#8217;ve already built is solving a real problem. <br>You don&#8217;t solve downstream tasks if upstream usage is still low. </p><p>This doesn&#8217;t slow you down. It keeps your effort aligned with reality.</p><p>So while this approach is not new, and while its principles are shared across other product disciplines, what makes it non-negotiable in internal AI product development is the combination of complexity, ambiguity, and proximity to the user. </p><p>You&#8217;re not building for abstract personas. <br>You&#8217;re building for colleagues. <br>And they&#8217;ll only adopt your product if it delivers clear, immediate, and lasting value. One iteration at a time.</p><h3><strong>The Real Job of Internal AI Products: Workflow Support</strong></h3><p>Internal AI products &#8212; especially those powered by generative AI &#8212; are most effective when they support the people closest to the business problem: <strong>subject matter experts</strong>. These are the <em>analysts</em>, <em>controllers</em>, <em>legal reviewers</em>, <em>strategists</em>, and other domain professionals who carry deep institutional knowledge and apply it through structured, repeatable work.</p><p>These experts are not waiting for full automation. They are looking for support tools that reduce manual effort, eliminate routine steps, and help them move faster and more confidently through their tasks. GenAI can do exactly that. Not by replacing the expert, but by augmenting the steps where friction, repetition, or low-leverage effort slow things down.</p><p>Every task a subject matter expert performs follows a workflow. Some steps are linear. Others loop or require judgment. But each step, regardless of size or visibility, <strong>takes time</strong>. One might take thirty seconds, another half a day. One might be mentally heavy, another just tedious. 
The key is not to obsess over which step is more &#8220;valuable&#8221;, but over which is the most time-consuming. Because from the expert&#8217;s point of view, the value lies in completing the <strong>entire</strong> workflow so they can produce the deliverable &#8212; a report, a contract, a presentation, a recommendation.</p><p>That&#8217;s why the priority isn&#8217;t evaluating the strategic value of each individual step. It&#8217;s identifying where the most effort accumulates. Because if we can reduce the time or complexity of just one high-effort step, we accelerate the entire task. That creates capacity. It frees up expert time. It opens the door to faster delivery. And faster delivery often has very real financial consequences &#8212; shorter billing cycles, earlier client handoffs, or quicker internal decision-making.</p><p>Each step in a workflow, then, has an indirect but very real impact on delivery &#8212; and therefore on business value.</p><p>This is exactly where <strong>Value Step Iteration</strong> becomes essential. The goal is to solve one real step in the value chain of the subject matter expert (SME). Once that&#8217;s done &#8212; once it&#8217;s working, adopted, and useful &#8212; then you move to the next. Each supported step moves the system forward without overwhelming users or overcommitting your team.</p><p><strong>Value Step Iteration</strong> is a way of scoping AI product development by focusing on individual workflow steps &#8212; not features, not UI screens, not model capabilities. It asks one question at a time:</p><blockquote><p><em>Which step in the expert&#8217;s workflow consumes the most time or effort, and can AI meaningfully reduce it?</em></p></blockquote><p>And this is exactly why the <strong>Value Step Method</strong> is so powerful in internal AI product work. It treats every iteration not as a technical milestone, but as a test of whether one step in a workflow can be supported in a way that moves the whole system forward. 
You don&#8217;t need to automate the full task. You need to support one step, clearly, confidently, and usefully. And when that&#8217;s done, you earn the right to move on.</p><h3><strong>How Versioning Supports the Value Step Method</strong></h3><p>Once we understand that each AI product iteration should support a meaningful step in an expert&#8217;s workflow, the next challenge is structuring how we build &#8212; and how we know when we&#8217;re ready to move forward.</p><p>This is where the <strong>Value Step Method</strong> benefits from a clear versioning model. It gives the team &#8212; and the organization &#8212; a shared language to describe progress. Not just in terms of functionality shipped, but in terms of how much trust has been earned, how deeply the product is used, and how reliably it supports the delivery of actual work.</p><p>We use a semantic versioning-like structure here not only to track internal releases, but to represent <strong>product maturity</strong> through the lens of value creation and adoption. Each version tells a story about how far the AI product has grown into its environment &#8212; and how confidently it supports a step that matters.</p><p>Here&#8217;s how the progression unfolds within the Value Step Method:</p><ul><li><p><strong>0.1 &#8211; 0.3:</strong> Early learning stages. You're validating that a problem exists in the workflow and that AI could reasonably support it. The system may work in parts, but lacks stability. Users are curious, but not yet relying on it. This stage is about listening, testing, and refining your understanding of what to build.</p></li><li><p><strong>0.4 &#8211; 0.6:</strong> A focused, functional slice starts to emerge. You&#8217;ve solved one specific step well enough that some users begin to replace manual effort. The output may still need review, but the time savings or flow improvement is real. 
This is where <strong>User Acceptance Testing (UAT)</strong> becomes essential, not as a formality, but as evidence that value is landing.</p></li><li><p><strong>0.7 &#8211; 0.9:</strong> The product handles one full task end-to-end with minimal oversight. It has earned trust. The expert begins to <strong>rely</strong> on it, not just experiment with it. Usage is self-sustaining. Feedback shifts from &#8220;is this useful?&#8221; to &#8220;can this be expanded?&#8221; Now the assistant is no longer a pilot &#8212; it&#8217;s becoming part of the real workflow.</p></li><li><p><strong>1.0:</strong> The product is embedded. It operates independently, consistently, and meaningfully supports delivery. It&#8217;s trusted. The workflow flows better with it than without it. If your team stepped away, the users wouldn&#8217;t &#8212; because the tool is now part of how work gets done.</p></li></ul><p>This structure prevents premature scaling and anchors progress to real, observed outcomes. It slows down hype and speeds up clarity. 
And it keeps everyone aligned on what matters: <strong>earning the right to take the next step</strong>, one clear piece of user value at a time.</p><p>That&#8217;s the essence of the Value Step Method.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qmr-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qmr-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!qmr-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!qmr-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!qmr-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qmr-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png" width="1200" height="1200" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:343466,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/163314630?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qmr-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!qmr-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!qmr-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!qmr-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc956f167-cc9d-427c-b7e3-ea4d1ab470a6_1200x1200.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h3><strong>Applying the Value Step Method</strong></h3><p>Let&#8217;s make the Value Step Method tangible. <br>Imagine you&#8217;re building a <strong>Sales Enablement Assistant</strong> to support internal sales teams in preparing for meetings. The aim isn&#8217;t to automate everything. The aim is to reduce effort where it accumulates &#8212; one workflow step at a time.</p><p>Sales reps might spend 30 to 60 minutes before each client meeting collecting scattered information. They pull CRM history, search email threads, check for open tickets, review notes, and try to stitch it all together into something actionable. It&#8217;s inefficient, inconsistent, and error-prone. But every step is required to reach the deliverable &#8212; a well-prepared, high-quality client conversation. 
</p><p>This is exactly where the Value Step Method applies. You&#8217;re not trying to replace the rep&#8217;s expertise or redesign the entire process. You&#8217;re trying to identify which step consumes the most effort, and reduce it meaningfully, so the whole task moves faster with less friction.</p><div><hr></div><h3><strong>Phase 1: Deconstruct the Workflow</strong></h3><p>You begin by mapping the full set of actions the SME performs to deliver the output. For a sales rep preparing a meeting, the steps might include:</p><ul><li><p>Pulling CRM data</p></li><li><p>Surfacing past communications</p></li><li><p>Highlighting active deals or support escalations</p></li><li><p>Generating talking points or reminders</p></li><li><p>Packaging all of it into a meeting-ready brief</p></li></ul><p>Each step takes time. Some are tedious, some mentally demanding, some dependent on multiple systems. The task can&#8217;t be delivered without all of them, but not all of them are equally painful. The goal is to understand where that effort stacks up, and where AI support would be immediately useful and low-risk to adopt.</p><div><hr></div><h3><strong>Phase 2: Prioritize by Effort &#8212; Not Feature Appeal</strong></h3><p>With the workflow mapped, you now apply Value Step Prioritization. 
That means choosing where to start <strong>based on effort concentration and feasibility</strong>, not based on what&#8217;s technically impressive or what stakeholders ask for first.</p><p>For example:</p><ul><li><p><strong>Step 1 - Summarize CRM activity</strong> &#8212; Structured, stable, and quick to implement.</p></li><li><p><strong>Step 2 - Add past interactions</strong> &#8212; Higher complexity, but high value for user context.</p></li><li><p><strong>Step 3 - Highlight support issues</strong> &#8212; Relevant data exists, but signals must be interpreted.</p></li><li><p><strong>Step 4 - Format into a briefing</strong> &#8212; Only makes sense once the content is reliable.</p></li><li><p><strong>Step 5 - Suggest talking points</strong> &#8212; High ambition. Trust must already be in place.</p></li></ul><p>This way, you're not only identifying where support is needed in the workflow &#8212; you're also generating a <strong>product roadmap</strong> that reflects real user effort, not internal assumptions. You begin where confidence is high, time savings are visible, and the expert knows the step well enough to evaluate the quality of support. It&#8217;s a roadmap that&#8217;s earned, not imagined.</p><div><hr></div><h3><strong>Phase 3: Iterating an AI Assistant from 0.1 to 1.0 &#8212; One Step at a Time</strong></h3><p>Now you build &#8212; but in <strong>small, scoped releases</strong>, each targeting one meaningful step in the workflow. Every release is a test: <em>Can this specific step be reliably supported in a way that reduces effort and builds trust with the SME?</em></p><p>In this case, you might decide to begin with <strong>Step 1</strong> and <strong>Step 2</strong>, since they both focus on surfacing past interactions &#8212; a critical part of meeting prep &#8212; even though they draw from different source systems. 
The value is clear, the data exists, and users already know what &#8220;good&#8221; looks like.</p><p>So the first version of the Sales Enablement Assistant focuses <strong>only</strong> on these two capabilities. The team iterates from <strong>0.1 to 1.0</strong> solely around making this functionality useful, stable, and adopted. No additional features. No unnecessary expansion. Just making sure that this slice of workflow is genuinely improved and trusted.</p><p>And because the scope is small and focused, this path can realistically lead to a strong, adopted <strong>MVP 1.0</strong> within <strong>1 to 3 months</strong> &#8212; one that solves a real problem, earns a place in the workflow, and gives you a solid foundation for future iterations.</p><p>Here&#8217;s what iteration looks like:</p><div><hr></div><ul><li><p><strong>Version 0.1 &#8212; First Exposure, First AI Touchpoint</strong></p><ul><li><p>The assistant is introduced to a small group of users</p></li><li><p>It can fetch and summarize CRM data from one system</p></li><li><p>The AI generates briefs with basic metadata and deal context</p></li></ul></li></ul><p>&#127919; <strong>Goal:</strong> Validate that the AI adds value immediately, even in a narrow scope<br>&#128101; <strong>UAT:</strong> Do users trust the summaries? Does it save them any time?</p><div><hr></div><ul><li><p><strong>Version 0.2 &#8211; 0.3 &#8212; Expand Input Sources, Maintain Clarity</strong></p><ul><li><p>Add a second data source: e.g. 
past email conversations</p></li><li><p>Merge AI-generated summaries from both systems into one preview</p></li><li><p>Add light metadata (date, contact, topic) so users can verify without switching tools</p></li></ul></li></ul><p>&#127919; <strong>Goal:</strong> See if users start using it unprompted<br>&#128101; <strong>UAT:</strong> Is the assistant&#8217;s context accurate enough to reduce manual lookup?</p><div><hr></div><ul><li><p><strong>Version 0.4 &#8211; 0.6 &#8212; Structure, Feedback, and Flow</strong></p><ul><li><p>Introduce simple formatting into the briefing: sections, headings, and collapsed views</p></li><li><p>Add a &#8220;Was this helpful?&#8221; feedback prompt for each section</p></li><li><p>Begin timing usage (e.g., do users open it before meetings?)</p></li></ul></li></ul><p>&#127919; <strong>Goal:</strong> Make the assistant&#8217;s presence feel structured, consistent, and safe<br>&#128101; <strong>UAT:</strong> Are users adjusting their workflows to include the assistant?</p><div><hr></div><ul><li><p><strong>Version 0.7 &#8211; 0.9 &#8212; Reliable Prep, Reduced Manual Effort</strong></p><ul><li><p>Auto-deliver the briefing ahead of meetings via Slack or email</p></li><li><p>AI adjusts the summary slightly depending on meeting type</p></li><li><p>Users no longer open multiple systems before calls</p></li></ul></li></ul><p>&#127919; <strong>Goal:</strong> Shift from usage to reliance<br>&#128101; <strong>UAT:</strong> Does the assistant fully replace previous prep steps for most users?</p><div><hr></div><ul><li><p><strong>&#9989; MVP 1.0 &#8212; Fully Embedded Assistant for Prep Tasks</strong></p><ul><li><p>The AI Assistant is now part of the meeting workflow</p></li><li><p>It surfaces relevant context with minimal user input</p></li><li><p>No major gaps remain in the selected steps (Step 1 &amp; Step 2)</p></li><li><p><strong>Success Metric:</strong> Users would notice if the assistant disappeared</p></li><li><p><strong>Optional:</strong> 
Product team can declare MVP 1.0 <strong>before</strong> reaching 0.9 if adoption and trust are strong</p></li></ul></li></ul><div><hr></div><pre><code>&#129504; <strong>Note: You Don&#8217;t Have to &#8220;Complete&#8221; Every Version
</strong>The path from 0.1 to 1.0 isn&#8217;t a checklist &#8212; it&#8217;s a maturity scale. If you reach a point by version 0.4 or 0.6 where the assistant is clearly helping, being used, and considered trustworthy, you can <strong>declare MVP 1.0</strong> and focus on scaling, onboarding more users, or expanding to the next Value Step.

The key is not building more, but proving usefulness earlier.</code></pre><div><hr></div><h3><strong>Beyond 1.0: Iteration Doesn&#8217;t Stop &#8212; It Evolves</strong></h3><p>Reaching MVP 1.0 means the assistant reliably supports one or more high-effort workflow steps. It&#8217;s used. It&#8217;s trusted. And it has earned its place in the way work gets done. But with the <strong>Value Step Method</strong>, 1.0 isn&#8217;t the finish line &#8212; it&#8217;s just the moment you know the product is alive.</p><p>Beyond this point, iteration becomes more strategic. The goal is no longer to prove usefulness, but to <strong>extend value responsibly</strong> &#8212; without breaking trust, introducing unnecessary friction, or overloading the assistant with capabilities it doesn&#8217;t need.</p><p>Every next step should still follow the same principle: support one real workflow step at a time, and earn the right to build further.</p><div><hr></div><p><strong>Version 1.1 &#8211; 1.3: Deepen the Fit</strong></p><ul><li><p>Improve system performance, AI output clarity, and UI polish</p></li><li><p>Add fallback logic for missing data or integration hiccups</p></li><li><p>Refactor internal flows to reduce manual dependencies</p></li><li><p>Add light internal documentation to help support scale</p></li></ul><p>&#127919; <strong>Goal:</strong> Make the assistant more resilient, faster to use, and easier to maintain &#8212; without changing its core purpose</p><div><hr></div><p><strong>Version 1.4 &#8211; 1.6: Enrich the Context</strong></p><ul><li><p>Introduce additional signals &#8212; e.g. product usage, customer satisfaction, escalation status</p></li><li><p>Tune the AI briefings to adjust based on context (e.g. 
new client vs long-time customer)</p></li><li><p>Let SMEs contribute improvements to prompt templates or adjust summary preferences</p></li></ul><p>&#127919; <strong>Goal:</strong> Increase the assistant&#8217;s relevance and accuracy without increasing user effort</p><div><hr></div><p><strong>Version 1.7 &#8211; 1.9: Expand the Workflow</strong></p><ul><li><p>Support additional teams such as customer success, technical account managers, or partner sales</p></li><li><p>Adapt briefing templates to their language, tasks, and workflows</p></li><li><p>Start identifying new high-effort steps (e.g. follow-up emails, meeting documentation)</p></li></ul><p>&#127919; <strong>Goal:</strong> Extend the assistant to adjacent use cases that mirror the original workflow &#8212; not reinvent it</p><div><hr></div><h4><strong>&#9989; Version 2.0+: Generalize the Pattern (Only If It Makes Sense)</strong></h4><ul><li><p>Codify the assistant architecture as a reusable internal product pattern</p></li><li><p>Create a briefing generation framework that can be adapted for other domains (e.g. onboarding, procurement, incident response)</p></li><li><p>Offer a lightweight assistant kit to teams facing similar workflow pain</p></li></ul><p>&#127919; <strong>Goal:</strong> Turn what worked in one context into a pattern &#8212; but only if the demand and maturity support it</p><div><hr></div><h4><strong>Each Step Beyond 1.0 Still Follows the Same Rule: Prove It</strong></h4><p>Post-1.0, it&#8217;s tempting to grow faster, add features, or &#8220;productize&#8221; too soon. But the Value Step Method keeps you grounded. You don&#8217;t add unless the last step is adopted. You don&#8217;t expand unless the next workflow is real. 
And you don&#8217;t generalize until the use case has been proven in more than one place.</p><p>More is only better if it&#8217;s <strong>earned</strong>.</p><h3><strong>Final Thoughts: Adoption Is the Outcome That Matters</strong></h3><p>The hardest part about building internal AI products isn&#8217;t the model, or the integrations, or even the workflow mapping. It&#8217;s building something that people actually choose to use.</p><p>And not just once, out of curiosity &#8212; but every day, because it quietly makes their work easier.</p><p>That&#8217;s what adoption really means. It&#8217;s not feature usage. It&#8217;s not click-through rates. It&#8217;s when a subject matter expert says, &#8220;I rely on this now.&#8221; And in internal environments, that kind of adoption has to be earned, one step at a time.</p><p>The <strong>Value Step Method</strong> helps you do exactly that. It grounds your AI product decisions in reality &#8212; in the shape of real workflows, the weight of actual effort, and the friction that professionals already feel in their day-to-day work. It gives you a way to focus. A way to say no to unnecessary features. A way to align stakeholders around what matters right now, not what might matter in six months if everything goes perfectly.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>And just as importantly, it gives you a way to measure real progress. Not by how much you&#8217;ve built, but by how much value you&#8217;ve delivered &#8212; and how deeply that value has embedded itself into the organization.</p><p>So if you're building internal AI products, the job isn&#8217;t to chase completeness. It&#8217;s to find that one step, in that one workflow, where AI can reduce friction in a way that builds trust. </p><p>Then you do it again. <br>And again. <br>And you stop when the product is truly part of how work gets done.</p><p></p><p><strong>JBK &#128330;&#65039;</strong></p><p></p><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[#41 - Before the AI Product, There’s Belief]]></title><description><![CDATA[A letter to my younger self about internal AI Product Management]]></description><link>https://www.jaserbk.com/p/before-the-ai-product-theres-belief</link><guid isPermaLink="false">https://www.jaserbk.com/p/before-the-ai-product-theres-belief</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sat, 03 May 2025 09:43:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!A2W_!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ddb7ccd-dfe2-4bc4-b814-c504e372f16f_867x867.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Little Jaser,</p><p>It&#8217;s been a while since I last wrote to you. I wasn&#8217;t planning to write again so soon. But I believe it&#8217;s time. 
The world you&#8217;re walking into still isn&#8217;t quite ready for your role. And yet&#8212;somehow&#8212;it needs you more than it thinks it does.</p><p>This letter isn&#8217;t about certainty. It won&#8217;t offer you a playbook. But it will give you perspective, and maybe a bit of peace. These are not instructions. They&#8217;re observations, shaped over years of quiet work in spaces that reward clarity, trust, and persistence more than they do noise or speed. I&#8217;m writing to help you walk in with your eyes open, your spine straight, and your convictions intact.</p><p>You&#8217;re stepping into a role that will challenge how you think about influence, impact, and even the definition of a product. It won&#8217;t look like what you studied. It won&#8217;t behave like the case studies you admired. And it won&#8217;t reward you with the kind of visibility people usually associate with success. What it will give you, if you let it, is something much deeper. A chance to shape systems from the inside out. Slowly. Quietly. 
Meaningfully.</p><p>But only if you learn to see the real game being played.</p><h3><strong>Belief Comes First</strong></h3><p>You&#8217;re not here to manage a product&#8212;not in the way most people imagine it. You&#8217;re here to manage belief. Not belief in the technology, but belief in the problem, in the opportunity, in the idea that something could work better than it does today.</p><p>Your first real delivery won&#8217;t be a working prototype. It&#8217;ll be a conversation that shifts someone&#8217;s thinking. A slide that reframes a loose ambition into something concrete. A sentence that helps a team see their situation differently.</p><p>You&#8217;ll be building trust before you build anything else. And trust is a product of its own&#8212;it&#8217;s slow to earn, impossible to fake, and constantly under review.</p><p>So yes, you may not be managing a product on day one. But you are creating the conditions for one to exist. And that&#8217;s the deeper truth: belief is what precedes the product. It&#8217;s the foundation.</p><p>Eventually, yes&#8212;there will be a product. Something visible. Usable. Valuable. But it will only come into being because people believed enough to invest. To participate. To take a risk.</p><p>That&#8217;s what you manage first.</p><p>And ironically, it&#8217;s the most product-like thing you&#8217;ll ever handle&#8212;because if belief doesn&#8217;t get adopted, nothing else ever will.</p><h3><strong>Direction Over Speed</strong></h3><p>There will be pressure to ship fast. To stay visible. To justify your presence with features, demos, and deadlines. And while it&#8217;s true that real success in this role is about direction&#8212;not just velocity&#8212;that doesn&#8217;t mean speed doesn&#8217;t matter. It does. But you need to understand what kind of speed actually moves things forward.</p><p>Speed, in your world, isn&#8217;t about rushing. It&#8217;s about rhythm. 
It&#8217;s about knowing when to show something early&#8212;not to prove that it&#8217;s finished, but to prove that you and your team can be trusted. A fast prototype is rarely about the solution itself. It&#8217;s about keeping belief alive. It&#8217;s about showing that you understood the problem well enough to reflect it back, even in rough form.</p><p>A well-timed prototype builds credibility. It shows momentum. And when done right, it serves two purposes at once: it validates that you heard the problem correctly, and it earns you the permission to keep going.</p><p>So don&#8217;t fall into the trap of saying &#8220;we&#8217;re still aligning&#8221; for too long. Find small ways to materialize progress. Give people something to react to. Use prototypes not just to test ideas, but to test alignment.</p><p>You&#8217;ll spend more time aligning than building. That&#8217;s true. You&#8217;ll negotiate expectations more than APIs. You&#8217;ll be asked to simplify without oversimplifying. And yes, it will often feel slow.</p><p>But don&#8217;t confuse slowness with stagnation. And don&#8217;t assume that speed means shallowness. Done right, speed builds trust. Momentum keeps attention. And both will help you earn the space to do the deeper, more lasting work.</p><p>That&#8217;s your value. Not speed for its own sake&#8212;but speed with purpose. Prototypes with intention. Direction with momentum.</p><h3><strong>Navigating the Tension</strong></h3><p>You&#8217;ll live in a space no one quite mapped out for you. A space between the promise of AI and the weight of everything that came before it, the legacy. Between what&#8217;s technically achievable on paper and what&#8217;s actually permitted within the realities of enterprise policy, procurement, compliance, and regulation.</p><p>You&#8217;ll see that the pace of innovation&#8212;what you read about in articles and GitHub threads&#8212;doesn&#8217;t match the pace of production in your organization. 
Even when the solution is clear, you&#8217;ll hit walls: fragmented ownership, risk-averse sponsors, outdated infrastructure, unclear decision rights.</p><p>And at first, that tension will frustrate you. It will feel like a series of blockades, like a test of your patience. But over time, you&#8217;ll learn that this friction isn&#8217;t an accident. It&#8217;s the operating environment. And it&#8217;s where your role actually begins.</p><p>Because this tension is more than bureaucracy&#8212;it&#8217;s a map. It reveals where things stall, where handoffs fail, where no one ever quite took responsibility. It shows you which teams don&#8217;t talk to each other. Which tools don&#8217;t integrate. Which incentives are misaligned.</p><p>And all of that? That&#8217;s where you&#8217;re needed most.</p><p>AI in the enterprise is not just about the model. It&#8217;s about the path the model has to travel&#8212;from idea to integration to actual use. That path crosses technical systems, business processes, cultural resistance, and institutional memory. Your job is not to bulldoze through the tension. It&#8217;s to navigate it. To make it visible. To stitch together a way forward.</p><p>Sometimes that means asking the uncomfortable question. Sometimes it means slowing down just enough to bring the right people into the room. Sometimes it means offering a version one that works within constraints&#8212;so you earn the right to unlock version two.</p><p>You&#8217;re not here to erase the tension. You&#8217;re here to read it like a roadmap. Because that&#8217;s where the work is. That&#8217;s where the opportunity hides. And that&#8217;s where you&#8217;ll build real leverage&#8212;not just in the product, but in the system around it.</p><p><strong>Define the Problem First</strong></p><p>One of the hardest parts of your job won&#8217;t be building the solution. 
It will be getting people to agree on what the problem actually is.</p><p>You&#8217;ll be handed symptoms disguised as requirements. Neatly packaged requests that sound urgent but fall apart the moment you ask, &#8220;What happens if we do nothing?&#8221; You&#8217;ll see wishlists passed off as strategy decks. You&#8217;ll hear phrases like &#8220;We need an AI for that,&#8221; without clarity on what &#8220;that&#8221; even is.</p><p>And you won&#8217;t be handed a clean brief. Don&#8217;t expect one. In internal environments, real pain often lives in the handoffs, in the grey zones between departments, in the quiet workarounds people invent when processes no longer serve them. These frictions rarely make it into Jira tickets. They live in side comments. In hallway chats. In the Slack threads that keep getting reopened.</p><p>Your job is to sit with that ambiguity longer than most people are willing to. Not to panic. Not to solve it too quickly. But to stay in the discomfort long enough to understand what&#8217;s underneath.</p><p>Because what&#8217;s real is usually not loud. It rarely announces itself in numbers. 
It shows up in context&#8212;how someone describes their day, what they pause before saying, how three stakeholders give three versions of the same process.</p><p>That&#8217;s where your work really begins.</p><p>Because once you name the thing that no one could quite articulate&#8212;but everyone silently felt&#8212;you shift from being someone who &#8220;does AI&#8221; to someone who can actually fix things.</p><p>And that trust, more than the tech, is what earns you the right to build.</p><p>So yes&#8212;stay in the ambiguity.</p><p>Sit with it.</p><p>Map it.</p><p>Respect it.</p><p>But also&#8212;don&#8217;t get stuck there.</p><p>And I know I told you: build the prototype as early as possible.</p><p>And I still mean it.</p><p>Because sometimes the fastest way to test whether you&#8217;ve understood the problem is to show someone a rough version of the solution. A sketch. A simulation. A flow. Something they can point to and say, &#8220;Yes, but not like that.&#8221;</p><p>Do both. Stay in the problem long enough to see it clearly.</p><p>And then build something&#8212;anything&#8212;that lets others see what you&#8217;ve begun to understand.</p><p>That&#8217;s how momentum begins.</p><p>That&#8217;s how alignment forms.</p><p>That&#8217;s how trust grows.</p><h3><strong>Build What Actually Works</strong></h3><p>Your users won&#8217;t act like startup users. They&#8217;re not early adopters looking for the next big thing. They&#8217;re not customers to win over with feature launches or branding campaigns.</p><p>They are survivors of broken systems. They&#8217;ve seen tools arrive with fanfare and leave without a trace. They&#8217;ve onboarded to platforms that made their lives harder. They&#8217;ve wasted time clicking through interfaces that solved nothing. And quietly, they&#8217;ve stopped expecting that the next solution will be any different.</p><p>That&#8217;s the emotional landscape you&#8217;re walking into. 
You&#8217;re not just up against complexity&#8212;you&#8217;re up against disappointment.</p><p>So don&#8217;t try to impress them. Don&#8217;t sell them on the latest AI model or show off a slick UI just to get their attention. What they really want is something that finally works. Something that makes their job easier without asking them to rethink everything. Something that fits into the way they already get things done&#8212;only smoother, simpler, quieter.</p><p>And if you can give them that, you&#8217;ll earn something more valuable than praise. You&#8217;ll earn trust.</p><p>Trust that this solution will stick around. Trust that it won&#8217;t make their day harder. Trust that you understand how their world really operates.</p><p>That&#8217;s the bar.</p><p>Not innovation for innovation&#8217;s sake. Not flashy features that no one asked for.</p><p>Build the thing that makes the workflow disappear.</p><p>If you can do that, you&#8217;ve already won.</p><h3><strong>Adoption Beats Accuracy</strong></h3><p>You&#8217;ll build brilliant models no one uses.</p><p>You&#8217;ll run evaluations that hit every metric&#8212;precision, recall, F1 score&#8212;all green. You&#8217;ll have dashboards and charts that prove, objectively, that the model is right. And still, no one will use it.</p><p>Then one day, you&#8217;ll build something small. A quick automation. A simple tool that answers just one question. It won&#8217;t feel like much at first. But people will love it. They&#8217;ll start relying on it. You&#8217;ll hear things like, &#8220;I just use that now,&#8221; or &#8220;This saves me two hours every time.&#8221;</p><p>And that&#8217;s when it will click.</p><p>Accuracy alone doesn&#8217;t create value. Adoption does.</p><p>The model that gets used will always outperform the one that gets ignored&#8212;no matter how brilliant the math behind it is. Because in enterprise environments, the challenge is rarely technical. It&#8217;s behavioral. 
It&#8217;s emotional. It&#8217;s organizational.</p><p>People don&#8217;t adopt a tool just because it&#8217;s accurate.</p><p>They adopt it because it feels safe. Because it fits into their already overloaded day. Because it doesn&#8217;t create more friction, more questions, more need for explanations.</p><p>And here&#8217;s something you really need to understand: most AI products&#8212;at least today&#8212;are optimization tools. They&#8217;re not existential parts of the process. They&#8217;re not like the billing system, the CRM, the approval chain&#8212;systems that people have to use, whether they like it or not.</p><p>Those systems, even when poorly designed, don&#8217;t have to earn their place. They&#8217;re mandatory.</p><p>But your AI product? It&#8217;s optional.</p><p>You will have to earn every user.</p><p>So build with real-world friction in mind. Build for people who are tired, busy, skeptical. Build for the actual pace and pressure of operations&#8212;not for the clean conditions of a test environment or the assumptions in your design doc.</p><h3><strong>Know the Value, Show the Value</strong></h3><p>If you say your product creates efficiency, you&#8217;d better know how.</p><p>It&#8217;s not enough to throw the word into a slide or a stakeholder update. In enterprise environments, efficiency isn&#8217;t a concept&#8212;it&#8217;s a conversion. So ask yourself: what does it actually unlock? Does it allow revenue to come in earlier? Does it increase total revenue potential? Does it reduce costs in a measurable way? Or does it free up internal capacity that&#8217;s already overstretched?</p><p>Trace the impact, end to end. From model output to business outcome. From screen interaction to savings. 
From faster workflows to tangible decisions made in less time, with fewer errors.</p><p>Because if you can&#8217;t trace the value, don&#8217;t pretend it&#8217;s there.</p><p>And don&#8217;t expect others to believe in it either.</p><p>Now here&#8217;s something just as important: every product that creates value has a benefit owner. Someone inside the business who gains when your product works. A team that hits its KPIs more easily. A department that meets its targets with less stress. A leader who can finally report progress with confidence.</p><p>Find that person. Name them. Build with them&#8212;not for them. When it&#8217;s time to defend the product, you&#8217;ll want their voice in the room. Because when you speak, it&#8217;s seen as advocacy. When they speak, it&#8217;s seen as proof.</p><p>Value doesn&#8217;t speak for itself&#8212;not in companies this size.</p><p>You have to name it, translate it, and socialize it.</p><p>And if you do that well, your product won&#8217;t just survive.</p><p>It will spread.</p><h3><strong>Measure What Matters</strong></h3><p>Yes&#8212;you must measure it. Even if it&#8217;s inconvenient. Even if it&#8217;s messy. Even if the dashboards don&#8217;t exist yet and the systems don&#8217;t speak to each other. Measure anyway.</p><p>Because what you don&#8217;t measure, you can&#8217;t protect.</p><p>And what you don&#8217;t protect, you&#8217;ll eventually lose.</p><p>Measuring isn&#8217;t just a reporting task. It&#8217;s a declaration of value. It tells others that what you&#8217;re building deserves to be monitored, improved, and resourced. And it tells you&#8212;honestly&#8212;whether you&#8217;re solving a real problem or just building something interesting.</p><p>But I won&#8217;t tell you exactly how to measure. And I don&#8217;t think anyone should. 
Because the process of figuring it out&#8212;of chasing the signal through broken systems, of having uncomfortable conversations with finance, of getting alignment on what &#8220;impact&#8221; even means&#8212;will reveal more about your company than any training or template ever will.</p><p>You&#8217;ll learn how decisions are made. How money flows. How KPIs are chosen. What gets attention and what quietly dies.</p><p>Measurement is where product management becomes political. Strategic. Real. It&#8217;s where you stop being the person who &#8220;supports&#8221; the business and become the one who helps shape it.</p><p>And yes, it will be hard.</p><h3><strong>Understand the Power Structures</strong></h3><p>A brilliant idea can die in the wrong room. </p><p>Not because it wasn&#8217;t viable. Not because it didn&#8217;t create value.</p><p>But because the timing was off.</p><p>Because someone influential wasn&#8217;t informed early enough.</p><p>Because someone else needed to be the one to introduce it.</p><p>Because the political current wasn&#8217;t flowing in your direction that day.</p><p>This isn&#8217;t cynicism. This is structure.</p><p>In large organizations, decision-making isn&#8217;t always linear or logical. It&#8217;s relational. It&#8217;s layered. It&#8217;s shaped by past projects, previous battles, personal credibility, and invisible lines of influence that don&#8217;t show up in the org chart&#8212;but define what gets greenlit and what quietly dies.</p><p>Your job is to read the system. Learn who really owns what. Learn which departments set direction, and which ones follow. Understand who speaks in meetings, and who people look at before nodding.</p><p>And most of all, know when to push and when to pause.</p><p>Don&#8217;t waste energy trying to force alignment too early. Build it.</p><p>Don&#8217;t manipulate. 
Navigate.</p><p>Be the one who sees the landscape clearly, who anticipates the question before it&#8217;s asked, who gives others the space to come to the idea on their own&#8212;even if it was yours.</p><p>Influence is not earned through brilliance alone. It&#8217;s earned through clarity, patience, consistency&#8212;and knowing when to let others lead the conversation you started.</p><p>That&#8217;s how things move forward in the enterprise.</p><p>Not just because the idea is good. But because the conditions are ready.</p><h3><strong>Make Governance Your Ally</strong></h3><p>Legal, compliance, risk&#8212;these are not blockers. They&#8217;re not the people you &#8220;deal with at the end.&#8221; They&#8217;re part of the system. And if your product is meant to last, if it&#8217;s meant to scale, they need to be part of it from the beginning.</p><p>Too often, teams treat governance as a final checkpoint. A phase that starts after the product is &#8220;done.&#8221; But if you wait until then, you&#8217;re not asking for input&#8212;you&#8217;re asking for forgiveness. And in most enterprise environments, that&#8217;s not a risk you can afford to take.</p><p>So bring them in early. Treat them like partners, not gatekeepers. Ask how they think about risk. Ask what they&#8217;ve seen go wrong before. Ask what success looks like to them. Because governance is not just about preventing harm&#8212;it&#8217;s about building confidence. And confidence is what lets good products move faster.</p><p>Speak their language. If they care about audit trails, show them how you&#8217;re logging decisions. If they care about model explainability, invite them into how it&#8217;s being approached&#8212;not once it&#8217;s finished, but while it&#8217;s still being designed.</p><p>If you build that trust early, something powerful happens. Governance stops being a slowdown. It becomes a multiplier. 
It becomes the reason your product doesn&#8217;t just get approved&#8212;it gets championed.</p><p>Because in the enterprise, moving fast doesn&#8217;t mean skipping steps.</p><p>It means designing with every step in mind, from day one.</p><h3><strong>Your Words Will Build the Future</strong></h3><p>You&#8217;ll build more decks than prototypes. And at first, that might feel like a failure. Like you&#8217;re not doing enough of the &#8220;real work.&#8221; But it&#8217;s not a failure. It&#8217;s the job.</p><p>Because inside an enterprise, storytelling is not a soft skill. It&#8217;s a product skill.</p><p>It&#8217;s how you earn buy-in from people who&#8217;ve seen a hundred ideas come and go. It&#8217;s how you help someone far removed from the problem understand why it matters. It&#8217;s how you unlock budget, secure resources, align teams, and get one more shot to keep going.</p><p>And when it&#8217;s done well, your words become infrastructure. They shape how people talk about the problem when you&#8217;re not in the room. They turn ambiguity into something actionable. They turn scattered opinions into shared understanding.</p><p>So take your words seriously. Don&#8217;t just document&#8212;design your communication like you design your product.</p><p>Write like a business analyst. Be specific. Structured. Clear. Anticipate the questions.</p><p>Talk like a founder. With belief. With conviction. With the quiet urgency of someone who knows what this could become.</p><p>Be clear in the doc.</p><p>Be convincing in the room.</p><p>Because in the enterprise, your slides might travel further than your code.</p><p>And your words&#8212;if you use them well&#8212;will open doors your prototype never could.</p><h3><strong>You Won&#8217;t Like Everyone&#8212;And That&#8217;s Fine</strong></h3><p>Some of the most important people you&#8217;ll need to work with will frustrate you. They might be overly political. They might move too fast or too slow. 
They might operate from ego, from fear, from habit. They might not listen the way you want them to. And truthfully, you won&#8217;t like how they work.</p><p>And still&#8212;you will need them.</p><p>Because they&#8217;ll have access. To decision-makers. To funding. To influence that you, at this point, simply don&#8217;t have. They might be the only ones who can get your product in front of the board. The only ones who know the quiet context behind a &#8220;no&#8221; that no one explained to you. The only ones who can open the door when you&#8217;ve already knocked three times.</p><p>So do yourself a favor: don&#8217;t waste energy trying to change them. Don&#8217;t turn your frustration into resistance.</p><p>Put your ego aside.</p><p>This isn&#8217;t about admiration. It&#8217;s about progress.</p><p>You don&#8217;t have to like everyone.</p><p>They don&#8217;t all have to like you.</p><p>But you do need to find a way to work with them.</p><p>You can draw a boundary without burning a bridge. You can collaborate without compromising your values. And you can be respected, even by people who&#8217;ll never fully understand what you do&#8212;if you show up consistently, deliver what you promise, and make it easy for them to move forward with you.</p><p>That&#8217;s not selling out.</p><p>That&#8217;s strategy.</p><p>And sometimes, that&#8217;s what makes the difference between your product staying stuck and actually making it out into the world.</p><h3><strong>Start With Their Story, Not Your Tech</strong></h3><p>When you sit down with someone about a potential AI project, resist the urge to pitch. Don&#8217;t start with architecture. Don&#8217;t show slides. Don&#8217;t talk about accuracy or use cases from other departments.</p><p>Start with their story.</p><p>Ask about their world. How their day actually starts. What they check first. What slows them down. What drains their team. What decisions take longer than they should. What keeps breaking. 
What&#8217;s been broken for so long they&#8217;ve stopped bringing it up.</p><p>And then&#8212;just listen.</p><p>Because if you listen long enough, you&#8217;ll hear something no system or dataset can tell you. You&#8217;ll hear context. You&#8217;ll hear history. You&#8217;ll hear how this team works around the tools they were given, and how they&#8217;d work with something if it finally understood them.</p><p>And when someone feels heard&#8212;genuinely heard&#8212;they open up. They stop performing. They stop pitching back. They start trusting you.</p><p>That&#8217;s when you can introduce AI.</p><p>Not as the shiny solution.</p><p>Not as a disruptive force.</p><p>But as a quiet helper. A tool to make the pain smaller. A way to make the work smoother.</p><p>You&#8217;re not selling intelligence.</p><p>You&#8217;re offering relief.</p><p>And if you start with their story, the tech will follow&#8212;because now, it has somewhere real to land.</p><h3><strong>Slides Are Your Craft</strong></h3><p>I already told you that you&#8217;ll build more slides than prototypes. Now let me tell you why that&#8217;s not a limitation&#8212;it&#8217;s a power tool.</p><p>Slides are your programming language.</p><p>What code is to a developer, slides are to you.</p><p>They&#8217;re how you shape the narrative. How you translate complexity into clarity. How you make the invisible work visible&#8212;so people can rally behind it.</p><p>But to use them well, you need to treat them like craft. You need to learn to build for different audiences. What makes a data team nod with interest might completely miss the mark with leadership. What excites your product squad may raise red flags for legal or compliance. One message doesn&#8217;t fit all.</p><p>So learn the mechanics. Learn to use color intentionally&#8212;not to decorate, but to direct attention. Learn the basics of visual hierarchy. Learn what it means when something feels too dense, too fast, too unstructured. 
Understand how whitespace creates rhythm. How font size can shape priority. How structure creates trust.</p><p>Your slides aren&#8217;t just supporting materials.</p><p>They are thinking tools. Alignment tools. Influence tools.</p><p>They are how you open a conversation with someone new.</p><p>They are how you guide a room that&#8217;s halfway bought in.</p><p>They are how you follow up when you&#8217;re not in the room to speak for yourself.</p><p>You&#8217;re not building to impress.</p><p>You&#8217;re building to align.</p><p>So don&#8217;t outsource this.</p><p>Own it.</p><p>Get better at it.</p><p>And treat your slides with the same intentionality you give to your product.</p><p>Because sometimes, the right slide will do what no prototype ever could:</p><p>It will get people to believe.</p><h3><strong>Give Credit Generously</strong></h3><p>You&#8217;ll work with people who care deeply about being seen&#8212;sometimes more deeply than you&#8217;ll ever know. Even if they never say it. Even if they act like it doesn&#8217;t matter.</p><p>Don&#8217;t underestimate how far a thank-you can go. A name mentioned in a meeting. A quiet message to their manager. A sentence in a deck that says, &#8220;This happened because of their work.&#8221;</p><p>It&#8217;s easy to focus only on outcomes. On roadmaps, metrics, and prototypes. But behind every milestone, there are people solving things in the background&#8212;removing blockers, catching edge cases, smoothing tensions you never even saw.</p><p>Learn to give credit where it belongs. Not just because it&#8217;s the polite thing to do. But because it builds trust. It tells your team: &#8220;I see you. 
I know what you made possible.&#8221;</p><p>And in a world where uncertainty is high and visibility is low, that acknowledgment matters.</p><p>It creates safety.</p><p>It creates loyalty.</p><p>It creates momentum that doesn&#8217;t show up in KPIs, but absolutely shows up in what you can build together.</p><p>Even if you don&#8217;t care much about being recognized, remember this: for someone else, it might mean everything.</p><p>Your words could be the reason someone believes they belong here.</p><p>And that kind of belief is what keeps great people from walking away.</p><h3><strong>Accept Credit With Clarity</strong></h3><p>And while you give credit to others&#8212;freely, frequently, and with care&#8212;don&#8217;t forget to accept it for yourself. Not with ego. With clarity.</p><p>Because in enterprise settings, product management is often misunderstood. It&#8217;s not like in startups or product-native companies, where everyone knows what it means to define a roadmap, validate a need, or align tech and business.</p><p>Here, much of your work happens in the background. Quietly. In pre-reads, in workshops, in Slack threads, in unglamorous decisions that prevent future chaos.</p><p>So when your product succeeds, the impact won&#8217;t always trace back to you. People might thank the data science team. The sponsor. The delivery lead. And they wouldn&#8217;t be wrong&#8212;but they wouldn&#8217;t be seeing the full picture either.</p><p>Unless you help them see it.</p><p>Be clear about your contribution. Not as a claim to ownership, but as a thread in the story. Say, &#8220;Here&#8217;s how I helped move this forward.&#8221; Say, &#8220;This was where product decisions made the difference.&#8221;</p><p>Make your role visible. Especially across departments. 
Especially when the org doesn&#8217;t quite know what product management actually means in an AI context.</p><p>This isn&#8217;t self-promotion.</p><p>It&#8217;s context.</p><p>It ensures that your role is valued, resourced, and included when it matters most&#8212;next time.</p><p>So yes&#8212;give credit to others. And also give it to yourself.</p><p>Because this is how you make sure you&#8212;and your work&#8212;don&#8217;t stay invisible.</p><h3><strong>Find the Right Spotlights</strong></h3><p>Don&#8217;t wait for the perfect moment to show your product. Don&#8217;t wait until the first use case is finished, polished, and ready for a showcase. If you wait for perfect, you&#8217;ll miss momentum.</p><p>Start earlier.</p><p>Build visibility into your workflow&#8212;not just at the end, but all the way through. Share what&#8217;s in motion, what you&#8217;re learning, what questions you&#8217;re wrestling with. Let people follow the journey, not just the result. Because when people see something grow, they&#8217;re far more likely to care when it arrives.</p><p>And make sure you&#8217;re showing up in the right rooms. Find the spotlights that match your moment:</p><p>Brown bags. Cross-functional communities. Weekly leadership calls. Internal demos. Product forums. Use them to build narrative capital&#8212;quietly, steadily.</p><p>Don&#8217;t limit yourself to your immediate team. Speak to adjacent departments. Drop your name into spaces where curiosity outweighs urgency. Connect with people who may not need your product today but might influence its path tomorrow.</p><p>Because this is how internal awareness grows.</p><p>This is how you create optionality.</p><p>This is how scale begins&#8212;long before it&#8217;s formally requested.</p><p>You&#8217;re not marketing to the outside world. 
But make no mistake: inside an enterprise, you still need to market.</p><p>Because no one supports what they don&#8217;t know exists.</p><p>And no one funds what they haven&#8217;t seen for themselves.</p><h3><strong>When the World Isn&#8217;t Ready&#8212;But You Are</strong></h3><p>I&#8217;ve told you a lot, Little Jaser. Maybe more than you wanted to read. Maybe not yet as much as I wanted to say. But here&#8217;s what I know for sure: the role you&#8217;re stepping into still isn&#8217;t widely understood. Not by every team. Not by every decision-maker. Not even by every product leader.</p><p>You&#8217;ll walk into rooms where people won&#8217;t know exactly what you do&#8212;or why it matters&#8212;until they&#8217;ve seen it play out. And even then, they might not be able to name it.</p><p>That&#8217;s the reality.</p><p>But it&#8217;s not the limit.</p><p>Because this job&#8212;this quiet, persistent, often invisible work of making AI real inside complex organizations&#8212;isn&#8217;t about being recognized. It&#8217;s about recognizing what&#8217;s needed before others do. It&#8217;s about holding the shape of a solution long enough for others to step into it. It&#8217;s about building trust before traction, belief before buy-in, and alignment before action.</p><p>And that&#8217;s something I know you&#8217;re built for.</p><p>Because the truth is, I trust you. Deeply.</p><p>You&#8217;ll figure it out&#8212;just like I did.</p><p>You&#8217;ll find your own rhythm. Your own way of moving between teams, shaping conversations, navigating the fog, and turning it into something real. Something useful. 
Something that works.</p><p>After all, it&#8217;s you writing to you.</p><p>Only with 25 more years of doing the work.</p><p>Of asking the second question.</p><p>Of learning when to wait, and when to act.</p><p>Of making mistakes that felt big at the time&#8212;and learning, with time, that they were just part of discovering what really matters.</p><p>You already carry what you need.</p><p>Even if you can&#8217;t name it yet.</p><p>Even if the world isn&#8217;t ready to recognize it just yet.</p><p>So go use it.</p><p>Go build the things that won&#8217;t exist unless you do.</p><p>Go be the one who sees what&#8217;s possible, even when the systems around you are still operating like it&#8217;s not.</p><p>And when it feels like no one sees what you&#8217;re really building&#8212;read this again.</p><p>And remember: I see you.</p><p>Elder Jaser</p><p>JBK &#128330;&#65039;</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#40 - Why AI Evaluations Have Never Been Optional for AI Product Managers]]></title><description><![CDATA[From traditional ML to GenAI: how evaluation became the frontline of AI product success.]]></description><link>https://www.jaserbk.com/p/why-ai-evaluations-have-never-been</link><guid isPermaLink="false">https://www.jaserbk.com/p/why-ai-evaluations-have-never-been</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 27 Apr 2025 11:18:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!DsaM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p>In AI products, it&#8217;s dangerously easy to pass every technical test &#8212; and still fail the user.</p></div><p><strong>In this article:</strong></p><ul><li><p>Why evaluations are the hidden foundation of great AI products</p></li><li><p>How GenAI evaluations differ radically from traditional ML</p></li><li><p>A real-world example of evaluating a GenAI system</p></li><li><p>Practical mistakes to avoid &#8212; and how to do it right</p></li></ul><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!DsaM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DsaM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!DsaM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!DsaM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!DsaM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DsaM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png" width="1200" height="1200" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2049349,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/162249308?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DsaM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!DsaM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!DsaM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!DsaM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a89e67a-af12-4dfd-8153-34b2cf8be03f_1200x1200.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>#beyondAI</strong> - If you don&#8217;t deeply understand how to evaluate AI behavior, you&#8217;re not managing a product. You&#8217;re gambling. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p>When we build AI products, one mistake can easily slip into the foundation without us even noticing: assuming that if the model works technically, the product will work for users.</p><p>That&#8217;s not true.</p><p>And this is exactly why AI evaluations (AI evals) are so critical &#8212; especially for AI Product Managers.</p><p>Evals aren&#8217;t just a technical health check. They are a way to make sure that the AI system performs in a way that serves the user, under the real conditions and expectations the product must fulfill. If we don&#8217;t understand evaluations deeply &#8212; and I mean beyond just &#8220;accuracy&#8221; or &#8220;precision&#8221; percentages &#8212; we risk building AI solutions that pass technical tests but fail spectacularly when they meet the real world.</p><blockquote><p>I&#8217;ll be honest: I&#8217;m still learning.</p><p>And to stay honest &#8212; this is the first time I&#8217;m going this deep into the new world of AI evaluations.</p><p>Other things had priority.</p><p>(A real PM is speaking here&#8230; we always have to pick our battles.)</p></blockquote><p>AI evaluations &#8212; especially for GenAI and LLMs &#8212; are a rapidly evolving field, and every project teaches me something new.</p><p>In this piece, I&#8217;m sharing what I&#8217;ve learned so far, hoping it helps others who are navigating the same shift. </p><p></p><p>The evaluation landscape changes depending on the type of AI system we&#8217;re dealing with. 
A traditional machine learning model (say, a churn prediction algorithm) is evaluated very differently from a GenAI model like a chatbot powered by a Large Language Model (LLM).</p><p>As AI Product Managers, we have to know the difference &#8212; and we have to know how to lead teams to design, interpret, and act on evaluation results that actually matter for the product.</p><p>Let&#8217;s get into it.</p><p></p><h3><strong>AI Evals and the Core Mission of Any Product Manager: Creating Value</strong></h3><p>At the end of the day, Product Management is about one thing: creating value for the business by solving problems for users.</p><p>Whether you&#8217;re building a mobile app, an internal tool, or an AI system, this doesn&#8217;t change.</p><p>The twist with AI products &#8212; and especially GenAI products &#8212; is that you can&#8217;t separate the technical behavior of the system from the user experience it creates.</p><p>If the AI model behaves poorly, feels unreliable, or simply doesn&#8217;t align with user expectations, it directly kills the value you&#8217;re trying to generate.</p><p>In traditional software, this separation is possible.</p><p>A button might technically work &#8212; it sends a request to the backend, triggers the right workflow, and returns the expected result &#8212; even if the visual design or wording isn&#8217;t perfect.</p><p>In other words: technical functionality and user experience are distinct layers.</p><p>You can fix usability later without needing to change how the system itself computes or behaves internally.</p><p>But in AI, especially in GenAI, the system&#8217;s behavior is the user experience.</p><p>There&#8217;s no &#8220;under the hood&#8221; you can separate cleanly.</p><p>When an AI writes an email reply, generates a product description, or answers a customer question, the output is the product.</p><p>There&#8217;s no layer between the user and the system&#8217;s core behavior &#8212; no abstraction shield.</p><p>And 
this is why evaluations in AI Product Management are not optional or secondary.</p><p>They are essential to ensuring that the product fulfills its real-world purpose &#8212; and that the business actually captures the value it hopes to create.</p><h3><strong>Traditional AI Evaluations: Clear Metrics, Narrow Scenarios</strong></h3><p>In traditional machine learning, evaluations are mostly built around structured outputs.</p><p>You might predict a binary outcome (&#8220;Will this customer churn?&#8221;) or a continuous value (&#8220;What&#8217;s the estimated price of this house?&#8221;).</p><p>To evaluate such models, the industry has relied on metrics like:</p><ul><li><p>Accuracy</p></li><li><p>Precision and Recall</p></li><li><p>F1 Score</p></li><li><p>ROC-AUC</p></li><li><p>Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE)</p></li></ul><p>The key thing here is: the evaluation criteria are objective and quantifiable.</p><p>You know the ground truth (whether a customer actually churned or not).</p><p>You know the prediction.</p><p>You can run the math and get a score.</p><p>The role of the AI PM in this setup is relatively straightforward:</p><ul><li><p>Define which metric matters for the business use case (e.g., precision over recall if false positives are very costly).</p></li><li><p>Align the team on the target thresholds that mean &#8220;good enough&#8221; to ship.</p></li><li><p>Monitor model drift or degradation over time.</p></li></ul><p>In short: in traditional ML, evaluations are clean, comparable, and repeatable.</p><p>But then came GenAI &#8212; and the game changed.</p><h3><strong>GenAI and LLM Evaluations: Messy Outputs, Moving Targets</strong></h3><p>When you move into GenAI, especially with LLMs, the nature of the output changes completely.</p><p>Instead of predicting a simple label, the AI now generates free-form text, images, even code.</p><p>The output space is basically infinite.</p><p>And because there is often no single &#8220;ground 
truth&#8221; answer, traditional metrics break down.</p><p>You can&#8217;t easily calculate &#8220;accuracy&#8221; on a chatbot that gives slightly different but equally acceptable responses to the same question.</p><p>You can&#8217;t just look at a number and say &#8220;this model is ready.&#8221;</p><p>Evaluating GenAI models involves concepts like:</p><ul><li><p>Human-likeness</p></li><li><p>Factual correctness</p></li><li><p>Relevance to the query</p></li><li><p>Completeness of answer</p></li><li><p>Bias, toxicity, and harmful content detection</p></li><li><p>Style and tone alignment</p></li></ul><p>And to make it even more complicated: human judgment is often needed.</p><p>Human evaluators have to assess if an LLM&#8217;s response was helpful, respectful, in line with brand voice, or free of hallucinations.</p><p>In practice, evaluation setups for GenAI now involve a mix of:</p><ul><li><p>Prompt-based testing (feeding in test prompts and evaluating outputs)</p></li><li><p>Rubrics for human raters (scoring outputs against subjective quality criteria)</p></li><li><p>Automated evals using smaller models (&#8220;critique models&#8221;) trained to assess the main model</p></li><li><p>Red teaming (actively trying to break the model by feeding adversarial prompts)</p></li></ul><h3><strong>What AI Evaluation Looks Like in Practice</strong></h3><p>Let&#8217;s take a practical example:</p><p>Imagine you are shipping an internal GenAI tool for customer service agents. The AI suggests draft replies to customer emails. 
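</p><p>Human reviewers typically score each draft against a rubric, and the aggregation behind that scoring is simple enough to sketch. Here is a minimal Python example; the dimension names, the 1&#8211;5 scale, and the choice of treating &#8220;4 or higher&#8221; as acceptable are all assumptions for illustration, not a standard:</p>

```python
# Minimal sketch: aggregate human rubric scores for AI-drafted replies.
# All data and thresholds here are hypothetical; real scores would come
# from reviewer sessions (e.g. a labeling-tool or spreadsheet export).

DIMENSIONS = ["relevance", "tone", "accuracy", "actionability"]
PASS_THRESHOLD = 4  # scores run 1-5; 4 or above is treated as acceptable

def pass_rates(scored_replies):
    """Return the share of replies scoring >= PASS_THRESHOLD, per dimension."""
    rates = {}
    for dim in DIMENSIONS:
        scores = [reply[dim] for reply in scored_replies]
        rates[dim] = sum(s >= PASS_THRESHOLD for s in scores) / len(scores)
    return rates

# Two hypothetical scored replies (per-reviewer averages, rounded).
sample = [
    {"relevance": 5, "tone": 4, "accuracy": 3, "actionability": 2},
    {"relevance": 4, "tone": 3, "accuracy": 5, "actionability": 4},
]
print(pass_rates(sample))
# {'relevance': 1.0, 'tone': 0.5, 'accuracy': 0.5, 'actionability': 0.5}
```

<p>Per-dimension pass rates like these turn vague impressions into concrete signals: they show where the model is failing and where fine-tuning, guardrails, or post-processing might be needed.</p><p>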
An evaluation setup might look like this:</p><ul><li><p>You define 100 typical customer emails covering different topics: billing issues, product complaints, upgrade requests, cancellations, technical support.</p></li><li><p>You feed these emails into the GenAI system and collect the draft replies.</p></li><li><p>Human reviewers &#8212; preferably real customer service agents &#8212; score the AI replies on dimensions like:</p><ul><li><p>Relevance: Does the reply actually address the customer&#8217;s issue?</p></li><li><p>Tone: Is it polite, professional, and empathetic?</p></li><li><p>Accuracy: Are factual statements (e.g., refund timelines, account policies) correct?</p></li><li><p>Actionability: Is the reply clear on the next steps for the customer?</p></li></ul></li></ul><p>Each of these dimensions might be scored on a scale from 1 to 5.</p><p>You might find, for example:</p><ul><li><p>85% of replies are relevant (good)</p></li><li><p>78% are in the right tone (okay)</p></li><li><p>65% are factually correct (problem)</p></li><li><p>60% are actionable (problem)</p></li></ul><p>As an AI PM, you now have concrete signals.</p><p>You don&#8217;t just &#8220;hope&#8221; the model is good &#8212; you know where it&#8217;s failing for the business and where fine-tuning, guardrails, or post-processing might be needed.</p><p>That&#8217;s the real life of AI evaluation: structured, messy, human-in-the-loop, and critically important.</p><h3><strong>How AI Evals Relate to UX: The Hidden Parallel</strong></h3><p>The more you work with GenAI evaluations, the more you realize: AI evals have more in common with user experience (UX) testing than traditional software testing.</p><p>When we build normal software (no AI involved), we know that:</p><ul><li><p>The code compiles or it doesn&#8217;t.</p></li><li><p>The feature works or it doesn&#8217;t.</p></li><li><p>The button clicks through or it doesn&#8217;t.</p></li></ul><p>But even if everything works technically, the user might 
still hate the experience.</p><p>Maybe the button is hidden.</p><p>Maybe the flow is confusing.</p><p>Maybe the error message feels rude.</p><p>The only way to find this out?</p><p>UX testing.</p><p>You need to observe how users interact with the product in real conditions. You need feedback that&#8217;s not about whether something works, but how well it fits into the user&#8217;s life.</p><p>With GenAI, it&#8217;s the same.</p><p>A model might respond to a prompt. Technically, it &#8220;works.&#8221;</p><p>But is it clear?</p><p>Is it respectful?</p><p>Is it helpful?</p><p>Is it concise enough, or too verbose?</p><p>Evaluations for GenAI are essentially a form of UX testing for AI behavior.</p><h3><strong>What This Means for AI Product Managers</strong></h3><p>As AI PMs, our role is to bring evaluation to the center of product thinking, not treat it like a final hurdle before release.</p><p>Here&#8217;s what this practically looks like:</p><ul><li><p>Define evaluation goals early.<br>Before any data science starts, define what &#8220;good&#8221; looks like for the user &#8212; not just technical performance, but experience quality.</p></li><li><p>Mix automated and human evaluations.<br>Understand that GenAI models need both structured evals (where possible) and subjective assessments.</p></li><li><p>Create meaningful prompt sets.<br>Work with your team to design realistic, diverse prompts that represent the full range of user behavior &#8212; not just the happy paths.</p></li><li><p>Continuously test and monitor.<br>GenAI models can drift not only in technical performance but also in tone, helpfulness, and safety.<br>Ongoing evaluations are not optional.</p></li><li><p>Translate eval results into product decisions.<br>Don&#8217;t just hand over evaluation reports to data scientists. 
Interpret them in light of business goals and user experience expectations.</p></li></ul><h3><strong>Common Mistakes in AI Evaluations</strong></h3><p>Even experienced teams can stumble when it comes to evaluating AI products.</p><p>Some of the most common mistakes I&#8217;ve seen:</p><ul><li><p>Focusing only on technical accuracy:<br>High accuracy scores don&#8217;t mean users will trust, enjoy, or even accept the AI&#8217;s behavior.</p></li><li><p>Testing only the &#8220;happy paths&#8221;:<br>Evaluation sets often miss real-world edge cases, sarcasm, ambiguous queries, or hostile prompts.</p></li><li><p>Using unrealistic test data:<br>Clean, idealized prompts make the model look good. Real users don&#8217;t write like a textbook.</p></li><li><p>Skipping human evaluation steps:<br>Relying only on automated scores might be faster, but it often misses subtle but critical issues like tone, clarity, or user perception.</p></li></ul><p>Avoiding these mistakes isn&#8217;t just about better testing &#8212; it&#8217;s about delivering an AI experience that feels trustworthy and valuable to real users.</p><h3><strong>First Steps Checklist for AI PMs</strong></h3><p>When you&#8217;re building your evaluation plan, keep it simple to start:</p><ul><li><p>Define your key evaluation dimensions:<br>What does success look like to users &#8212; relevance, clarity, helpfulness, safety?</p></li><li><p>Design realistic prompt sets:<br>Use messy, real-world examples, not sanitized ones.</p></li><li><p>Mix human and automated evaluations:<br>Don&#8217;t rely on metrics alone &#8212; integrate human review cycles.</p></li><li><p>Evaluate continuously, not just at launch:<br>Plan for monitoring model performance after deployment.</p></li></ul><h3><strong>Final Note</strong></h3><p>The more I work with AI products, the clearer it becomes: We&#8217;re not just building features. We&#8217;re shaping behaviors, expectations, and trust. Evaluation isn&#8217;t a side task. 
It&#8217;s how we stay honest with ourselves &#8212; and with the people we build for. It&#8217;s how we check if what we&#8217;re creating is truly helping &#8212; or just adding noise to an already noisy world.</p><p>And evaluation is how we stay close enough to see that truth &#8212; and strong enough to act when what we see isn&#8217;t good enough yet.</p><p>That&#8217;s the kind of AI Product Management I believe in.</p><p>JBK &#128330;&#65039;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#39 - Most AI Teams Ship Confidently Into the Void]]></title><description><![CDATA[Prototyping as Discovery: Treating Problem Understanding Like an Asset]]></description><link>https://www.jaserbk.com/p/most-ai-teams-ship-confidently-into</link><guid isPermaLink="false">https://www.jaserbk.com/p/most-ai-teams-ship-confidently-into</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sat, 19 Apr 2025 08:07:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UEKn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>#beyondAI</p><p>There&#8217;s a quiet assumption in most tech teams that feels so natural we rarely stop to question it:</p><p><em>Code is considered an asset. Problem understanding is not.</em></p><p>That single mental model &#8212; largely unspoken &#8212; silently shapes the way AI products are built. 
It influences what gets prioritized, who gets funded, how progress is measured, and what success looks like.</p><p>And more often than not, it&#8217;s also the reason why AI products miss the mark &#8212; not because the code was poorly written, or the model undertrained, but because we built something scalable and sophisticated&#8230; that nobody actually needed.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UEKn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UEKn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!UEKn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!UEKn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!UEKn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UEKn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png" width="1200" height="1200" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aec84e88-2012-4951-99e1-6721b773958c_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:323658,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jaserbk.com/i/161653310?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UEKn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!UEKn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!UEKn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!UEKn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec84e88-2012-4951-99e1-6721b773958c_1200x1200.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h3>When Code Becomes the Hero &#8212; and Discovery Gets Forgotten</h3><p>In most organizations, especially those driven by delivery milestones, code is treated as proof of progress. It&#8217;s visible. It&#8217;s documented. It&#8217;s reusable. It shows up in sprint reviews, gets archived in Git, and lives on in the roadmap. People get excited about it because it feels like something you can touch, something you can show &#8212; an asset that endures.</p><p>This perception is deeply ingrained in how product and engineering teams operate. Teams celebrate pull requests and production pushes. Roadmaps are mapped in epics and features. Burn-down charts show velocity, and demos show outcomes. We build. We deliver. We optimize.</p><p>But if we zoom out for a second, we have to ask:</p><p><em>Build what? Deliver why? 
Optimize toward what pain?</em></p><p>That&#8217;s where things start to unravel &#8212; and where discovery enters the picture.</p><h3>The Invisible Work of Problem Understanding</h3><p>Discovery is where everything starts.</p><p>But in most companies, it&#8217;s not treated like part of the real work.</p><p>It&#8217;s often a prelude, something done at the beginning of a project and then left behind. A few user interviews. A canvas workshop. Some sticky notes in Miro. Maybe even a thorough slide deck with pain points and opportunity spaces.</p><p>And then? It vanishes.</p><p>Even when discovery is done well, it rarely gets maintained, versioned, or reused. It doesn&#8217;t live inside the product backlog or inform quarterly OKRs. It&#8217;s not tracked like code, logged like bugs, or celebrated like a successful deployment.</p><p>That&#8217;s the real risk.</p><p>If discovery isn&#8217;t seen as an asset &#8212;</p><p>it doesn&#8217;t get time.</p><p>It doesn&#8217;t get attention.</p><p>It doesn&#8217;t get the company&#8217;s best minds.</p><p>And in AI Product Management, that&#8217;s not just unfortunate &#8212; it&#8217;s dangerous.</p><h3>The Supposed Forgiveness of Software Development</h3><p>People like to say that in classic software development, you can get away with unclear discovery.</p><p>You ship a feature, see what happens, tweak it, and eventually land on something usable.</p><p>The cycle is iterative. The stakes are lower. It&#8217;s all very forgiving.</p><p>It&#8217;s not forgiving. 
It&#8217;s just familiar failure.</p><p>We&#8217;ve normalized teams shipping into the dark and hoping that agile rituals will save them later.</p><p>And most of the time, they don&#8217;t.</p><p>Unclear discovery in software leads to the same thing it does in AI:</p><p>Wasted resources, lost time, and user problems that remain unsolved.</p><p>The difference?</p><p>AI is more expensive.</p><ul><li><p>The cost of experimentation is higher.</p></li><li><p>The time to validate is longer.</p></li><li><p>And the risk of eroding trust &#8212; through wrong answers, hallucinations, or unfair behavior &#8212; is much harder to recover from.</p></li></ul><p>So no, it&#8217;s not that AI is more fragile than regular software.</p><p>It&#8217;s just that the price of misunderstanding the problem is paid up front &#8212; and in full.</p><p>AI is also more data-dependent and less predictable.</p><p>You can&#8217;t always iterate your way out of a bad starting point.</p><p>You can&#8217;t just refactor your prompts and magically land in product-market fit.</p><p>And you definitely can&#8217;t backtest your way into solving a real human problem if you didn&#8217;t deeply understand the problem in the first place.</p><p>This is why so many AI teams ship confidently into the void.</p><p>They&#8217;re moving fast. They&#8217;re technically capable.</p><p>But their code is built on sand &#8212; assumptions that were never validated, pain points that were never fully understood, users that were never truly involved.</p><p>The result?</p><p>A beautiful solution to the wrong problem.</p><h3>Why the Code Survives &#8212; Even When the Product Fails</h3><p>The irony in most failed AI products is that the code survives.</p><p>It lives on in version control. It gets reused in other experiments. 
It becomes a library, a model, a reference.</p><p>But the discovery &#8212; the messy, human understanding that should&#8217;ve guided the build &#8212; is lost.</p><p>And that&#8217;s why I started asking myself:</p><blockquote><p>What if we could treat discovery like we treat code?</p><p>What if problem understanding were also seen as an asset &#8212; not just a phase?</p></blockquote><p>I didn&#8217;t want to fight the delivery mindset.</p><p>I wanted to work with it.</p><p>And that&#8217;s how I landed on a shift in thinking I now call:</p><p><strong>Prototyping as Discovery</strong></p><p>It&#8217;s not a new idea, but it isn&#8217;t yet widespread in the AI dev space.</p><p>Most people think of prototyping as a way to test a technical solution, not a user solution.</p><p>But what if the goal wasn&#8217;t to test your build &#8212; but to explore your understanding?</p><p><strong>Prototyping as Discovery is a mindset shift.</strong></p><p>It means building not to ship, but to learn. And yes &#8212; it goes by many names.</p><p>It&#8217;s about treating early product increments as strategic probes &#8212; ways to uncover real user behavior, real constraints, real patterns in data and usage &#8212; not just to validate assumptions, but to uncover the ones we didn&#8217;t know we had.</p><p>It&#8217;s a way of embedding discovery inside delivery.</p><p>Not as a box to tick before dev starts, but as an ongoing process that grows alongside the code.</p><p>You discover while you build.</p><p>You build while you discover.</p><p>And both outcomes &#8212; the code and the insight &#8212; become valuable assets.</p><h3>What It Looks Like in Practice</h3><p>You don&#8217;t need to restructure your team or get buy-in for a whole new methodology. 
You just need to start treating early cycles as insight engines.</p><p>Here&#8217;s one way I&#8217;ve framed it:</p><ul><li><p>2 weeks of focused discovery: interviews, workshops, pain point mapping, data landscape review</p></li><li><p>6 weeks of dev: a small prototype that targets the most promising problem with the lowest fidelity possible</p></li><li><p>2 weeks of follow-up discovery: observe what happened, run validation sessions, collect behavioral data</p></li><li><p>6 more weeks of dev: build on the insights and iterate toward a real solution</p></li></ul><p>Each phase feeds the other.</p><p>Each step produces both code and context.</p><p>The understanding is just as valuable as the functionality.</p><p>Over time, this approach turns your discovery from a one-time effort into a continuously compounding asset.</p><p>This rhythm is widely known as Continuous Discovery, a term coined by Teresa Torres.</p><h3>Making Discovery Look Like an Asset</h3><p>In many organizations, the challenge isn&#8217;t that people don&#8217;t believe in discovery &#8212; it&#8217;s that they don&#8217;t see it.</p><p>They don&#8217;t see it in dashboards.</p><p>They don&#8217;t see it in team reviews.</p><p>They don&#8217;t see it in the OKRs or the sprint metrics.</p><p>So part of the work is making discovery visible.</p><p>That might mean:</p><ul><li><p>Keeping a living repository of insights and opportunity spaces</p></li><li><p>Capturing key learnings from prototypes as artifacts, not just lessons</p></li><li><p>Measuring insight velocity alongside code velocity</p></li><li><p>Including discovery-driven pivots in stakeholder updates</p></li></ul><p>The more you surface these outcomes, the more credibility discovery builds &#8212; not as an idea, but as an investment.</p><p>Something worth time. Worth talent. Worth treating like code. 
Shareable throughout the entire company.</p><h3>Why This Matters Now &#8212; Especially for AI</h3><p>As AI continues to evolve, the space between technical capability and real-world impact is widening.</p><p>We have tools that can generate, classify, summarize, translate &#8212; almost instantly.</p><p>But what we often don&#8217;t have is clarity on what problems are truly worth solving.</p><p>When the tech gets easier to build, the temptation to skip discovery grows.</p><p>When everyone&#8217;s focused on prompts and models, understanding becomes the forgotten frontier.</p><p>That&#8217;s why this shift &#8212; from discovery as a phase to discovery as a product companion &#8212; is so critical.</p><p>Because in AI, it&#8217;s not enough to have working code.</p><h3>We Need Working Insight</h3><p>AI products don&#8217;t live or die by model performance alone.</p><p>They succeed when they solve something real.</p><p>They scale when they&#8217;re trusted.</p><p>They endure when they&#8217;re rooted in real understanding of a problem space, a user behavior, or a business gap.</p><p>And that understanding doesn&#8217;t happen by accident.</p><p>So maybe the real question isn&#8217;t:</p><blockquote><p>&#8220;How do we get people to care about discovery?&#8221;</p></blockquote><p>Maybe it&#8217;s:</p><blockquote><p>&#8220;How do we make discovery look like an asset &#8212;</p><p>and deliver like one?&#8221;</p></blockquote><p>That&#8217;s the shift I believe we need.</p><p>Not more discovery slides. 
Not more workshops.</p><p>But more problem understanding &#8212; embedded in the way we build.</p><p>And more building &#8212; designed to surface real insight.</p><p>So the next time someone asks you to ship quickly,</p><p>ask them what you&#8217;re shipping toward.</p><p>And if the answer is unclear?</p><p>Build something small.</p><p>Pointed.</p><p>Probing.</p><p>And let the discovery begin.</p><p>JBK &#128330;&#65039;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#38 - Assume They Are Wrong]]></title><description><![CDATA[A Necessary Mindset AI Product Managers Need to Find the Right Problems for AI]]></description><link>https://www.jaserbk.com/p/assume-they-are-wrong</link><guid isPermaLink="false">https://www.jaserbk.com/p/assume-they-are-wrong</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 15 Dec 2024 12:01:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Hjmq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#beyondAI - You&#8217;re sitting in a meeting with a 
department lead who enthusiastically says, &#8220;<em>We need AI to fix our reporting delays.</em>&#8221; Or maybe someone approaches you, saying, &#8220;<em>We&#8217;re struggling to meet our delivery timelines, but we&#8217;re not sure if AI can help.</em>&#8221; Then there are those times when no one approaches you at all, and you&#8217;re left uncovering hidden pain points in processes no one is actively questioning. <strong>Welcome to the world of discovering AI use cases in an enterprise context.</strong></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Hjmq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Hjmq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!Hjmq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!Hjmq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!Hjmq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Hjmq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png" width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1406063,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Hjmq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!Hjmq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!Hjmq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!Hjmq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff991a2d3-9ec9-435d-ada6-8aa6918cca30_1200x1200.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><p>For over a decade as an AI Product Manager, I&#8217;ve been navigating these scenarios. Whether building AI products for end-consumers or internal teams, the underlying purpose of AI remains the same: to make things <strong>easier</strong>, <strong>faster</strong>, and <strong>cheaper</strong>. 
The real challenge isn&#8217;t in the promise of AI but in finding the right problems for AI to solve&#8212;and doing so efficiently.</p><p>In large enterprises, where processes are complex and stakeholders are diverse, discovering AI use cases is rarely straightforward. From my experience, the requests and opportunities typically fall into three distinct scenarios:</p><ol><li><p><strong>Scenario 1:</strong> Someone believes they know the problem and is confident AI is the solution.</p></li><li><p><strong>Scenario 2:</strong> Someone believes they know the problem but isn&#8217;t sure if AI is the right fit.</p></li><li><p><strong>Scenario 3:</strong> No one recognizes the problem, let alone considers AI as a potential solution.</p></li></ol><p>Of these, Scenario 1 is often the easiest to approach. When stakeholders know you as someone who delivers AI solutions, they proactively reach out to you with their ideas. However, that doesn&#8217;t mean it&#8217;s without challenges. 
Scenarios 2 and 3, on the other hand, require deeper exploration, closer alignment with stakeholders, and a methodical approach to uncover the real opportunities AI can address.</p><div><hr></div><h3><strong>A Crucial Mindset: Assume They Are Wrong</strong></h3><p>Let me share one critical piece of advice, especially for Scenario 1 and 2: <strong>always assume stakeholders are wrong in their assumptions.</strong> This applies to both their understanding of the underlying problem and their belief that AI is the right solution. While this might sound overly cautious, it&#8217;s a mindset that ensures you approach every idea critically and methodically.</p><p>Stakeholders often view problems through the lens of their immediate frustrations, focusing on surface-level symptoms rather than the root cause. For instance, a department lead might say, &#8220;<em>We need AI to make our chatbot smarter.</em>&#8221; On the surface, this sounds reasonable, but upon closer investigation, you might uncover that the real issue lies in outdated or incomplete FAQs feeding into the chatbot. Or perhaps the department lead was actually referring to this root issue but framed it in a way that&#8217;s open to misinterpretation. Either way, relying solely on their initial framing can easily send you down the wrong path.</p><p>The same caution applies to their belief that AI is the solution. While AI is undoubtedly powerful, it&#8217;s not a one-size-fits-all tool. Many challenges can be addressed more effectively&#8212;and often more efficiently&#8212;using simpler approaches like process optimization, basic automation, or off-the-shelf software solutions. The reality is that stakeholders often perceive AI as a magic bullet without fully understanding its capabilities or limitations. That&#8217;s where you come in: <em>to guide them toward realistic, impactful solutions.</em></p><p>Now, I&#8217;m not suggesting you dismiss their ideas outright. 
On the contrary, validating assumptions is a key part of effective collaboration. The way forward is to ask thoughtful, probing questions that help clarify the situation, such as:</p><ul><li><p><strong>&#8220;What makes you think AI is the right solution here?&#8221;</strong></p></li><li><p><strong>&#8220;What data do we have to support this idea?&#8221;</strong></p></li><li><p><strong>&#8220;Is this problem repetitive or pattern-based?&#8221;</strong></p></li></ul><p>Digging deeper to validate the stakeholder&#8217;s assumption typically takes longer than just asking whether AI is the right fit. If your team&#8217;s focus is solely on delivering AI products, your first step should be to determine if the assumed problem can actually be solved by AI. If it can&#8217;t, you can politely guide the stakeholder to a different team better equipped to address their challenge.</p><p>However, there&#8217;s a tricky downside: if the assumed problem turns out to be incorrect, but the actual underlying issue is something AI could solve, you risk losing that opportunity. &#128578; <em>Well, I never said this was an easy world to navigate.</em></p><p>By approaching every proposed idea with healthy skepticism, you ensure only well-founded, high-impact opportunities move forward. This mindset not only prevents wasted efforts but also positions you as a trusted advisor. Stakeholders will appreciate your thoughtfulness and rigor. They recognize that you&#8217;re not just executing their requests but carefully aligning solutions with their real needs.</p><div><hr></div><h3><strong>Why This Mindset Matters</strong></h3><p>Adopting a skeptical mindset isn&#8217;t about being difficult or contrarian; it&#8217;s about protecting both your resources and your credibility. In the fast-paced, often ambiguous world of enterprise AI, jumping into a solution without validating assumptions can quickly lead to expensive missteps. 
<strong>Pause</strong> to critically evaluate the problem and its fit for AI. You will not only minimize your risk but also elevate the quality of your outcomes.</p><p>This mindset is especially important in large organizations, where AI use cases often face scrutiny from multiple angles: leadership wants measurable ROI, technical teams need clear feasibility, and end-users expect seamless solutions. It&#8217;s not about saying &#8220;no&#8221; to every idea but about ensuring the <strong>right problem</strong> is being solved in the <strong>right way</strong>.</p><div><hr></div><h3><strong>Lessons from My Experience</strong></h3><p>In my 10+ years of navigating these scenarios, I&#8217;ve learned that skepticism often leads to unexpected breakthroughs. Stakeholders may approach you with a flawed framing of their problem, but asking the right questions can uncover opportunities that no one saw coming. One of the most satisfying moments in this role is when a stakeholder realizes, mid-conversation, that their initial assumption wasn&#8217;t quite right&#8212;but that together, you&#8217;ve identified a problem far more valuable and solvable.</p><p>I remember a time when a sales team came to me saying, &#8220;<em>We need AI to analyze why we&#8217;re losing deals.</em>&#8221; After digging in, it turned out they already had plenty of insights into why deals were being lost; the real issue was that sales reps didn&#8217;t have an easy way to act on this data in real time. By reframing the problem, we shifted from building a generic AI analytics tool (<em>is AI needed at all?!?</em>) to developing an AI-powered recommendation engine that suggested the best next steps for reps to take during a deal cycle. The result? Higher adoption, faster decision-making, and a direct impact on revenue.</p><p>It&#8217;s this kind of journey&#8212;<em>from assumption to clarity</em>&#8212;that makes skepticism so powerful. 
Not only do you deliver better solutions, but you also build stronger relationships with stakeholders.</p><div><hr></div><h3><strong>Making It Practical</strong></h3><p>If you&#8217;re an AI Product Manager or working in a similar role, here&#8217;s how you can adopt this mindset in a way that&#8217;s both efficient and collaborative:</p><ol><li><p><strong>Start with Curiosity, Not Criticism:</strong> Stakeholders may not articulate their problems perfectly, but they often have valuable context. Use their initial ideas as a starting point, not an endpoint.</p><ul><li><p><em><strong>Example</strong></em>: &#8220;<em>That&#8217;s an interesting challenge. Can we explore what&#8217;s behind it?</em>&#8221;</p></li></ul></li><li><p><strong>Be Transparent About AI&#8217;s Strengths and Limits:</strong> Educate stakeholders <em>early</em> about what AI can realistically achieve and where it might not be the best fit.</p><ul><li><p><em><strong>Example</strong></em>: &#8220;<em>AI works great for repetitive, data-driven problems, but this challenge might be better addressed with process changes.</em>&#8221;</p></li></ul></li><li><p><strong>Collaborate to Reframe the Problem:</strong> Use structured methods like workshops, hypothesis-driven discovery, or even simple brainstorming sessions to align on the real issue.</p><ul><li><p><em><strong>Example</strong></em>: &#8220;<em>If the problem isn&#8217;t the chatbot&#8217;s intelligence but the content feeding it, we might need to start there before applying AI.</em>&#8221;</p></li></ul></li><li><p><strong>Don&#8217;t Be Afraid to Pivot:</strong> If AI isn&#8217;t the right solution, guide the stakeholder toward other options. 
This honesty builds trust and sets the stage for future collaboration.</p></li></ol><div><hr></div><h3><strong>Final Thoughts</strong></h3><p>While this article focused on the mindset required for companies and, in particular, AI Product Managers when identifying the right problems for AI, there are plenty of techniques&#8212;like Hypothesis-Driven Discovery, Value Stream Mapping, Design Thinking, Job Shadowing, or Process Mining&#8212;that can help you drill down to root problems and, at the same time, discover whether AI might help. Each of these fits some of the scenarios introduced here better than others, and not all are necessary. But knowing what tools and methods you can leverage makes Product Discovery easier and much more structured.</p><p>Once you&#8217;re ready, I&#8217;ll write about those methods, showing how they can enhance your process. For now, let&#8217;s focus on building that mindset&#8212;it&#8217;s the foundation for everything else.</p><p>JBK &#128330;&#65039;</p><div><hr></div><p>P.S. If you&#8217;ve found my posts valuable, consider supporting my work. While I&#8217;m not accepting payments right now, you can help by sharing, liking, and commenting here or on my LinkedIn posts. This helps me reach more people on this journey, and your feedback is invaluable for improving the content. Thank you for being part of this community &#10084;&#65039;.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#37 - AI Beyond Cost Savings ]]></title><description><![CDATA[AI&#8217;s Power to Accelerate Time-to-Market]]></description><link>https://www.jaserbk.com/p/ai-beyond-cost-savings</link><guid isPermaLink="false">https://www.jaserbk.com/p/ai-beyond-cost-savings</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 08 Dec 2024 12:03:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3oU2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<pre><code><strong>A quick note:</strong> I see the stats showing people are opening my newsletters, but it&#8217;s hard to tell if they&#8217;re truly resonating or delivering value. I would really love to hear from you &#8212; your thoughts, feedback, or even just a quick comment. </code></pre><pre><code>Thank you for being part of this journey &#10084;&#65039;.</code></pre><div><hr></div><p>#beyondAI - I believe <strong>time-to-market</strong> is critical for every company, whether a scrappy startup or a massive multinational corporation. 
For both, the speed at which they move from concept to launch determines not only how quickly they gain insights from the market but also how fast they start generating revenue.</p><pre><code><em>It&#8217;s a simple equation</em>: the faster you get your product into customers' hands, the sooner you can iterate, adapt, and deliver something even better.</code></pre><p>To reduce time-to-market, the first step is <strong>understanding the actual processes and workflows</strong> that companies have in place to deliver. Let&#8217;s take a closer look at the <strong>typical process for delivering software</strong>, whether as a product or a service. </p><p>Most companies follow a structured process for product development, one that has evolved over time into what is often considered the <em>standard</em>. But let&#8217;s be honest - just because it&#8217;s the <em>standard</em> doesn&#8217;t mean it&#8217;s the best. <em>And we&#8217;ll come back to that later.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3oU2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3oU2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!3oU2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!3oU2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!3oU2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!3oU2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png" width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:301883,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3oU2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!3oU2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!3oU2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!3oU2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccc1959f-49a8-4c0f-ad10-9014e908579a_1200x1200.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><p>These internal software processes are usually aligned with the <strong>Software Development Lifecycle Process (SDLP)</strong>, which consists of six stages:</p><div><hr></div><h3><strong>The Six Stages of the Software Development Lifecycle Process (SDLP)</strong></h3><ol><li><p><strong>Idea or Demand Generation</strong></p><p>This is where it all begins. A need for a product or service is identified, often driven by customer feedback, market opportunities, internal optimization goals, or regulatory requirements. The main goal here is to <em>clearly define the problem</em> and assess the potential impact of solving it.</p></li><li><p><strong>Concept Validation</strong></p><p>At this stage, the idea is evaluated for feasibility. 
Teams assess whether the solution aligns with company objectives, whether the required resources are available, and if the market is ready. This phase often includes developing a preliminary business case and going through the initial approval processes.</p></li><li><p><strong>Design</strong></p><p>This is where the idea takes shape. Teams create detailed specifications, wireframes, or prototypes to define what the product will look like, how it will function, and how it will integrate with other systems. The focus here is to provide a clear blueprint for the development phase.</p></li><li><p><strong>Development</strong></p><p>This is the stage where the product is actually built. Developers write code, create features, and integrate systems. Whether it&#8217;s done in agile sprints or using a traditional waterfall model, this phase is all about execution.</p></li><li><p><strong>Testing</strong></p><p>Before the product is released, it undergoes thorough testing to ensure it meets quality, security, and functionality standards. This phase includes unit testing, integration testing, and user acceptance testing (UAT). It&#8217;s where bugs are identified and resolved.</p></li><li><p><strong>Delivery or Launch</strong></p><p>Finally, the product or service is released to the market or end users. This includes deployment, marketing efforts, and initial customer support to ensure everything runs smoothly. Feedback loops are typically established during this phase to gather early user insights.</p></li></ol><div><hr></div><p>Each of these stages is then tailored to fit an organization&#8217;s <strong>specific requirements and governance structures</strong>. And here&#8217;s where it gets tricky. These company-specific adaptations often inflate the SDLP in ways that slow it down over time, regardless of whether the process follows a <strong>waterfall, iterative, or agile model</strong>. 
While the SDLP in its raw form is relatively lean, organizational adjustments can make it cumbersome, leading to various pain points at the <strong>process level</strong>.</p><div><hr></div><h3><strong>Process-Level Pain Points in Time-to-Market</strong></h3><p>Here are some common <strong>process-level pain points</strong> that stem from how processes are adapted to organizational needs:</p><ul><li><p><strong>Excessive Approvals</strong>: Endless layers of sign-offs can turn what should be quick decisions into weeks of delays.</p></li><li><p><strong>Rigid Governance</strong>: Processes designed to ensure quality often end up becoming the biggest hurdles to speed and agility.</p></li><li><p><strong>Misaligned Goals Across Teams</strong>: Different departments pursuing conflicting priorities can create bottlenecks and delays.</p></li><li><p><strong>Inefficient Resource Allocation</strong>: Waiting on approvals for funding, personnel, or tools can completely stall progress.</p></li></ul><p>These pain points stack up over time. <em>Honestly, I don&#8217;t know a single team that hasn&#8217;t complained about them.</em> It feels like the natural evolution of companies: </p><div class="pullquote"><p>The more mature companies become, the slower they get. </p></div><p>What starts as a framework for <strong>consistency and quality</strong> eventually turns into a maze of inefficiencies that drag time-to-market to a crawl.</p><p>Now, many companies tackle these pains, often investing heavily to identify where processes need streamlining and how to overcome inefficiencies. These efforts typically focus on the process level, which isn&#8217;t wrong&#8212;it can, in fact, significantly reduce time-to-market. However, focusing solely on processes is only part of the solution. 
There&#8217;s another critical area to optimize: <strong>workflows</strong>.</p><h2>Optimizing Where the Real Work Happens</h2><p>Workflows are the specific activities carried out to complete a task within a process step. While processes define the overarching structure, workflows deal with the hands-on execution of tasks. Optimizing workflows is just as important as streamlining processes, yet it&#8217;s often skipped.</p><p>This isn&#8217;t because companies don&#8217;t want to address workflows. It&#8217;s because those responsible for optimizing processes are usually strategic, high-level thinkers who lack a detailed understanding of the specialized workflows used by domain experts. These are people focused on the bigger picture&#8212;governance, approvals, or resource alignment&#8212;rather than the granular tasks being done on the ground.</p><p><strong>For workflows to be optimized, the drive must come from within, from the very core of where tasks are actually executed.</strong> It&#8217;s about empowering the teams and individuals who know these workflows best to identify inefficiencies and implement changes. This type of optimization requires collaboration between high-level strategists and domain experts to bridge the gap between processes and practical execution.</p><div><hr></div><h3><strong>AI&#8217;s Superpower Reveals Itself at the Workflow Level</strong></h3><p>As we entered the digitalization era, companies, guided by expert advice, began introducing new software to make workflows more efficient. This was followed by automation technologies like RPA (Robotic Process Automation) and scripting, which automated even more tasks. 
<em>But then, progress seemed to plateau.</em></p><p>Now, with the emergence of <strong>Generative AI</strong>, we&#8217;re standing on the brink of a new wave of transformation&#8212;automation, or at the very least, <strong>expert augmentation</strong>, that can make workflows even more efficient.</p><p>This potential, however, comes with a caveat:</p><p>Optimizing workflows with GenAI isn&#8217;t about simply applying the technology wherever possible. It requires a <strong>deep understanding</strong> of routines and workflows at the working level. You can&#8217;t just plug it in and expect miracles&#8212;you have to figure out where it truly fits.</p><p>Every GenAI initiative aimed at reducing time-to-market, especially in the context of software development, must be developed in <strong>collaboration with Subject Matter Experts (SMEs)</strong> who own those workflows. These are the people who live and breathe the daily tasks. They know exactly where bottlenecks are, what areas can be improved, and what must remain untouched.</p><p>For internal AI Product Managers, SMEs are not just collaborators&#8212;they&#8217;re key stakeholders, or even better, <em>customers</em> in the process. AI Product Managers need to work closely with SMEs to <strong>ideate, explore, and assess</strong> whether parts of a workflow can be augmented with AI or, in some cases, fully automated.</p><p>When this deep collaboration succeeds, it opens the door to endless possibilities. AI use cases across the software development lifecycle start to reveal themselves, each one holding the potential to reduce inefficiencies and speed up the process.</p><h3><strong>How I Create AI Use Cases to Reduce Time-to-Market</strong></h3><p>Let me walk you through how I typically proceed when creating an idea to support reducing time-to-market in software development. 
The process is structured but flexible enough to adapt to different organizational needs:</p><div><hr></div><ol><li><p><strong>Understand the Overall Process (e.g., SDLP)</strong><br>Start by mapping out the entire process you&#8217;re focusing on. In software development, this could be the Software Development Lifecycle Process (SDLP). Understand how the stages connect and identify where bottlenecks or inefficiencies might occur.</p></li><li><p><strong>Understand Which Workflows Typically Exist Within a Process Step (e.g., Concept Validation)</strong><br>Dive deeper into a specific process step. For example, in the Concept Validation stage, identify the workflows involved, such as defining requirements, validating feasibility, or aligning stakeholders.</p></li><li><p><strong>Ideate on Potential AI Use Cases for Specific Activities Within a Workflow</strong><br>Once the workflows are clear, focus on individual activities within them. For example, if a workflow involves gathering customer requirements, think about how AI could assist&#8212;perhaps through a Generative AI model that analyzes historical data to pre-fill requirements or suggest templates.</p></li><li><p><strong>Make a Rough Estimate About the Benefit of a Potential AI Use Case</strong><br>Evaluate the impact of your ideas. Would automating a task save time, reduce errors, or free up resources? Estimate potential gains in terms of time savings, cost reduction, or improved output quality.</p></li><li><p><strong>Prioritize the AI Use Cases and Find the Key Stakeholders</strong><br>Not all ideas will have the same impact, so rank them by potential value and feasibility. Then identify the stakeholders&#8212;usually SMEs or team leads&#8212;who are critical to validating the idea and implementing the solution.</p></li><li><p><strong>Approach the Stakeholders to Validate Your Assumptions</strong><br>Present your prioritized ideas to the stakeholders. 
Share your assumptions about the benefits and feasibility and gather their feedback to refine the use cases. Their input is essential for making realistic plans.</p></li><li><p><strong>Get Them on Board and, If Agreed, Start the AI Product Journey</strong><br>Once stakeholders see the value and agree to proceed, bring them on board as collaborators. This is where the actual AI product journey begins&#8212;from proof of concept to full implementation.</p></li></ol><div><hr></div><p>Sure, you can adapt this approach to fit your unique situation. Every company is eager to find ways to improve time-to-market, and maybe now you can take the lead by applying a similar approach. &#128640;</p><div><hr></div><h3><strong>Final Thoughts</strong></h3><p>When I first started writing this article, I planned to simply list AI use cases for improving time-to-market. However, I quickly realized that the approach might not be very helpful. Instead, I decided to share how I think about <strong>processes, workflows, and tasks</strong>&#8212;and how I use this mental model to identify impactful AI use cases. 
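</p><p><em>A toy illustration:</em> the rough-estimate and prioritization steps (4 and 5) above can be sketched in a few lines of Python. The field names, numbers, and the simple value-times-feasibility weighting below are purely hypothetical assumptions for illustration, not a prescribed scoring model:</p>

```python
from dataclasses import dataclass, field


@dataclass
class AIUseCase:
    """A candidate AI use case captured during discovery (illustrative fields)."""
    name: str
    hours_saved_per_week: float  # rough benefit estimate (step 4)
    feasibility: int             # gut-check score: 1 (hard) .. 5 (easy)
    stakeholders: list = field(default_factory=list)

    def priority(self) -> float:
        # Value-times-feasibility heuristic; the weighting is an assumption.
        return self.hours_saved_per_week * self.feasibility


# Hypothetical candidates drawn from the SDLP examples in this article.
candidates = [
    AIUseCase("Pre-fill requirement templates", 6.0, 4, ["Concept Validation lead"]),
    AIUseCase("Auto-generate UAT test data", 3.0, 5, ["QA lead"]),
    AIUseCase("Summarize stakeholder interviews", 2.0, 3, ["Product ops"]),
]

# Rank candidates so the highest-impact, most feasible ideas surface first (step 5).
for uc in sorted(candidates, key=AIUseCase.priority, reverse=True):
    print(f"{uc.priority():5.1f}  {uc.name}")
```

<p>In practice, the estimates should come from the SMEs who own the workflows; the point of a sketch like this is only to make trade-offs between candidate use cases explicit and comparable before you approach stakeholders.</p><p>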
My hope is that this resonates with you more than just a generic list ever could.</p><p>Here&#8217;s what I&#8217;ve touched on in this article:</p><ol><li><p><strong>Time-to-Market is Critical</strong>: In today&#8217;s fast-paced world, speed isn&#8217;t optional&#8212;it&#8217;s the difference between staying relevant or falling behind.</p></li><li><p><strong>Processes and Workflows Are Different</strong>: Addressing inefficiencies at both levels is key to achieving meaningful results.</p></li><li><p><strong>AI Unlocks New Possibilities</strong>: Generative AI is a game-changer, enabling smarter automation and augmenting expertise in entirely new ways.</p></li><li><p><strong>Collaboration is Key</strong>: The best solutions emerge when internal AI Product Managers and SMEs work together to uncover and implement impactful AI use cases.</p></li><li><p><strong>A Structured Approach Works</strong>: By following a clear, step-by-step process, you can ensure your AI initiatives are both practical and valuable.</p></li></ol><p>And here&#8217;s something even more important to keep in mind: stop positioning AI as just a tool for <strong>cost reduction</strong>. That narrative is tired. Instead, highlight its potential to <strong>accelerate time-to-market</strong>&#8212;because that&#8217;s where the real competitive edge lies.</p><p>Sadly, this perspective is often overlooked. Too many companies see AI solely as a way to save money. But imagine what could happen if they realized how AI could help them get to market faster. <em>It might just change everything.</em></p><p><strong>Maybe.</strong></p><p><em>JBK &#128330;&#65039;</em></p>]]></content:encoded></item><item><title><![CDATA[#36 - Why Your CEO Needs an AI Inventory Yesterday 🤯🚨]]></title><description><![CDATA[AI Chaos Is Costing You Millions - Fix It with AI Portfolio Management]]></description><link>https://www.jaserbk.com/p/why-your-ceo-needs-an-ai-inventory</link><guid isPermaLink="false">https://www.jaserbk.com/p/why-your-ceo-needs-an-ai-inventory</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 01 Dec 2024 11:51:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nGsH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09647dff-0c62-48fb-a388-fb49e29eafeb_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>#beyondAI</p><p>In the corporate world, the call for more structure and governance in managing AI initiatives is growing louder&#8212;and for good reason. Over the past two decades, we&#8217;ve seen companies enthusiastically dive into AI, hiring experts left and right. Some approached this strategically, while others took a more scattershot approach. <strong>The results? 
</strong>Varied, to say the least.</p><p>AI teams across organizations have been busy building models: some for internal use, others to enhance their company&#8217;s core services or products. But here&#8217;s the common thread&#8212;most have little to no visibility into what their counterparts in other teams are working on. It&#8217;s not uncommon for one team to discover&#8212;by pure accident&#8212;that another department is building the exact same thing, just for different stakeholders.</p><p>This kind of overlap isn&#8217;t a minor annoyance; it&#8217;s a <strong>serious problem</strong>. When teams unknowingly duplicate work, they multiply costs without multiplying value. Worse, they lose time&#8212;time that could have been spent improving or scaling existing solutions.</p><p>And here&#8217;s the heart of the issue: <em>AI models, much like any other corporate asset, need to be managed with clear ownership, transparent value tracking, and robust usage guidelines.</em></p><p>When companies don&#8217;t think of AI as part of a broader portfolio, they miss out on the opportunity to scale. Anyone who&#8217;s been involved in building a model knows how resource-intensive it is. 
It&#8217;s a heavy lift, and once that lift is complete, why wouldn&#8217;t you maximize its impact?</p><p>The ideal scenario is this: <strong>build a model once and adapt it for multiple use cases with minimal effort.</strong> That&#8217;s scalability. But the reality? It&#8217;s far from this ideal.</p><p>Take enterprise IT departments, our grown-up cousins, for instance. Most large companies understand that maintaining a well-organized software portfolio isn&#8217;t a luxury&#8212;it&#8217;s business-critical. Why not borrow from these established practices? Why not take what works in IT and adapt it to AI?</p><p>This is why I&#8217;m making the case for every company with a serious investment in AI to introduce <strong>AI Portfolio Management.</strong> It&#8217;s not rocket science. The tools, processes, and frameworks we need are already at our disposal. The challenge isn&#8217;t invention&#8212;it&#8217;s <em>adaptation and integration.</em> We don&#8217;t need to overhaul entire systems. We just need to introduce one extra step into daily workflows&#8212;a step that saves countless others down the line. <em>Seems like a good trade-off, doesn&#8217;t it?</em></p><p>For <strong>AI Product Managers</strong>, this topic is equally relevant. While this article is written for companies at large, it&#8217;s packed with insights to help you as an individual contributor. 
Whether or not your organization already has portfolio and inventory management in place, understanding these concepts empowers you to:</p><ul><li><p>Manage your AI products more effectively.</p></li><li><p>Capture key details to contribute to a future portfolio.</p></li><li><p>Align your work with broader organizational goals&#8212;and prepare for the moment your work becomes part of something bigger.</p></li></ul><p>Happy reading &#128715;&#65039;</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nGsH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09647dff-0c62-48fb-a388-fb49e29eafeb_1200x1200.png"><img src="https://substackcdn.com/image/fetch/$s_!nGsH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09647dff-0c62-48fb-a388-fb49e29eafeb_1200x1200.png" width="1200" height="1200" alt="" loading="lazy"></a></figure></div><div><hr></div><h3>What Is AI Portfolio Management?</h3><p>It&#8217;s easy to have different associations when we talk about a portfolio. 
For me, at the beginning of my own portfolio journey, the term conjured up multiple images:</p><ul><li><p>A stock market portfolio&#8212;assets reflecting financial worth.</p></li><li><p>An artist&#8217;s portfolio&#8212;a curated collection showcasing their skills.</p></li><li><p>A job application portfolio&#8212;a selection of your best work proving your capabilities.</p></li></ul><p>What ties all these examples together is a common theme: <strong>they&#8217;re all curated collections that reflect value.</strong> They aren&#8217;t random assortments&#8212;they&#8217;re <em>purposeful, intentional,</em> and designed to give a clear picture of worth.</p><p>This is exactly how we should think about an AI portfolio in a corporate context. It&#8217;s not just a list of all the models your teams have built or the AI initiatives underway. <strong>It&#8217;s a curated collection of assets that reflect the value your organization is generating with AI.</strong></p><p>And, just like a stock portfolio or an artist&#8217;s portfolio, an AI portfolio needs to be actively managed.</p><p><strong>Active management means making tough decisions</strong>:</p><ul><li><p>Which models are worth further investment?</p></li><li><p>Which should be retired or repurposed?</p></li><li><p>What new initiatives will maximize resources and align with company goals?</p></li></ul><p><strong>Every portfolio faces limiting factors</strong>:</p><ul><li><p>For the artist, it&#8217;s the finite space on a gallery wall.</p></li><li><p>For the broker, it&#8217;s the amount of money available to invest.</p></li><li><p>For the corporate AI space, it&#8217;s budgets, technical expertise, data availability, and more.</p></li></ul><p>To make your AI portfolio as valuable as possible, you need to navigate these constraints wisely.</p><div><hr></div><p>There are countless reasons to establish an AI Portfolio Management function. 
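</p><p>Before we get to the benefits, here is the &#8220;active management under constraints&#8221; idea above as a toy sketch in code. Everything in it is illustrative: the asset names, the value and cost figures, and the simple value-per-cost rule are assumptions made for the example, not a prescribed method:</p>

```python
from dataclasses import dataclass

@dataclass
class AIAsset:
    """One asset in the AI portfolio (all fields illustrative)."""
    name: str
    annual_value: float  # estimated business value per year
    annual_cost: float   # run and maintenance cost per year

def prioritize(assets: list[AIAsset], budget: float) -> list[AIAsset]:
    """Greedily fund the best value-per-cost assets until the budget runs out."""
    funded, remaining = [], budget
    for asset in sorted(assets, key=lambda a: a.annual_value / a.annual_cost, reverse=True):
        if asset.annual_cost <= remaining:
            funded.append(asset)
            remaining -= asset.annual_cost
    return funded

portfolio = [
    AIAsset("churn-model", annual_value=500_000, annual_cost=100_000),
    AIAsset("doc-summarizer", annual_value=120_000, annual_cost=80_000),
    AIAsset("legacy-forecaster", annual_value=40_000, annual_cost=90_000),
]
funded = prioritize(portfolio, budget=200_000)  # funds churn-model and doc-summarizer
```

<p>A real portfolio decision weighs far more than two numbers, but even a toy model like this forces the conversation about which assets actually earn their keep.</p><p>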
To make it easier for you to decide whether to implement this function&#8212;or not&#8212;I&#8217;ve compiled a list of the key benefits I&#8217;ve come across:</p><h3><strong>1. Strategic Alignment</strong></h3><ul><li><p><strong>Benefit</strong>: Align AI investments with business goals. Identify and prioritize AI projects that drive measurable outcomes like cost reduction, revenue growth, or customer satisfaction, ensuring alignment with strategic business objectives.</p></li></ul><div><hr></div><h3><strong>2. AI Use Case Portfolio Optimization</strong></h3><ul><li><p><strong>Benefit</strong>: Avoid redundancy and maximize value. Evaluate AI use cases to identify overlapping efforts, such as multiple teams working on similar models or data sets, and consolidate efforts to optimize resources.</p></li></ul><div><hr></div><h3><strong>3. AI Risk Management</strong></h3><ul><li><p><strong>Benefit</strong>: Minimize risks in AI deployment. Track AI model performance, compliance with regulations (e.g., GDPR, AI Act), and ethical risks like bias or unintended consequences to proactively address issues.</p></li></ul><div><hr></div><h3><strong>4. Cost and Resource Optimization</strong></h3><ul><li><p><strong>Benefit</strong>: Manage AI investments for maximum ROI. Provide visibility into the costs of training, deploying, and maintaining AI models and identify areas to reduce computational expenses or improve operational efficiency.</p></li></ul><div><hr></div><h3><strong>5. Improved Decision-Making</strong></h3><ul><li><p><strong>Benefit</strong>: Data-driven prioritization of AI initiatives. Use key performance indicators (KPIs) like accuracy, adoption rate, or time-to-value to make informed decisions about continuing, pivoting, or stopping AI projects.</p></li></ul><div><hr></div><h3><strong>6. Enhanced AI Governance</strong></h3><ul><li><p><strong>Benefit</strong>: Ensure compliance, ethical AI use, and accountability. 
Establish governance frameworks to monitor AI model usage, ensure fairness, and mitigate risks, while providing transparency into AI decision-making processes.</p></li></ul><div><hr></div><h3><strong>7. Supporting AI Transformation</strong></h3><ul><li><p><strong>Benefit</strong>: Facilitate company-wide AI adoption. Plan and coordinate the rollout of AI initiatives across departments, ensuring infrastructure readiness and aligning transformation efforts with business needs.</p></li></ul><div><hr></div><h3><strong>8. Enterprise-Wide AI Architecture</strong></h3><ul><li><p><strong>Benefit</strong>: Maintain a unified and scalable AI infrastructure. Design a robust AI architecture that integrates with existing IT systems and supports data pipelines, model lifecycle management, and deployment workflows.</p></li></ul><div><hr></div><h3><strong>9. Enhancing Collaboration Across Teams</strong></h3><ul><li><p><strong>Benefit</strong>: Foster alignment between data scientists, engineers, and business teams. Provide a shared platform or repository for tracking AI models, datasets, and results to improve collaboration and reduce silos.</p></li></ul><div><hr></div><p>If you carefully went through this list, you might have realized that establishing a powerful AI Portfolio Management function requires one critical foundation: bringing all AI teams across the organization together to document their new, in-development, and live AI products in one centralized place. This step is essential to enable that new function to work effectively.</p><p>And yes, I&#8217;m not saying this is an easy endeavor. It&#8217;s challenging, time-consuming, and requires commitment across all levels of the organization. But it&#8217;s necessary.</p><p>I firmly believe that every disruptive technology&#8212;and by now, I think we can all agree that AI falls into this category&#8212;requires an organizational and processual change to reveal its true potential. 
Without this shift, we&#8217;ll only see the power of AI at the tech level, never fully realizing its impact on the business side.</p><p>You&#8217;ll know your efforts have been successful once you&#8217;ve built and maintained a robust AI Inventory. Only with an AI Inventory can your AI Portfolio truly unfold the benefits I&#8217;ve outlined above.</p><div><hr></div><h3>What Is an AI Inventory?</h3><p>The word <em>inventory</em> takes me back to my school days when some of us worked part-time at a large grocery store during the summer. The task was simple: inventory. At least, that&#8217;s what they called it. What it really meant was counting every item on the shelves, documenting what was there, and noting what needed restocking. It was tedious, but also revealing. You&#8217;d discover that some products were flying off the shelves while others were collecting dust in the corner.</p><p>This is the essence of an inventory: it&#8217;s a systematic record of what you have. And while it may sound basic, having an accurate inventory is critical to making informed decisions. In retail, it helps manage stock levels. In AI, it&#8217;s a dynamic map of your AI ecosystem, detailing every asset&#8212;what it&#8217;s for, who owns it, how often it&#8217;s used, and how it might be repurposed or reused. Without this map, portfolio management is like trying to curate an art exhibit without knowing which pieces are in storage.</p><p>An AI Inventory provides visibility into the organization&#8217;s AI landscape. It ensures that every AI product and use case is accounted for, enabling the Portfolio Management function to make strategic decisions with confidence. Without an accurate and well-maintained inventory, AI efforts can quickly become fragmented, redundant, or misaligned with business goals.</p><p>So what exactly should be captured in an AI Inventory to support Portfolio Management? 
Here's a checklist to ensure your inventory captures everything it needs:</p><div><hr></div><h3>Key Details to Capture for Each AI Product &amp; AI Use Case</h3><ol><li><p><strong>General Information</strong></p><ul><li><p>Name of the AI Product or Use Case</p></li><li><p>Description of its purpose and functionality</p></li><li><p>Status (e.g., conceptual, in development, live, retired)</p></li></ul></li><li><p><strong>Ownership and Responsibilities</strong></p><ul><li><p>Owning team or department</p></li><li><p>Product Owner or main point of contact</p></li><li><p>Data Owner (if applicable)</p></li></ul></li><li><p><strong>Business Context</strong></p><ul><li><p>Problem it solves or opportunity it addresses</p></li><li><p>Stakeholders involved (internal and external)</p></li><li><p>Business unit(s) it supports</p></li><li><p>Expected business impact (e.g., cost reduction, revenue growth, efficiency improvements)</p></li></ul></li><li><p><strong>Technical Details</strong></p><ul><li><p>Type of AI model (e.g., regression, classification, generative)</p></li><li><p>Underlying technology or framework (e.g., TensorFlow, PyTorch)</p></li><li><p>Data sources used for training</p></li><li><p>Deployment environment (e.g., cloud, on-premises)</p></li></ul></li><li><p><strong>Performance Metrics</strong></p><ul><li><p>Key performance indicators (KPIs) for the model</p></li><li><p>Accuracy, precision, recall, or other relevant metrics</p></li><li><p>Current performance levels</p></li></ul></li><li><p><strong>Reusability Potential</strong></p><ul><li><p>Similar use cases or models it could support</p></li><li><p>Level of customization required for reuse</p></li><li><p>Dependencies or prerequisites for reuse</p></li></ul></li><li><p><strong>Lifecycle Management</strong></p><ul><li><p>Date of creation or deployment</p></li><li><p>Maintenance schedule</p></li><li><p>Version history</p></li></ul></li><li><p><strong>Compliance and Governance</strong></p><ul><li><p>Data privacy 
considerations (e.g., GDPR, CCPA compliance)</p></li><li><p>Ethical considerations (e.g., bias assessments)</p></li><li><p>Approval processes it has undergone</p></li></ul></li><li><p><strong>Cost and Resources</strong></p><ul><li><p>Development costs (time and budget)</p></li><li><p>Operational costs (e.g., compute resources, licensing fees)</p></li><li><p>Resource allocation (team members involved)</p></li></ul></li><li><p><strong>Usage and Impact</strong></p><ul><li><p>Frequency of use</p></li><li><p>Business outcomes achieved (quantitative and qualitative)</p></li><li><p>Feedback from end users</p></li></ul></li></ol><h2>Final Thoughts</h2><p>I&#8217;m pretty sure there&#8217;s more that could be tracked in an AI inventory, and the list will likely evolve. But I believe this is a solid starting point. And trust me, implementing even this foundational structure will already be challenging enough.</p><p>That said, I firmly believe <em>unlocking the full potential of your organization&#8217;s AI initiatives requires more than technical innovation.</em> It goes beyondAI. It demands organizational and processual change. 
The sooner companies embrace this, the sooner the benefits will materialize.</p><p>For <strong>AI Product Managers</strong>, this guide highlights the details you need to capture when building an AI product&#8212;even in the absence of formal portfolio or inventory management.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!c4Bt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3924fa2d-1de0-48bd-a7cc-57f6503fe1d5_1200x1200.png"><img src="https://substackcdn.com/image/fetch/$s_!c4Bt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3924fa2d-1de0-48bd-a7cc-57f6503fe1d5_1200x1200.png" width="1200" height="1200" alt="" loading="lazy"></a></figure></div><div><hr></div><p>Because one thing is certain: someday, the CEO will ask, <em>&#8220;Who can tell me how much value we&#8217;ve generated from our AI initiatives?&#8221;</em> And in that moment, you&#8217;ll want to be the one still sitting confidently in your seat, ready to report not only on your products but on their contribution to the company&#8217;s overall success.</p><p><strong>So, why not start now?</strong></p><p>JBK &#128330;</p><div><hr></div><p>P.S. If you&#8217;ve found my posts valuable, consider supporting my work. While I&#8217;m not accepting payments right now, you can help by sharing, liking, and commenting here or on my LinkedIn posts. This helps me reach more people on this journey, and your feedback is invaluable for improving the content. 
Thank you for being part of this community &#10084;&#65039;.</p>]]></content:encoded></item><item><title><![CDATA[#35 - Too Technical to Succeed?]]></title><description><![CDATA[How I Let Go of the Technical Mindset to Become a Better AI Product Manager]]></description><link>https://www.jaserbk.com/p/too-technical-to-succeed</link><guid isPermaLink="false">https://www.jaserbk.com/p/too-technical-to-succeed</guid><dc:creator><![CDATA[JaserBK]]></dc:creator><pubDate>Sun, 10 Nov 2024 12:20:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VN1V!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a26cbbf-1e27-4383-b7d5-999b93654f9f_1200x1200.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><strong>#beyondAI - </strong>Recently, I told a peer, <em>"You're thinking way too technically. You need to let go of the past a bit."</em></p><p>This peer came from a deeply technical background. He was a computer scientist, completely immersed in algorithms, and obsessed with accuracy metrics. 
He believed that if you could build the best solution from a technical perspective, that was all that mattered. He even spent years researching Artificial Intelligence, dedicated to developing NLP systems that could push boundaries. But now, he was a Product Manager in AI&#8212;navigating a space that demanded much more than just technical skills.</p><div><hr></div><p>Yet, he still approached the challenges with a laser focus on technology. He was always thinking about the smartest algorithms, the highest accuracy, and the most elegant code. We&#8217;d be in meetings, and his mind would jump straight to technical solutions, even when we were supposed to be talking about user experience or business needs. I watched him spend hours refining models with his data scientist when a simpler approach could have done the job just as well. Watching him reminded me so much of my own journey.</p><p>And in truth? The peer I was advising was me.</p><p>When I first transitioned to AI Product Management, I thought my technical expertise was my most valuable asset. And in many ways, it was. But it also became my biggest blind spot. 
I believed that if I could just architect the best solutions, everything else would naturally fall into place. I had to learn the hard way that being too technical can be just as limiting as not being technical at all.</p><p>This post is for all my technical peers out there who are making the leap into Product Management. And for those who have already made that leap, maybe you need to hear this right now too.</p><p><em>"Hi, my name is Jaser, and I was a 'too-technical' PM."</em></p><p>Happy reading &#128715;&#65039;</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VN1V!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a26cbbf-1e27-4383-b7d5-999b93654f9f_1200x1200.png"><img src="https://substackcdn.com/image/fetch/$s_!VN1V!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a26cbbf-1e27-4383-b7d5-999b93654f9f_1200x1200.png" width="1200" height="1200" alt="" loading="lazy"></a></figure></div><div><hr></div><h2>What It Means to Be "Too Technical" as an AI Product Manager</h2><p>As AIPMs, we often wear our technical expertise as a badge of honor. And it&#8217;s true&#8212;understanding the tech is super important. But here's the thing: if we focus too much on the technical side, we can lose sight of what the user and business really need. Let me share a few lessons I learned along the way:</p><h3>Pitfall 1: Over-Engineering</h3><p>At first, I was always striving for technical perfection, aiming to use the most cutting-edge algorithms and fine-tuning models to the smallest details. I wanted every solution to be flawless. But I quickly learned that over-engineering can slow down progress. 
The business didn't need perfection; it needed solutions that worked well enough to deliver value&#8212;and deliver it quickly.</p><p>Sometimes, a simpler approach could do the job instead of creating a complex neural network. My urge to use the latest tech often led me to build more than necessary. You can imagine what this meant: it delayed the path to MVP and wasted resources.</p><h3>Pitfall 2: Losing Sight of the End-User&#8217;s Problem</h3><p>My technical mindset often made me jump to how before I fully understood why. I was always so eager to start building a solution, even before I truly understood the user&#8217;s pain points. Over time, I learned that empathy is where effective AIPMs need to begin. You need to put yourself in the user's shoes and really see the problem through their eyes. Only then can you ensure that every solution truly addresses the real needs of the end-user.</p><p>Users might ask for a more intuitive search function, and my technical mind would push me toward building a sophisticated NLP model. But sometimes, all that was needed was an enhanced keyword search with some simple filters, which could have been delivered faster and with less effort.</p><h3>Pitfall 3: Paralysis by Metrics and Data</h3><p>Data was both my comfort zone and my downfall. I always felt uneasy making decisions unless every single metric was perfectly aligned. Have you ever been stuck in the data spiral, waiting for the perfect answer? You can probably guess what happened&#8212;I kept waiting for more data, analyzing, and then overanalyzing. And, as you can imagine, decision-making dragged on, and we ended up missing opportunities. Eventually, I had to face the truth: sometimes, you just need to trust your gut and be willing to test ideas quickly instead of waiting for absolute certainty. 
This is especially tough for those of us with a technical background because it goes against everything we were taught.</p><h3>Pitfall 4: Over-focusing on Scalability Too Early</h3><p>Coming from a technical background, I was obsessed with making sure every solution could scale to millions of users. And here&#8217;s the funny part&#8212;most of the time, the market wasn't even that big, but my technical mind couldn't let go of this obsession. Scalability is important, sure, but focusing on it too early led me to complicate the architecture for no good reason. I wasted so much precious time when what we really needed was just a simple MVP to validate the idea first and gather real user feedback.</p><p>In one case, instead of creating a simple backend that could handle a small user base and iterating from there, I had the wild idea of a distributed microservices architecture that ultimately would have delayed our go-to-market timeline. Thankfully, a senior developer on my team&#8212;who would have made a brilliant Product Manager&#8212;steered us in a better direction. Scalability can come later&#8212;first, you need to prove that users even want what you're building.</p><h3>Pitfall 5: Underestimating the Importance of Non-Functional Requirements</h3><p>At first, I thought that if the core AI model worked well, everything else would just fall into place. But believe me, I had to learn that non-functional requirements are crucial too. Funny, but true. Things like usability, maintainability, and ease of integration&#8212;they're just as important as the core functionality itself. Ignoring these aspects meant we ended up with 'products' that might have been impressive from a technical standpoint, but they were a nightmare to use or integrate. And honestly, can we even call them products if they can't be used? If something can't be used, it's not a real solution. And only real solutions have the potential to grow into successful products. 
Read more on this idea in the link below. </p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;1d51e4dc-8b9d-4198-a566-f32316b87aa1&quot;,&quot;caption&quot;:&quot;#beyondAI&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Path to AI Product&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:169499282,&quot;name&quot;:&quot;JaserBK&quot;,&quot;bio&quot;:&quot;I think, talk, and write about AI Product Management for Enterprises, with a focus on helping aspiring AI Product Managers.\n\nLet&#8217;s master the art and science of AI Product Management together &#128330;&#65039;&#127757;&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3af0ce6-7255-4034-88b9-5a1192f49e57_3059x4589.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-07-28T12:30:53.825Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51a5df01-d314-4db6-97c9-2b2844daaa1d_1200x1200.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.jaserbk.com/p/the-path-to-ai-product&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:147089662,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;AI Product Management: A World Beyond 
AI&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ddb7ccd-dfe2-4bc4-b814-c504e372f16f_867x867.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>I worked on an AI-driven tool where I poured all my effort into creating a highly accurate recommendation engine. It felt like a huge win to get those metrics looking perfect. But then came the real challenge&#8212;integrating it with our client&#8217;s legacy system. Let me tell you, building the model was the easy part. Making it work seamlessly with old, complex infrastructure? That was a completely different beast. It&#8217;s something I wish I had thought more about from the beginning, especially in a large enterprise environment.</p><div><hr></div><h2>Why Letting Go (Just a Little, and Sometimes a Little More) Matters</h2><p>Now that I've shared some examples of my mistakes, I truly believe they're closely tied to my technical mindset, my education, and my experiences as a data scientist and developer. But you know what? Reflection is what really helps me uncover my blind spots. I have this habit&#8212;sometimes it's good, sometimes not so much&#8212;of reflecting on almost every aspect of my life. It helps me grow, though I'll admit, it can also lead me to overthink things. And a quick Google (or ChatGPT) search will tell you that overthinking isn't good for you. But in this particular case, reflection really helped me understand why I acted the way I did, and it ultimately helped me develop strategies to find a better balance.</p><p>As AIPMs, our mission is to deliver value and solve real problems. I think we can all agree on that.</p><p>Technical expertise is definitely an asset as an AIPM, but it's only part of what makes us truly effective.
The real impact, I found, came when I stepped back from the technical side and learned to balance it with user insights, business goals, and a willingness to prioritize progress over perfection. Once I understood that there are <strong>six key dimensions</strong> I needed to focus on as an AIPM to even have a chance at success, my mindset shifted&#8212;from a purely technical one to what people often call a <em>product mindset</em>.</p><p>I've written an entire article about those six dimensions, and you might want to give it a read. It was my key to '<em>letting go</em>.' It's about deeply understanding the interplay between Data, AI, and IT on one side&#8212;the technical trio&#8212;and Business, Governance, and People on the other&#8212;the strategic trio. If you're only strong on one side, you won't find the balance you need.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;0cf5bc02-1eb2-405a-8d34-9dcf838075f7&quot;,&quot;caption&quot;:&quot;&#128161; This framework will be steadily improved. You are now reading The Double Trio Framework v2.0 for AI Product Management. 
You can read about the changes here:&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;md&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Double Trio Framework for AI Product Management &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:169499282,&quot;name&quot;:&quot;JaserBK&quot;,&quot;bio&quot;:&quot;I think, talk, and write about AI Product Management for Enterprises, with a focus on helping aspiring AI Product Managers.\n\nLet&#8217;s master the art and science of AI Product Management together &#128330;&#65039;&#127757;&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3af0ce6-7255-4034-88b9-5a1192f49e57_3059x4589.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-05-19T11:41:19.693Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56b1e8b1-3aec-401a-a2d1-bef67e303922_1200x1200.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.jaserbk.com/p/the-double-trio-framework-for-ai&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:144773086,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:6,&quot;comment_count&quot;:5,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;AI Product Management: A World Beyond AI&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ddb7ccd-dfe2-4bc4-b814-c504e372f16f_867x867.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>The truth is, being an AI Product Manager means walking a fine line. 
<strong>Being too technical can be just as ineffective as lacking technical depth altogether.</strong> Finding that balance was key for me, and maybe it can be for you too.</p><p>And I hope you get what I mean by 'letting go.' Sometimes, you have to let go first to make room for something new. Often, that '<em>something new</em>' turns out to be even better. In this case, it's about letting go of that purely technical mindset&#8212;at least for a while&#8212;so you can make space for a new, broader mindset to grow.</p><p>Where could you benefit from letting go a bit, to create that space for something new?</p><p>JBK &#128330;&#65039;</p><div><hr></div><p>P.S. If you&#8217;ve found my posts valuable, consider supporting my work. While I&#8217;m not accepting payments right now, you can help by sharing, liking, and commenting <strong><a href="https://www.linkedin.com/posts/jaserbk_beyondai-activity-7261343762197491712-jiP6?utm_source=share&amp;utm_medium=member_desktop">here or on my LinkedIn posts</a></strong>. This helps me reach more people on this journey, and your feedback is invaluable for improving the content. Thank you for being part of this community &#10084;&#65039;.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jaserbk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Product Management: A World Beyond AI! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item></channel></rss>