Your portfolio manager just spent an hour on a client call explaining why they’re positioned the way they are in emerging markets. Brilliant insights. Exactly the kind of thinking that should be in your thought leadership.
Two weeks from now, you’ll ask them to write something for your quarterly commentary. They’ll stare at a blank page, hate every minute of it, and produce three paragraphs you’ll spend days trying to fix.
In our work with investment industry clients, we see this constantly, and it points to what we believe may be one of the highest-leverage AI use cases: capturing what managers say in a variety of settings and turning it into something worth publishing. Most firms just haven’t figured out how.
Most Firms Are Stuck in Low-Leverage Use Cases
Here’s a pattern we see often. Marketing teams open ChatGPT, type a prompt about their firm’s investment strategy, and ask it to write a white paper. What comes back sounds professional. It has the right structure. And it’s completely useless. Why? Because it’s based on what the internet knows about investing, not what your portfolio manager actually thinks.
Or firms have AI write a first draft of a blog post. The portfolio manager reads it and says, “This isn’t what I would say at all.” Or worse, the manager lets it go because it’s easier than articulating a unique viewpoint. Now you’re editing something that’s wrong from the start instead of working with something that captures real thinking.
Testing AI tools across client projects over the past two years shows a clear pattern. Firms that use AI as a writing tool get garbage. Firms that use AI as a capture tool get content that strengthens their positioning in the market.
Most firms are still experimenting with basic AI use cases like transcription, first drafts, and summarizing documents. The technology is still evolving and firms are figuring out what’s possible. But even basic use cases fail when you don’t understand the underlying process.
What Actually Works
Your portfolio manager talks constantly. There are client calls explaining recent positioning changes; prospect meetings walking through the investment thesis; internal investment committee debates about sector views; webinars answering questions about market conditions; media interviews sharing their outlook. These are all natural settings where manager thinking is solicited and tested through interactions with other knowledgeable people. But when content is created for marketing, too often, all of that thinking disappears.
Most firms archive white papers and quarterly letters. Maybe conference pitch decks. But client meetings? Prospect calls? Internal discussions? Those conversations happen and vanish. The expertise your portfolio manager demonstrates five times a week just evaporates.
Applied well, AI can dramatically improve the process of generating quality content across a huge range of applications while reducing the burden on investment managers. Here’s one case study we’ve examined: a firm we know decided to capture manager thinking by transcribing everything the manager said to colleagues and clients about investment strategy. That included quarterly client calls, a conference presentation on emerging markets, two media interviews about Fed policy, and internal investment committee meetings where the portfolio manager explained portfolio changes. The roster of opportunities to gather manager insight will be familiar to most investment firms.
They uploaded all of it to NotebookLM. Not ChatGPT, Copilot, or Claude. That is an important distinction because those public-facing tools blend your content with everything they know from the internet. NotebookLM only works with what you give it. Upload six client calls, and it can only pull from those six calls. It can’t add generic market commentary or standard investment language. That constraint turns out to be exactly what you need.
For firms using Google Workspace, NotebookLM integrates directly with Google Drive. You can import transcripts and recordings without downloading and reuploading. Search your Drive for specific files or pull in entire folders of content.
Now when this firm needs a blog post, they’re drawing from six months of the portfolio manager’s actual commentary. When they draft the quarterly letter, they can see what the portfolio manager said to clients in March versus September. A qualified content creator can spot where thinking evolved and where it stayed consistent, and package the right content to meet the needs of a given audience.
That same library can – and should – be used as a baseline source for all marketing and reporting content, including social posts, conference presentations, webinar slides, email campaigns, and on and on.
Now, the portfolio managers review drafts and say “Yeah, that’s what I think” because it’s built from what they said, not from what AI thinks someone in their position should say.
AI works when you feed it specific knowledge from your firm. It fails when you ask it to generate content from nothing.
Why This Matters More Than You Think
An allocator we know who reviews hundreds of managers said something striking recently. “I used to love reading portfolio manager commentary. Now half of it is obviously AI-written. I can tell a mile away.”
Historian David McCullough famously said that “writing is thinking,” and that quote captures the essence of why original content is valuable: it allows a reader to understand who you are and to build a sense of trust. One key problem with AI content generated from public sources is that it does not reflect any specific person’s thoughts. And once a reader recognizes the hand of AI, the content ceases to be valuable, and it ceases to be read.
That’s what that allocator colleague of ours meant by his comment – not that firms shouldn’t use AI, but that public AI tools shouldn’t be used to articulate the firm’s viewpoint. Firms using AI to generate commentary from public market information all sound the same. When your content is indistinguishable from what 50 other firms published that month, you lose credibility with the people who decide whether to invest with you.
One $50 million hedge fund we know learned this the hard way. Their quarterly letters were very obviously written by AI, down to the generic structure ChatGPT defaults to. When a reader pointed it out, the manager was shocked. He’d been talking his thoughts into his phone, letting AI write it, giving it a quick look, and sending it out. He had no idea allocators could tell.
The investment space is highly competitive, and breaking through requires a distinctive point of view. The firms that stand out sound like themselves because their content reflects what their portfolio managers think when talking to clients or presenting at conferences.
Here’s another critical advantage of an internal AI tool like NotebookLM: it can spot inconsistencies before allocators do. You can see how your thinking evolved from Q1 to Q3. You can catch when your website describes a three-step process but your pitch book shows four steps. You can identify when different people at your firm describe the strategy differently.
What Gets in the Way
Very few of the clients we work with have adopted this model as yet. Indeed, what we typically see are firms rushing to adopt AI tools without thinking through the goals or the process needed to achieve them. AI experimentation can then become a target when company leadership freaks out about compliance or data security.
Another obstacle is underestimating the human expertise required to integrate AI tools into a larger process. AI tools can absolutely streamline your processes, but you need people who know how to achieve this goal. For example, you need people with institutional knowledge – employees who understand your investment approach well enough to recognize when AI gets it wrong. You need editors who understand how and why AI tools fail and can look for those failures in a quality control check. Critically, you need someone skilled at prompting these tools and organizing the captured content into something coherent. AI doesn’t eliminate the need for expertise. It shifts where you apply it.
The other obstacle is expecting AI to do too much. It’s not going to write your white papers. It’s not going to replace your content team. It’s a tool for capturing and organizing thinking. That’s valuable, but only if you use it for what it’s good at.
Where to Start
When working with clients, we’ve had the most success by developing specific use cases and testing the creation process. Identify the conversations that offer the most insight, then tie them to the process you use for a specific type of communication. Here’s one test we like: record your next quarterly client call or conference presentation, then use AI tools to create a blog post from it, versioned for each of the client segments you serve.
We suggest you try both internal and public-facing AI tools to determine how best to enhance and quality-control the output. Remember that tools like NotebookLM organize your captured content and only use what you upload. ChatGPT, Copilot, and Claude will blend your content with their general knowledge, which sometimes helps and sometimes dilutes your firm’s voice.
Either way, you’ll need a human editor. These tools help you capture and organize thinking. They don’t replace the expertise needed to turn that into content that sounds like your firm.
If it saves time or improves quality, expand. If not, figure out why before trying something else. The goal isn’t using AI. It’s capturing the expertise your portfolio managers already demonstrate when talking to clients and turning it into content people want to read.
Kylelane Purcell is President of Purcell Communications, a financial content firm specializing in writing and editing for asset managers.
Dan Sondhelm is CEO of Sondhelm Partners, which helps boutique asset managers attract investors, start conversations with investors, and build recognizable brands in crowded markets.