

AI As A Teammate: The New Collaboration Code

Genine Fallon·Dec 2025·10 min read

Last updated March 29, 2026

For most of the last decade, AI was treated as a tool: a way to automate, accelerate and optimize. But 2025 changed the conversation. When algorithms became good enough to reason, contextualize and anticipate, they stopped being tools and started becoming teammates. That subtle shift, from control to collaboration, is the most profound leadership test of our time. The firms that treat AI as a relational evolution, not just a technical upgrade, will define the next era of performance.

Executive Summary

The competitive edge in 2026 belongs to leaders who build cultures where AI amplifies human judgment through collaboration, not cultures where AI replaces human thinking.

  • More than 70% of companies have integrated generative AI into daily operations (McKinsey, November 2025), but productivity gains remain uneven because most organizations optimized for speed, not collaboration.
  • A joint Harvard Business School and BCG study found that consultants using AI completed 12.2% more tasks and worked 25.1% faster, but only when they treated AI as a collaborative partner rather than a shortcut.
  • Global employee engagement fell to 21% in 2024 (Gallup, 2025), and AI adoption without human-centered change management risks accelerating that decline.

The Tool Era: When Mastery Outpaced Meaning

    The tool era produced impressive adoption metrics but flattened thinking, as teams automated processes without asking why those processes existed in the first place.

    The adoption numbers look impressive on paper. By 2025, 75% of workers were already using AI at work, with nearly half having started within the previous six months (Apollo Technical/Microsoft, 2024). The Federal Reserve Bank of St. Louis reported that workers using generative AI saved an average of 5.4% of their work hours per week, translating to a 1.1% aggregate productivity increase across the entire workforce.

    But here's what those numbers don't capture: the quality of the work, the depth of the thinking, the engagement of the teams producing it.

    I've spent more than two decades working at the intersection of investor relations, strategic communications and technology. At the core, my work has always been about people. How we think. How we build trust. How we communicate under pressure. And what I watched happen during the tool era was a kind of intellectual flattening. Teams automated their processes and stopped asking why the processes existed in the first place. Efficiency became a reflex. Curiosity became an afterthought.

    Leaders quietly realized that faster wasn't the same as smarter.

    The Teammate Era: A Different Kind of Intelligence

    AI now collaborates by surfacing patterns across thousands of documents, stress-testing theses in real time, and flagging inconsistencies that multiple rounds of human review missed.

Today, AI doesn't just execute. It collaborates. Across ten thousand documents, it surfaces patterns a human analyst would miss. It stress-tests an investment thesis against contradictory data in real time, as the shift in AI-driven deal origination demonstrates. It flags tonal inconsistencies across a 40-page investor deck that three rounds of human review didn't catch.

    The Harvard Business School and BCG study is worth examining closely. Consultants who used AI as a collaborative partner, iterating with it, questioning its outputs, refining together, saw measurable gains in both speed and quality. Those who used AI as a replacement for their own thinking, accepting outputs at face value, produced work that was often confident-sounding but analytically shallow. The tool didn't determine the outcome. The relationship with the tool did.

    I saw this firsthand in my work with AI Owl, a platform that helps organizations build AI fluency. When teams began using it as a collaborative partner rather than a replacement, creativity surged, fear dropped and momentum returned. The shift wasn't technological. It was relational.

    "AI doesn't replace the human work that matters most: discernment, judgment, what leaders can't delegate. What it does is free up the cognitive space to do that work better. But only if leaders frame it that way. Every team I've worked with that treated AI as a threat fractured. Every team that treated it as a thinking partner accelerated."

    Genine Fallon, Managing Director of Capital Formation and Investor Relations, Praxis Rock Advisors.

    Trust as Infrastructure

    With 40% of workers needing reskilling within three years and global engagement at 21%, organizations must invest in communication and change management alongside AI deployment.

    IBM's 2025 report found that 40% of global workers will need reskilling within three years because of AI-driven task shifts. That number should alarm every executive who hasn't invested in change management alongside technology deployment.

    Gallup's 2025 State of the Global Workplace report paints an even starker picture: global employee engagement fell from 23% to 21% in 2024, costing the world economy an estimated $438 billion in lost productivity. Twenty-nine percent of disengaged employees cited a lack of clear, honest or consistent communication from leaders as a primary factor.

    Now layer AI adoption on top of that. Organizations asking their teams to collaborate with intelligent systems while failing to communicate the purpose, the boundaries, the expectations, and the support structure are compounding an existing trust deficit. The technology isn't the problem. The leadership gap around the technology is the problem.

    Teams that flourish with AI will treat it the way great leaders treat people: with clear expectations, consistent feedback and psychological safety. That means creating space to experiment without penalty. It means being transparent about what AI can and cannot do. It means investing in training that builds confidence, not just competence.

    The Human Shift: Bravery Over Perfection

AI collaboration demands psychological safety, public experimentation and honest iteration: qualities that most corporate cultures still penalize rather than reward.

    Collaboration with AI means relinquishing the illusion of control. It means experimenting publicly, learning loudly and trusting that mistakes are data, not failures.

    This is harder than it sounds. Most corporate cultures still reward certainty and penalize vulnerability. Leaders who say "I don't know, let's figure this out together with AI" are taking a real professional risk in environments that prize polished answers. But that posture, honest and iterative, is exactly what AI collaboration demands.

    The smarter our tools become, the more courage they require from the people using them.

    BCG's 2024 research reinforced this: GenAI doesn't just increase productivity, it expands the range of tasks workers can perform. But that expansion only materializes when workers feel safe enough to push into unfamiliar territory. Without psychological safety, AI becomes another tool that a handful of power users adopt while everyone else watches from the sidelines.

    Where the Next Alpha Lives

    AI is table stakes; the alpha comes from four practices: starting with hypotheses, collaborating in loops, using AI to reveal blind spots, and treating AI as a thinking partner.

    The leaders who will define 2026 aren't those who deploy the most models. They're the ones who integrate AI into cultures of trust, curiosity and creative tension.

    AI is table stakes. Alpha comes from application. And application is a human problem, not a technical one.

    From my own field experience, four practices consistently separate teams that generate real value from AI and teams that generate activity:

    First, start with a hypothesis, not a blank page. When we tested three competing investment theses using AI, the tool surfaced tension points that refined the narrative in ways no single analyst would have caught independently.

    Second, collaborate in loops, not bursts. Iterating through regulatory, operational and market constraints with AI sharpens clarity and aligns stakeholders faster than sequential human review.

    Third, use AI to reveal blind spots, not solve them. In one engagement, AI identified subtle tone inconsistencies across investor materials that had survived three rounds of editing. The AI flagged the problem. Humans made the decision about how to fix it.

    Fourth, treat AI like a teammate, not a threat. An AI fluency initiative I worked on increased productivity and measurably reduced fear. Once employees saw AI as a collaborator, not a replacement, creativity returned.

    The future of work won't be measured by who works fastest. It will be measured by who works wisely, with both the courage to collaborate and the judgment to know when the human answer is the right one.


    Context

    The shift from AI-as-tool to AI-as-teammate is a leadership, communication, and cultural challenge that requires the same discipline as effective investor relations and strategic communications.

    AI adoption is accelerating across every sector, but the firms capturing real value are the ones investing in the human infrastructure around the technology, not just the technology itself. The shift from AI-as-tool to AI-as-teammate is a leadership challenge, a communication challenge and a cultural challenge. Getting it right requires the same discipline that drives effective investor relations and strategic communications: clarity of message, consistency of execution and deep attention to how people experience change.

    Praxis Rock Advisors works with PE firms, fund managers and portfolio companies on autonomous agent infrastructure where clear communication and trust-building are operational requirements, not soft skills. Schedule a conversation to discuss how this applies to your firm.


    Frequently Asked Questions

How do you measure whether a team is collaborating with AI or outsourcing its thinking?

Track three things: task completion quality (not just speed), team confidence levels with AI tools over time, and the ratio of AI-generated outputs that get accepted without revision versus those that require significant human editing. If that last number is climbing, your team is outsourcing thinking, not collaborating. The Harvard/BCG study showed the distinction clearly: quality gains only appeared when humans stayed actively engaged with the AI's output.

What is the most common mistake leaders make when introducing AI?

Framing it as a productivity tool and leaving it at that. When AI is positioned purely as "do more, faster," teams hear "we need fewer of you." The organizations I've seen succeed frame AI as a capability expansion: you'll be able to do work you couldn't do before. That framing changes the emotional response from fear to curiosity, and that emotional shift determines adoption rates more than any training program.

Where does AI add the most value in investor relations?

In IR, AI excels at pattern recognition across LP communications, identifying which narratives resonate with specific investor profiles, stress-testing pitch materials against known objections and monitoring competitive positioning across peer fund marketing. But the relationship, the trust, the read on what an LP actually needs to hear: that remains entirely human. The best IR professionals I know use AI to sharpen their preparation so the human conversation lands harder.

Can smaller firms keep pace with larger firms on AI adoption?

Smaller firms actually have an advantage here. They have fewer layers of approval, faster feedback loops and cultures where experimentation is already the norm. The AI fluency gap isn't about budget. It's about leadership posture. A managing partner who openly uses AI in their own workflow and talks about what works and what doesn't creates more cultural change than a six-figure training program at a larger firm.

How much does psychological safety matter to AI adoption?

It's the single biggest predictor of success. Gallup's data on engagement decline tells the story: when people don't feel safe communicating honestly with leadership, they won't experiment with unfamiliar tools. AI adoption in low-trust environments becomes performative. People use it to check boxes, not to think differently. Building the safety to fail publicly with AI is a prerequisite, not a nice-to-have.

