The following article was written by Kim Brushaber, an attendee of our AI for SaaS event during SXSW, and is published here with her permission.
You can connect with Kim here or check out her website.
---
There was a time when SXSW Interactive was my Super Bowl.
I would show up Thursday, badge swinging, fully prepared to inhale every session the convention center had to offer. I had color-coded schedules. Backup sessions in case the first one filled up. I would go hard from morning until midnight, power through on energy drinks and sheer enthusiasm, and by Tuesday I was a hoarse, slightly unhinged version of myself who had consumed more ideas than meals. It was glorious.
That version of me does not exist anymore. These days I have stepkids, spring break adventures, and a standing appointment at an Arkansas diamond mine that is not going to reschedule itself. My nervous system has opinions now, too. So when Ilmo Lounasmaa, Co-founder and CEO of Softlandia, personally invited me to attend a panel he was sponsoring about AI and the future of SaaS, I said yes to exactly one event. One session. A seat in the room. No voice loss required.
Boy, was it worth the trade.
The panel brought together Hugh Forrest (former President of SXSW, 35 years in that building), Anand Arivukkarasu (former Head of Product Growth at Meta/Instagram, now investor and advisor freshly landed in Austin), Joshua Liberson (founder of Dobbin AI, with a background in editorial design and brand strategy), Markus Hoefinger (PE investor from Vienna who has built and sold multiple digital agencies), and Liz Bacelar (Head of AI and Advanced Analytics at Under Armour, former Estee Lauder innovation exec, founder twice over). Hugh moderated with the kind of easy authority that comes from having run thousands of these conversations.
I came in as a Product Manager with deep roots in software built for highly regulated industries. I came out with a notebook full of things I wanted to think harder about. Here is what actually landed.
The opening question was designed to surface real disagreement, and it delivered.
Josh, whose company Dobbin AI is only eight months old and already producing more code than a team of 100 engineers could have managed a decade ago, was the measured voice. SaaS is not dead, he said. It is at a violent inflection point. The delivery mechanisms are changing, the team sizes are changing, the UX is changing. But software as a service? That idea is not going anywhere. Turn the page, new chapter.
Anand had the most useful mental model for thinking through the evolution. SaaS has moved through distinct eras: first dashboards (here is your data), then workflows (Salesforce, HubSpot, the tools that embedded themselves into how work gets done), then dialogue (conversational interfaces), and now workspaces. His argument is that AI command centers are becoming the new operating system, the place where work actually happens, and everything else becomes a plugin or connector that slots in underneath. Think of it less like individual software products and more like the new App Store, where the AI layer is the platform and everything else is a skill.
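To make that concrete: most chat APIs already accept tools in a function-calling schema, which is roughly what "SaaS product as a skill" looks like in practice today. The CRM lookup below is entirely hypothetical, just a sketch of the shape.

```python
# Hypothetical "skill": a SaaS product exposed to an AI workspace as a tool.
# This follows the function-calling schema used by OpenAI-style chat APIs;
# the CRM tool itself is invented for illustration.
crm_lookup_tool = {
    "type": "function",
    "function": {
        "name": "crm_lookup",
        "description": "Fetch a customer record from the CRM backend.",
        "parameters": {
            "type": "object",
            "properties": {
                "customer_id": {
                    "type": "string",
                    "description": "Unique customer identifier",
                },
            },
            "required": ["customer_id"],
        },
    },
}
# An AI workspace would receive this as tools=[crm_lookup_tool] and invoke it
# on the user's behalf. The SaaS product becomes the backend, not the UI.
```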
Markus, approaching this from the investor chair, named three disruption patterns worth understanding. The first he called "selfware": technically capable people building their own lightweight tools rather than paying for subscriptions. Why pay monthly for something you can spin up overnight? The second is what happens when a frontier model provider drops a new feature and instantly renders an entire product category questionable (he cited IBM losing significant market cap in a single day when a COBOL migration plugin was announced). The third and most quietly unsettling: SaaS companies becoming pure data backends that agents interact with instead of humans. No one logs in. The agent does it all. That is an okay business to be in, but it is probably not the business most of those companies think they are building.
Liz, speaking from the seat of someone who actually buys enterprise software, landed the most direct take of the afternoon. She has largely stopped buying software. The calculus has shifted from "what do I need to buy?" to "what should I build?" And she is not looking for vendors anymore. She is looking for thinking partners: teams she can co-create with, influence the roadmap with, figure things out alongside. Her line from the Q&A was the one people were still talking about afterward. "I don't want your software. I want to know, can you be part of my team?" She meant it as a compliment to the right kind of company. It landed as a challenge to everyone else.
As a PM, there was a thread in this conversation that I have not been able to put down.
Anand made a distinction that I think is genuinely important for anyone in product, design, strategy, or leadership to understand right now. He separated what he called the "execution layer" from the "navigation layer." In plain terms: most of the AI investment in the world right now is going into getting things done faster. Agents that write code, generate content, handle tasks, automate workflows. The execution layer is getting very, very good, very, very fast.
What is not being built at nearly the same pace is the navigation layer: the human function responsible for deciding what to build, why it matters, and where the whole thing is pointing. That work is not automated. It is barely even acknowledged in most of the AI discourse.
His prediction was that the product leader of the future does not manage a team of five PMs. They manage five AI agent stacks, each doing the work that a human PM would have done. Their job becomes checking the outputs, maintaining direction, writing the tests that validate whether the agents are actually heading somewhere useful, and making the calls that no agent is equipped to make. The job title does not exist yet. The need is already here.
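What might those tests look like? Nothing Anand showed on stage, but a plausible minimal sketch: plain assertions a product lead writes against an agent's draft before it ships. The checks and thresholds below are invented for illustration.

```python
# Hypothetical guardrail tests a product lead might run over agent output.
# Every check and threshold here is illustrative, not prescriptive.
def review_agent_draft(draft: str) -> list[str]:
    """Return the reasons this draft should not ship as-is."""
    failures = []
    if len(draft.split()) > 400:
        failures.append("too long: customers will not read past 400 words")
    if "guarantee" in draft.lower():
        failures.append("overclaims: we do not promise guarantees")
    if "http://" in draft:
        failures.append("insecure link: use https")
    return failures

issues = review_agent_draft("We guarantee 100% uptime, see http://example.com")
for issue in issues:
    print("BLOCKED:", issue)
```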
And here is the thing Anand said that made the whole room laugh but really should not be laughed off. He called AI models "drunken geniuses." Pretty drunk, but pretty genius if you slow them down, give them a specific role, and make them think carefully. He made the point that if you assign an LLM the persona of a 50-year veteran accountant in retail and ask it to think slowly and go deep, it gets surprisingly close to what a real domain expert would say. Not all the way there, but closer than most people expect. The flip side is that same model will also tell you that every single idea you have is a billion-dollar company waiting to happen. He joked about buying a hundred domains on GoDaddy because of this. The laugh was real. The warning was too.
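The sobering-up tactic is easy to try yourself. Here is a minimal sketch assuming the OpenAI Python SDK; the model name, persona wording, and temperature are my illustrative choices, not a recipe from the panel.

```python
# Sketch of the "drunken genius" tactic: pin the model to a narrow expert
# persona and ask for slow, stepwise reasoning.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are an accountant with 50 years of experience in retail. "
    "Think slowly and step by step. Flag anything you are unsure about, "
    "and say 'I don't know' rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",       # any capable chat model
    temperature=0.2,      # lower temperature = a little less "drunk"
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": "Walk me through the inventory accounting risks "
                       "of opening a second warehouse.",
        },
    ],
)
print(response.choices[0].message.content)
```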
When an AI model will enthusiastically validate almost anything you put in front of it, the human skill that becomes most valuable is the willingness to say no. Not to the AI, to yourself. The ability to look at what the tools have built and ask, "Is this actually what we should be building?" That judgment does not come from a prompt. It comes from experience, from knowing the customer, from understanding what the market actually needs versus what it thinks it wants. Product sense, the thing that cannot be automated, just became the most important skill in the room.
Liz framed the same idea from the buyer's chair. A year ago the answer to "can we build this?" was mostly no. Now it is almost always yes. Which means the entire pressure has shifted to a harder question: should we build this? That question requires strategy, user empathy, and the discipline to walk away from things that are technically possible but directionally wrong.
The panel touched on future teams, but I kept listening for something underneath the org chart conversation: what human capabilities become more valuable as the tools get more capable?
Nobody asked this directly, but the answers were scattered through the whole afternoon.
Taste. Liz made an observation that stuck with me. AI models do not really understand aesthetic or quality judgment. If you tell them to make something "better," they do not know what "better" means. They will make it longer, or more formal, or more conventional, because those are patterns in the training data. But they do not have taste. That gap is where human judgment lives. It is also, not coincidentally, where differentiation lives.
Conviction. The ability to hold a hard no under pressure. Against the sycophantic model that agrees with everything. Against the stakeholder who wants the feature. Against the roadmap request that sounds right but pulls you sideways. That has always been hard. In a world where building is cheap and fast, it is going to be both harder and more consequential.
Systems thinking. Josh's framing of product as system rather than tool is the right mental model for this moment. The question is not just what you built. It is how it lands inside an organization: what people need to learn, how adoption actually happens, what gets broken in the process, what it costs to change direction later. That kind of thinking is not a feature you can ship. It is a way of seeing.
Curiosity. This one came from Liz in a different part of the conversation, but it was the observation I keep returning to. She has run AI upskilling programs inside multiple large organizations. She said the adoption divide she has seen is not age. Not seniority. Not technical background. It is fixed versus growth mindset. She has seen people in their 20s dig in their heels and people in their 50s absolutely ace the courses. That matters for how you build teams, how you think about your own development, and honestly, how you think about where you stand.
One of the quieter but more unsettling themes running through the afternoon was the noise problem. Not noise in the technical sense, but noise in the market sense. When the barrier to building software drops to nearly zero, the number of things being built explodes. That sounds like good news, and in some ways it is. But it also means the signal-to-noise ratio in the market is about to get very, very bad.
Anand put his finger on this directly. When creation is essentially free and unlimited, the scarce and valuable thing becomes verification. Is this real? Is it accurate? Does it actually hold up under pressure? He called the people who can evaluate, pressure-test, and stand behind AI-generated work the "gatekeepers," and argued that gatekeeping is about to become a profession in its own right, not just a safeguard someone applies at the end of the process.
Liz added a layer to this from the buyer's chair. She was blunt about the honesty problem she sees in AI marketing right now. Generative AI is, in her words, an unreliable slot machine of content. Every polished production use of it has human oversight and a lot of behind-the-scenes scaffolding making it feel stable. She wants AI companies to lead with that reality rather than sell the fantasy of 100% reliability. The vendors she wants to work with are the ones who come in as honest partners, not hype merchants. In a noisy market, that honesty is itself a differentiator.
The panel was refreshingly consistent about one thing: nobody knows where we will be in two to three years. That humility ran through almost every answer. And yet the conversation kept circling back to patterns and signals that felt worth naming, even without certainty attached to them.
Anand described what he called eight to ten "tsunamis" he is tracking as AI matures. He did not name all of them, but three came through clearly.
The first is the founder tsunami. Building software is becoming accessible enough that we are about to see a wave of new founders who could never have entered the market before. That democratization is genuinely exciting. It is also going to flood the market with products of wildly varying quality, which loops right back to the noise problem above.
The second is the gatekeeper tsunami. As a direct consequence of the first, the people and companies who can verify, certify, and stand behind AI-generated work are going to be in enormous demand. When anyone can create, the person who can tell you whether what was created is actually good becomes very valuable.
The third is regulated industries. This one did not get framed as a tsunami exactly, but Anand flagged it as a significant and largely untapped opportunity. There are entire industries where data simply cannot go into public AI models, where compliance is non-negotiable, and where the slow careful work of earning trust has barely begun. Most AI-native companies are not willing to do that work. The ones that are will find a lot of open space.
Beyond the near term, the panel's five-year bets diverged in interesting ways. Anand's is hardware. He is watching robotics and physical AI, the transition from software agents to agents that operate in the physical world. He noted that China is already moving fast in that direction and he is actively learning about it now.
Liz kept returning to deep learning as the next major wave, and she said it with the tone of someone who has seen this movie before. Her prediction was that around 2029, the industry will have a second major reckoning when deep learning gets close enough to feel real. Her hope is that we use the current moment to actually get our hands around generative AI responsibly, so we are better positioned when that next wave hits. She framed it not as doom but as a sequencing problem: get this right first, then face what's coming.
Markus's view was the most direct: this is not a transformation, it is a revolution. Technology normally takes ten to twenty years to reach every household. Mobile, e-commerce, even social media moved on those timescales. This one, he believes, will be faster.
And Josh closed with the prediction I keep turning over. Companies will eventually conclude it is genuinely irresponsible to run their entire operation on a single AI model. A multi-model world is coming, with thoughtful engineering sitting on top to route the right work to the right model at the right moment. His analogy: no media company today looks back on handing all their content to Facebook in 2010 as a wise business decision. That same clear-eyed skepticism, he argued, belongs in every conversation about AI dependency right now.
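To picture what that "thoughtful engineering sitting on top" might look like, here is a deliberately naive routing layer. The model names and keyword rules are invented; a production router would weigh cost, latency, and eval results rather than substrings. But the shape of the idea holds: no single vendor carries the whole operation.

```python
# A minimal, hypothetical multi-model router. Model identifiers and
# matching rules are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    matches: Callable[[str], bool]   # does this rule apply to the task?
    model: str                       # which model handles it

ROUTES = [
    Route(lambda t: "contract" in t or "legal" in t, "vendor-a/long-context-model"),
    Route(lambda t: "code" in t, "vendor-b/code-model"),
    Route(lambda t: True, "vendor-c/general-model"),  # fallback
]

def pick_model(task: str) -> str:
    """Return the first matching model for a task description."""
    task = task.lower()
    return next(r.model for r in ROUTES if r.matches(task))

print(pick_model("Review this contract clause"))  # vendor-a/long-context-model
print(pick_model("Refactor this code"))           # vendor-b/code-model
```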
Someone in the Q&A asked about AI adoption in highly regulated industries, how to build confidence and demonstrate real value when the organizational culture is slow and risk-averse by design.
I have lived in this world. SQL Compliance Manager. ER/Studio. Kiuwan. Years of building software for the kinds of companies that have entire departments whose job is to say no. The question resonated.
Liz's answer was characteristically direct: if you want to sell into a regulated space, know the regulation before you walk in the door. Not just the general shape of it. The specifics. She used the example of AI in hiring: if you are building AI tools for HR and you do not know that using AI to make hiring decisions is illegal in New York City, you are not ready to be in that market. Full stop. Compliance is not a feature you add later. It is the foundation.
Markus reframed this in a way I genuinely found useful. Regulatory complexity is actually a moat. If your product has the certification, the audit trails, the documented compliance built in, a competitor cannot replicate that in a week even if they can clone your interface overnight. The companies willing to do the slow, unglamorous work of earning that trust are sitting on one of the strongest competitive positions in software right now.
Josh added the organizational layer: legal and procurement are the two biggest internal blockers to AI adoption in large companies right now, and neither one has clear frameworks to work from yet. There is no established case law. There is no standard procurement template. The companies that figure out how to bring AI in through those doors, rather than around them, are going to move significantly faster than everyone else.
For product people, there is something clarifying in that. The architecture of how AI actually gets embedded into organizations is still being written. The people best positioned to write it are the ones who think in systems, hold direction under pressure, ask hard questions about what should actually be built, and understand the humans who are going to live inside whatever gets shipped.
Speed of development is not the constraint anymore. Clarity of direction is.
I traded my color-coded SXSW schedule for one good panel and a clear head. It was the right call.
Thank you to Ilmo and the team at Softlandia for the invitation, and to Hugh, Anand, Josh, Markus, and Liz for making it a conversation worth having.
---
And we thank Kim for attending and sharing this in-depth article with us.