
Apoca-optimism: Notes from SXSW

Caroline Casals | Software Architect

April 1, 2026


South by Southwest, SXSW, or simply "south by": no matter how you say it, Austin hosts a one-of-a-kind festival, boasting celebrities, music, art, and cutting-edge innovation splashed across downtown.

On the other side of it, I find my brain stuffed with that new-things goodness that only brilliant people having inspiring conversations can bring. And tacos. Really great tacos.

SXSW sign

I can't share the tacos with you all, but I can pull on a few mental threads. Because across very different sessions, from biotech to product strategy to design measurement, I kept hearing the same thing underneath it all: the old ways of knowing what's real, what's valuable, and what's possible are breaking down. And the people who will thrive are the ones learning to navigate by conviction rather than certainty.

There's a word for that feeling. I heard it somewhere in the blur of south-by, and it stuck: apoca-optimism. One of those phrases that makes you go "Yeah. YEAH. That's it exactly." In a world spinning on a tilt-a-whirl of changes and AI upheaval, it's hard to look at what's coming without some sense of dread. Of a massive and imminent ending. But also... maybe something beautiful too? The weird and wild and wondrous things at our feet right now. A raw abundance of possibility.

That tension, between ending and beginning, and the overwhelm of navigating it, ran through everything I heard and saw.

The impossible, now merely difficult

Decoding Nature: How AI is Learning to Program Biology

Take the collaboration between Basecamp Research, Microsoft, and UPenn. Together, they've built an LLM that doesn't speak in human language. It speaks in the language of life itself: DNA. The questions being asked of their model, EDEN, are uncovering new antibiotic targets for a growing host of drug-resistant diseases. And the accuracy is staggering: a 95% hit rate in predicting antimicrobial function.

Getting there was no small feat, and absolutely not "vibe code." The raw data for such a project was missing: there simply weren't enough sequences to train on. Scientific publications aren't like the rest of the internet. They contain only the end product of thought: years of work distilled into a single paper. For a model, this is like learning to speak English by only hearing the last word of every conversation. Validation was its own problem: you can spot a mangled sentence in a heartbeat, but can you spot a mangled protein? And DNA itself is not a clean language; it's riddled with inconsistencies and "junk" sequences.

But here's the thing: these problems are now merely difficult.

Much ado is made of AI's leaps towards greater efficiency. In essence, being better at familiar flavors of busy. And those improvements are genuinely revolutionary: changing the equation of effort shatters everything from engineering to law practice. But projects like EDEN aren't just doing difficult things more easily. They are doing what was previously impossible.

Hearing smart people share about the miraculous work they've done, sitting fifty feet away, talking to a room full of people eagerly taking notes... there's something contagious in that.

Prospectors and prospecting

How to Build AI-First Products: Models, Memory, Mastery

Not everyone had stories of miraculous change. There was also a sober sifting of the meaningful from the hype. I particularly appreciated this session, because it asked the multi-million-dollar question: in a gold rush, how many prospectors actually strike gold?

There can be little doubt that hype is in abundance. Much like the early days of the internet or mobile devices, there's a sense of urgency to "just add AI." But in the scramble to not be left behind, some efforts are not just pointless, but quite costly. Remember Jasper AI, the content-writing darling? Mountains of seed money, and then the foundation models simply got better and swallowed the value proposition whole. Or BloombergGPT: millions in investment, rendered obsolete in months when GPT-4 not only matched but outperformed it.

We're far enough into this era that the blunders have had time to ripen and be picked over. So what separates the products that endure from the ones that get swept away?

The ground moves fast when models improve faster than your product roadmap. Durability doesn't come from wrapping AI in a pretty shell, or specialized training. It comes from building something that foundation models can't have and competitors can't easily catch up to. The model is not your moat; the data it's built on is.

Directionally rigorous, not falsely precise

Beyond Beautiful: A Data-Driven Framework for Design ROI

Every day we're asked to make decisions faster, with more data, and with higher stakes. So how do you act with conviction when the ground won't stop moving? I found that satisfyingly missing puzzle piece in a session on measuring the real ROI of design. On its face, a brass-tacks topic: how do you talk the budget people into letting you do beautiful things? But the deeper message was the one that tied everything together for me.

The presenters had built an actual formula for predicting design's fiscal impact, scoring problem severity, design influence, and execution quality to estimate return on investment. What struck me was that the most important thing about it wasn't the math (which was pretty cool). It was the posture. The willingness to say: we can't prove this precisely, but we can prove it directionally, and that's enough to act on.
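To make the posture concrete, here's a hypothetical sketch of what a directional scorecard like that might look like. The three factors (problem severity, design influence, execution quality) come from the session, but the scales, the multiplicative shape, and every number below are my own illustrative assumptions, not the presenters' actual formula.

```python
# Hypothetical directional ROI scorecard. The factor names are from the talk;
# the 0-1 scales and the formula itself are illustrative assumptions only.

def design_roi_estimate(problem_value_usd, severity, influence, execution, cost_usd):
    """Estimate design ROI directionally, not precisely.

    problem_value_usd: order-of-magnitude size of the business problem.
    severity, influence, execution: rough 0.0-1.0 judgment scores.
    """
    # Treat each score as a discount on the addressable value: a severe
    # problem that design strongly influences, executed well, captures
    # a larger slice of the total.
    captured_value = problem_value_usd * severity * influence * execution
    return (captured_value - cost_usd) / cost_usd

# A six-figure design effort aimed at an eight-figure problem:
roi = design_roi_estimate(
    problem_value_usd=20_000_000,  # ~eight-figure problem (assumed)
    severity=0.6, influence=0.4, execution=0.7,  # rough, honest guesses
    cost_usd=300_000,              # ~six-figure design investment (assumed)
)
print(f"roughly {roi:.0f}x return")
```

The point of a sketch like this isn't the output number; it's that even with admittedly fuzzy inputs, the answer lands in a defensible order of magnitude, which is enough to act on.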

Their phrase for it was perfect: directionally rigorous, not falsely precise.

In a world where data has never been cheaper or more abundant, I think this is essential framing. Humans, and our AI agents, make surprisingly poor decisions in information-rich environments. We cherry-pick what already proves what we want. Call it cognitive bias or context poisoning; it's the same root issue. By letting go of the false promise of precision and more-is-more thinking, and focusing on the harder-to-measure shape of truth, we can gain actual insight. We may not have all the data, but we usually have an order-of-magnitude understanding. Our six-figure design updates are solving an eight-figure problem. Let's stop worrying about the precision of our estimates.

This applies far beyond design. It's the same discipline that separates the durable AI product from the flash-in-the-pan one. It's the same instinct that let the EDEN researchers push forward without clean data or easy validation. Knowing you can't be exactly right, and building anyway. With rigor, with humility, with direction.

What I brought home

There are more intertwining threads from SXSW than would fit here. But these were the ones I carried out of the murmur and burble of downtown Austin:

The impossible is now merely difficult, and our old sense of what's "realistic" can no longer be trusted. A gold rush is underway, and many prospectors will fail because they're chasing the first sparkle instead of getting real about where to focus. And in all of it, the skill that matters most is learning to be directionally right rather than precisely comfortable.

The world is terrifying and extraordinary. The people who showed up at SXSW aren't pretending otherwise. They're learning to build in the turbulence.

That, and the tacos. The tacos were really something.

Tacos at SXSW
Proudly written with editorial assistance from my good buddy Claude.
