It's been a long time since I wrote anything here. Not because I had nothing to say — honestly, the opposite. The more experience you accumulate, the harder it becomes to package it neatly into something worth publishing. At some point you just get on with the work.
But recently, on a project that pushed me to rethink some long-held assumptions about how we build and maintain test automation, I found myself thinking: this is worth sharing.
So here we are.
A bit of context
I've been in QA and test automation for over 20 years. I've built frameworks from scratch, inherited ones that were held together with hope and custom utilities, and led teams through the whole lifecycle — greenfield builds, scale-up, and the messy "we need to refactor everything" conversations.
I've lived through the Selenium era. Properly lived through it. And I still believe it was the right tool at the right time.
But something has shifted, not just in tooling but in how modern teams think about automation. That shift became clearly visible to me a couple of years ago, when I began working more seriously with Playwright and, more recently, started bringing AI into parts of the workflow.
This isn't a trend piece. It's just what I've seen.
What I kept noticing in Selenium-based frameworks
None of this is a criticism of Selenium itself. The problem was rarely the tool — it was the weight of everything built around it.
Most mature Selenium frameworks I've worked with share the same story: they were built in phases, by different engineers, each solving a real problem at the time. Page Object Models. Custom wait utilities. Layered abstractions. Test data managers. All sensible decisions when they were made.
But over time, that framework becomes its own product — and maintaining it starts competing with actually testing.
A UI change breaks a dozen tests. A timing issue that worked last sprint now doesn't. Someone new joins the team and takes weeks to understand how the layers connect.
Sound familiar?
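If it does, here's a concrete illustration: the kind of homegrown "safe click" helper these frameworks accumulate. This is a reconstruction written against Node's selenium-webdriver bindings, not code from any one project, and the names and timeouts are illustrative.

```typescript
import { By, until, WebDriver, WebElement } from 'selenium-webdriver';

// A typical homegrown "safe click": retry loops and hand-tuned timeouts
// layered on top of Selenium to paper over timing issues.
export async function safeClick(
  driver: WebDriver,
  selector: string,
  timeoutMs = 10_000,
  retries = 3,
): Promise<void> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const element: WebElement = await driver.wait(
        until.elementLocated(By.css(selector)),
        timeoutMs,
      );
      await driver.wait(until.elementIsVisible(element), timeoutMs);
      await element.click();
      return;
    } catch (err) {
      // Swallow and retry: presumably a stale element or an animation got in the way.
      if (attempt === retries) throw err;
    }
  }
}
```

One helper like this is harmless. Multiply it by dozens of utilities across a few abstraction layers, and you're maintaining exactly the kind of product I described above.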
I started asking myself: are we building automation, or are we building framework infrastructure that happens to run some tests?
Where Playwright genuinely changed the day-to-day
When I moved to Playwright on a real project — not a proof of concept, an actual delivery environment — a few things stood out quickly.
The auto-waiting was immediately noticeable. We had accepted flakiness as normal for so long that when it largely disappeared, the team almost didn't know what to do with the extra time.
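For contrast, here's roughly what the same interaction looks like in Playwright. No helper needed, because actionability checks are built into every action; the route and button name below are illustrative.

```typescript
import { test } from '@playwright/test';

test('save the form', async ({ page }) => {
  await page.goto('/settings');
  // No explicit wait: click() automatically waits until the button
  // is attached, visible, stable, and enabled before acting.
  await page.getByRole('button', { name: 'Save' }).click();
});
```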
The ability to combine API and UI flows in the same test, cleanly, without stitching together separate tools, opened up genuinely useful patterns that simply weren't practical for us before.
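A minimal sketch of the pattern, with hypothetical endpoints and routes: seed state through the API, then assert through the UI, in one test and one tool.

```typescript
import { test, expect } from '@playwright/test';

test('a created order shows up in the UI', async ({ page, request }) => {
  // Arrange via the API instead of clicking through setup screens.
  const res = await request.post('/api/orders', {
    data: { sku: 'ABC-123', qty: 1 },
  });
  expect(res.ok()).toBeTruthy();
  const { id } = await res.json();

  // Assert via the UI, which is what users actually see.
  await page.goto(`/orders/${id}`);
  await expect(page.getByRole('heading', { name: `Order ${id}` })).toBeVisible();
});
```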
And parallel execution, something we'd always wanted but had to manage carefully in Selenium, became something we just... enabled.
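Enabling it really is mostly configuration, assuming your tests are isolated enough to run concurrently. A minimal playwright.config.ts:

```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Run tests within each file in parallel, not just across files.
  fullyParallel: true,
  // Cap workers on CI; locally, leaving this undefined lets Playwright
  // choose a default based on available CPU cores.
  workers: process.env.CI ? 4 : undefined,
});
```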
I'm not saying Playwright is perfect or that migration is trivial. But when I think about the friction we'd normalised with older setups, the contrast was hard to ignore.
Where AI has actually made a difference — practically, not theoretically
This is the part I was most cautious about. There's a lot of noise around AI in testing right now, and I've been deliberately sceptical.
But here's what I've found actually works:
Drafting tests from requirements or user stories. Not production-ready tests, just starting points. Removing the blank-page problem for repetitive test structures saves more time than you'd expect.
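The useful output looks something like this: a skeleton drafted from a user story that a human then tightens up. Everything here, from the story to the locators, is illustrative.

```typescript
import { test, expect } from '@playwright/test';

// Story: "As a registered user, I can reset my password from the login page."
// AI-drafted starting point; assertions and edge cases still need human review.
test('registered user can request a password reset', async ({ page }) => {
  await page.goto('/login');
  await page.getByRole('link', { name: 'Forgot password?' }).click();
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByRole('button', { name: 'Send reset link' }).click();
  await expect(page.getByText('Check your inbox')).toBeVisible();
  // TODO (human): unknown email, rate limiting, expired-token flows.
});
```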
Recovering broken tests faster. This was probably the biggest practical gain. AI-assisted analysis of failures — especially when the root cause is a locator shift or a structural change — significantly reduced the time from "pipeline is red" to "pipeline is green."
Thinking in terms of intent rather than implementation. This is harder to quantify but I think it's the most important shift. When you frame a test around what a user is trying to accomplish rather than which element to click, AI tools become genuinely useful collaborators. When you're still thinking in low-level selectors, they add less.
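The difference shows up directly in the code. Both sketches below drive the same flow (selectors are illustrative); the second survives markup changes and gives an AI assistant something meaningful to reason about.

```typescript
import { Page } from '@playwright/test';

// Implementation-first: coupled to markup that will change.
async function submitOrderBrittle(page: Page) {
  await page.locator('#app > div.main > form #btn-submit-2').click();
}

// Intent-first: expressed in terms of what the user is trying to do.
async function submitOrder(page: Page) {
  await page.getByRole('button', { name: 'Submit order' }).click();
}
```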
I'm still cautious. AI doesn't eliminate the need for engineering judgement — it amplifies it. In the wrong hands, it can produce fluent-looking automation that doesn't actually test anything meaningful.
What I think this means for the industry
I don't think Selenium disappears. There are real cases where it's the right answer — legacy environments, stable applications, teams with significant existing investment.
But I do think we're heading towards a world where:
- Most new automation work defaults to Playwright — not because of hype, but because it reduces the friction that used to be considered unavoidable.
- AI tools take over the repetitive, mechanical parts of test creation and maintenance — which is probably 40–50% of what many automation engineers currently spend their time on.
- The role of the test automation engineer shifts towards something more strategic: deciding what to test, designing how to test it efficiently, and making sure the automation is actually connected to risk and business outcomes — not just coverage numbers.
That last part, honestly, is where the seniority shows. Tools can write tests. They can't yet decide what's worth testing.
A practical thought if you're evaluating where to focus
If your team is still asking "how do we improve our Selenium framework?" — that's not always the wrong question. But it might be worth stepping back to ask a bigger one first: is the framework the constraint, or is it the way we're thinking about automation?
The teams I've seen move fastest right now are the ones that reduced framework complexity, adopted Playwright for new work, and started experimenting with AI assistance in low-risk areas — without trying to transform everything at once.
What's coming next on this blog
Now that I've broken a long silence, I'm planning to share more regularly — practical observations from real work, not generic hot takes.
Topics I'm thinking about: AI-assisted testing in practice (what works, what doesn't), API testing architecture, shifting left on performance testing, and where security testing fits into a modern automation strategy.
If any of those are relevant to where your team is right now, I'd be glad to hear what you're working through.
Thinking about your automation strategy, or wondering whether it's time to move on from your current setup? Get in touch — practical conversations about real problems are what we're here for.