
Test Automation Strategy: Where to Start and What to Automate

A practical guide to building a test automation strategy that delivers value. Learn what to automate, what to leave manual, and how to prioritise your automation efforts.


Every software team wants faster releases with fewer bugs. Test automation promises both. But after 25 years in the industry, I've seen more automation projects fail than succeed—not because automation doesn't work, but because teams automate the wrong things in the wrong order.

This guide will help you build a test automation strategy that delivers real value, starting from wherever you are now.


The Automation Pyramid: Still Relevant

You've probably seen the test automation pyramid: lots of unit tests at the bottom, fewer integration tests in the middle, and a small number of UI tests at the top. It's old advice, but it's old because it works.

        /\
       /  \     UI Tests (few)
      /----\
     /      \   Integration Tests (some)
    /--------\
   /          \  Unit Tests (many)
  /____________\

Why this shape matters:

  • Unit tests are fast, reliable, and cheap to maintain. They catch bugs early.
  • Integration tests verify that components work together. They're slower but catch different bugs.
  • UI tests simulate real user behaviour. They're slowest and most fragile, but they test what users actually experience.

Most struggling automation suites are inverted pyramids—heavy on UI tests, light on unit tests. That's expensive and fragile.


What to Automate (And What Not To)

Not every test should be automated. Here's how to decide:

Automate These

Repetitive regression tests. Tests you run every release that check existing functionality still works. These pay back quickly.

Smoke tests. Quick checks that critical paths work after deployment. "Can users log in? Can they complete a purchase?"

Data-driven tests. Same logic, many inputs. Testing a form with 50 different input combinations is tedious manually, trivial when automated.
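A sketch of what that looks like in practice, in plain JavaScript. The `isValidQuantity` validator and its rules (integer from 1 to 99) are illustrative, not from a real system; the point is that one table of cases drives all the assertions.

```javascript
// Hypothetical form validator: quantity must be an integer from 1 to 99.
function isValidQuantity(input) {
  const n = Number(input);
  return Number.isInteger(n) && n >= 1 && n <= 99;
}

// Data-driven test: one table of cases, one loop, many checks.
const cases = [
  { input: '1',   valid: true  },
  { input: '99',  valid: true  },
  { input: '0',   valid: false },
  { input: '100', valid: false },
  { input: '2.5', valid: false },
  { input: 'abc', valid: false },
  { input: '',    valid: false },
];

for (const { input, valid } of cases) {
  const actual = isValidQuantity(input);
  if (actual !== valid) {
    throw new Error(`isValidQuantity(${JSON.stringify(input)}) returned ${actual}, expected ${valid}`);
  }
}
```

Adding a 51st combination is one more line in the table, not a new test.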

Tests requiring precision. Calculations, data transformations, anything where humans might make mistakes checking results.

Tests crossing system boundaries. API contracts, database integrity, integration points—these are hard to test thoroughly by hand.

Keep These Manual

Exploratory testing. Creative testing where you're discovering how the system behaves. Automation can't be curious.

Usability testing. Does this feel right? Is it intuitive? These require human judgment.

New features under active development. Automating tests for code that changes daily wastes effort. Wait until it stabilises.

One-time tests. If you'll only run it once, manual is faster.

Tests requiring complex setup. If test data setup takes longer to automate than the test itself, question the value.


Prioritising Your Automation Effort

You can't automate everything at once. Prioritise ruthlessly.

The Priority Matrix

Score each potential test automation on two factors:

Business impact: What's the cost if this fails in production?

  • High: Core revenue paths, security, data integrity
  • Medium: Important features with workarounds available
  • Low: Nice-to-have features, cosmetic issues

Stability: How often does this part of the application change?

  • High stability: Mature features, core logic
  • Medium stability: Features that get occasional updates
  • Low stability: Actively developing features

                  High Stability   Medium Stability     Low Stability
  High Impact     Automate first   Automate second      Wait
  Medium Impact   Automate third   Consider carefully   Skip
  Low Impact      Consider         Probably skip        Skip
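The matrix is simple enough to encode directly, which is handy if you want to score a backlog of automation candidates in a spreadsheet export or script. This sketch uses illustrative names (`automationPriority`, lowercase `high`/`medium`/`low` keys):

```javascript
// The priority matrix as a lookup table: impact row, stability column.
const PRIORITY_MATRIX = {
  high:   { high: 'Automate first', medium: 'Automate second',    low: 'Wait' },
  medium: { high: 'Automate third', medium: 'Consider carefully', low: 'Skip' },
  low:    { high: 'Consider',       medium: 'Probably skip',      low: 'Skip' },
};

// impact and stability are each 'high', 'medium', or 'low'.
function automationPriority(impact, stability) {
  const row = PRIORITY_MATRIX[impact];
  if (!row || !row[stability]) {
    throw new Error(`Unknown impact/stability: ${impact}/${stability}`);
  }
  return row[stability];
}
```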

Where to Start: A Practical Order

  1. API/Integration tests for core business logic. Payment processing, order management, user authentication. High impact, typically stable, not affected by UI changes.

  2. Smoke tests for critical user paths. Login, primary feature access, key transactions. Run these after every deployment.

  3. Data validation tests. Reports show correct numbers? Calculations accurate? These catch costly errors.

  4. UI tests for stable, high-value features. Once you have API coverage, add selective UI tests for the most important user journeys.


Choosing Your Tools

The "best" tool depends on your tech stack and team skills. Here's a practical guide:

For Web UI Testing

Playwright. My current recommendation for most teams. Works across browsers, excellent developer experience, stable selectors, built-in features reduce boilerplate.

Cypress. Excellent for JavaScript-heavy applications. Slightly easier learning curve than Playwright, but browser support is more limited.

Selenium. The veteran. Supports everything, but more setup and maintenance. Consider if you need very specific browser/driver combinations.

For API Testing

Postman/Newman. Great for getting started. Collections are easy to share and maintain. Newman runs them in CI/CD.

REST Assured (Java) / Requests (Python). When you need tests as code with full programming language power.

Playwright or Cypress. Both can make API calls. If you're already using them for UI tests, keep things simple.

For Mobile Testing

Appium. Cross-platform, works with native, hybrid, and mobile web. Complex setup but widely supported.

Detox (React Native). If you're in the React Native world, Detox offers better performance and reliability.

XCUITest / Espresso. Native solutions for iOS and Android respectively. Best performance, but platform-specific.

For Unit Testing

Use whatever's standard for your language: Jest/Vitest (JavaScript), pytest (Python), JUnit (Java), etc. Don't overthink this—pick one and start writing tests.
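For illustration, here's the shape of a first unit test, shown framework-free so it runs in plain Node; with Jest or Vitest you'd wrap the same checks in `test()` and `expect()`. The `applyDiscount` function is a made-up example, not from any real codebase:

```javascript
// Hypothetical business logic: apply a percentage discount to a total in pence.
function applyDiscount(totalPence, discountPercent) {
  if (discountPercent < 0 || discountPercent > 100) {
    throw new Error('discountPercent must be between 0 and 100');
  }
  return Math.round(totalPence * (1 - discountPercent / 100));
}

// Unit tests: fast, deterministic, no I/O.
if (applyDiscount(1000, 10) !== 900) throw new Error('10% off 1000p should be 900p');
if (applyDiscount(999, 0) !== 999) throw new Error('0% discount should change nothing');
```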


Building the Automation Framework

A sustainable automation framework needs structure. Here are the essential components:

Page Objects or Component Models

Abstract UI interactions away from test logic. When the UI changes, you update one place, not 50 tests.

```javascript
// Instead of this in every test:
await page.click('#submit-button');

// Use a page object:
await checkoutPage.submitOrder();
```
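A fuller sketch of the same idea, assuming a Playwright-style page with `click`/`fill` methods; the `CheckoutPage` class name and selectors are illustrative. The constructor takes the page as a dependency, so the same class works against a real browser page or a stub in framework tests:

```javascript
// Minimal page object: one class per screen, one method per user action.
class CheckoutPage {
  constructor(page) {
    this.page = page; // any object exposing click() and fill()
  }

  async enterShippingAddress(address) {
    await this.page.fill('#shipping-address', address);
  }

  async submitOrder() {
    await this.page.click('#submit-button');
  }
}
```

If `#submit-button` is renamed, `submitOrder()` is the only place that changes.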

Test Data Management

Tests need data. Options:

  • Factory functions: Generate test data programmatically
  • Fixtures: Predefined datasets loaded before tests
  • API setup: Create data via API before UI tests run

The key: tests should create what they need and clean up after themselves. Don't rely on shared test data that other tests might modify.
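A factory function sketch, with illustrative field names rather than a real schema. Each call produces a valid, unique user, and each test overrides only the fields it actually cares about:

```javascript
// Factory: sensible defaults, unique identifiers, overrides per test.
let nextId = 1;
function buildUser(overrides = {}) {
  const id = nextId++;
  return {
    id,
    email: `test-user-${id}@example.com`, // unique per call, no collisions between tests
    name: 'Test User',
    role: 'customer',
    ...overrides,
  };
}

// A test that only cares about the role states exactly that:
const admin = buildUser({ role: 'admin' });
```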

Configuration Management

Environments, credentials, feature flags—keep these in configuration, not hardcoded. Tests should run in dev, staging, and production (read-only) without code changes.
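One common pattern is resolving settings from environment variables with safe defaults for local runs. This is a sketch; the variable names (`TEST_BASE_URL`, `TEST_API_KEY`, `TEST_ENV`) are illustrative:

```javascript
// Resolve environment-specific settings from environment variables,
// falling back to local-development defaults.
function loadConfig(env = process.env) {
  const config = {
    baseUrl: env.TEST_BASE_URL || 'http://localhost:3000',
    apiKey: env.TEST_API_KEY || '',
    environment: env.TEST_ENV || 'dev',
  };
  // Fail fast on typos rather than running tests against the wrong target.
  if (!['dev', 'staging', 'production'].includes(config.environment)) {
    throw new Error(`Unknown TEST_ENV: ${config.environment}`);
  }
  return config;
}
```

Switching the suite from dev to staging is then a CI variable change, not a code change.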

Reporting and Logging

When tests fail, you need to know why quickly:

  • Clear failure messages
  • Screenshots on failure (UI tests)
  • API request/response logs
  • Video recordings for complex failures

Integrating with CI/CD

Automated tests that don't run automatically aren't delivering value.

Where to Run What

On every commit: Unit tests, fast API tests. Must complete in minutes.

On pull request: Full API suite, smoke UI tests. Aim for under 15 minutes.

Nightly: Full test suite including slow UI tests, performance tests, security scans.

On deployment: Smoke tests against the live environment.

Handling Flaky Tests

Flaky tests—tests that pass sometimes and fail sometimes—destroy trust in automation. When tests are flaky:

  1. Quarantine immediately. Move flaky tests out of the main suite.
  2. Fix or delete. A flaky test is worse than no test—it wastes time investigating false failures.
  3. Find the root cause. Usually: timing issues, shared state, external dependencies.
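The most common timing fix is replacing fixed sleeps with polling for the condition you actually care about. Modern UI frameworks (Playwright, Cypress) build this auto-waiting in; this `pollUntil` helper is a sketch of the same idea for places that don't:

```javascript
// Poll a condition until it's true or a timeout expires.
// Replaces "sleep 3 seconds and hope" with "wait for exactly what you need".
async function pollUntil(condition, { timeoutMs = 5000, intervalMs = 100 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}
```

Usage: `await pollUntil(() => orderStatus() === 'confirmed')` waits as long as needed and no longer, instead of failing on a slow day or wasting time on a fast one.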

Measuring Success

Track these metrics to understand if your automation investment is paying off:

Test coverage increase. Are you covering more critical paths?

Test execution time. How long does the full suite take? Is it getting slower?

Defect escape rate. Are bugs caught in testing or reaching production?

Time to confident deployment. How quickly can you verify a release is good?

Maintenance effort. How much time does the team spend fixing broken tests vs. writing new ones?

A healthy ratio: maintenance should be less than 20% of your automation effort. If you're spending more time fixing tests than adding value, reassess your approach.


The Bottom Line

Test automation is an investment. Like any investment, the returns depend on putting your resources in the right places.

Start with high-impact, stable areas. Build a solid foundation of unit and API tests. Add UI tests selectively for critical user paths. Integrate with CI/CD so tests run automatically. And measure your results to prove the value.

The goal isn't 100% automation—it's the right automation for your context.


Need help building or improving your test automation strategy? Contact us for a practical assessment of your current state and a roadmap forward.

#testing #automation #qa #software quality #test strategy