UI Testing Issues

What makes UI Testing complicated

  • Flaky data
    • Makes debugging incredibly tough when tests pass intermittently
    • Mocking exists, but it's tricky to implement because most data is server-side and you may not have access to it (even if you can change it, it's technically complex); a client-side workaround is sketched after this list
      • Integrating server-side mocking into CI/CD is complex
  • Loose coupling between HTML and selectors (leading to falsely failing tests/maintenance nightmares)
  • You either have tests that don't prove anything, or tests that are always breaking
    • 3rd-party services that you need to mock, but also need to validate work
    • How do you prove that the right value was returned without either mocking it out or tight-coupling with the API/DB?
  • All the environments in the world
    • Server environments and syncing data between them
    • User environments (browsers, devices, etc)
  • Tests that pretty much cover all the levels of the stack
    • UI
    • Backend
    • Database
    • Even a simple text check can be testing all levels of the site
  • When you consider edge cases, there's an unlimited number of test cases to be written
    • Writing tests for edge cases is difficult from a fixture setup perspective (have to get starting data just right to reproduce test case)
  • Guessing game as to what's likely to break, what deserves a test vs doesn't
  • Testing tools that are either too advanced for most to use, or too basic to be scalable/useful
  • Most tools are agnostic of a company's specific tech stack
    • Things that cause variations
      • Front-end language/framework
      • Back-end language/framework
      • Database
      • CI/CD system
      • 3rd Party Testing System (e.g., Sauce Labs)
      • Deployment environment
    • Half the battle is just getting the tool to work within your specific stack
  • Devs are not trained to be QA and QA are not trained to be devs
    • Devs know how to code tests, but don't know what to test
    • QA knows what to test, but not how to code the tests
    • You need both in order to be successful
  • Team dynamics create disconnect between QA and Developers/others
    • Project managers/product owners own the user stories, so devs need to work with them and adapt their own workflow to really integrate testing into the process.
  • Developers are left on their own to solve all this
    • Many don't get to the core issues because that requires knowledge/expertise outside of their skillset and they don't want to cause any trouble for others
  • Tests are treated as lower-class citizens
    • They're "nice to have", not essential to the dev pipeline.
    • If the test suite causes friction, teams will opt to start ignoring the tests versus fixing the friction
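
A minimal sketch of taming the first two pain points above (flaky server data and brittle selectors), assuming Playwright, a hypothetical `/api/orders` endpoint, and a `data-testid` convention; one workaround, not the only one:

```ts
import { test, expect } from '@playwright/test';

test('order list renders mocked data', async ({ page }) => {
  // Pin down flaky server data by fulfilling the request client-side,
  // so the test no longer depends on whatever the QA database holds today.
  // The endpoint and payload here are hypothetical.
  await page.route('**/api/orders', (route) =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify([{ id: 1, total: '$42.00' }]),
    })
  );

  await page.goto('https://example.com/orders');

  // Select through a dedicated test attribute instead of a CSS path like
  // 'div.main > ul li:nth-child(1) span', which breaks on any markup change.
  await expect(page.getByTestId('order-total')).toHaveText('$42.00');
});
```

Client-side mocking like this sidesteps the server access problem entirely, at the cost of no longer proving the real API works — the same trade-off noted in the 3rd-party services bullet above.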

What Companies Need

  • A clear plan focused on the right outcomes
  • Value delivered in loss prevention/productivity improvements
  • A way to evaluate a full technology stack and understand what/why things are breaking, or what's most at risk of breaking
  • To get out of their own way. As a consultant, I've been hired to write tests and "help", but not given the authority to actually fix issues beyond failing tests (Note: I should be more proactive about asking for this)
    • Powers needed
      • Ability to purchase tools
      • Ability to integrate tools into the SDLC
      • Ability to shift culture around testing
        • Prioritize testing as a discipline for developers
      • Ability to hire

How to solve this?

  • Can we draw parallels between other industries and the tech industry?
    • Airline pilots aren't expected to fly a plane on their first day on the job. They get lots of training to understand the specific plane
  • What tech companies are successful at preventing bugs/outages?
  • Focus on the money makers of a company. That's where the value lives
  • Determine what in a web app is most likely to break
    • Design
    • Form functionality
    • 3rd-party integrations
    • Edge cases
      • Functionality that's common for users to do, but not for devs/testers to do
        • Combine feature x + y to do z
        • Devs/qa tests feature x and y separately, and never do z
  • Use analytics/logging to identify issues not covered by tests (a coverage-gap sketch follows this list)
  • Can we get the users to do the testing?
    • Scrape interactions from production, use for testing in QA
      • Gather information about user (browser, device, cookies, etc)
      • Gather all the actions that user took
      • Build a test case from those actions (a replay sketch follows this list)
      • Problems
        • Needs to have the same data to be reproducible
          • Can the system identify components of a website?
  • A repeatable process for accelerating development
    • You can't solve it all at once
    • Identify a risk
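
A hedged sketch of the analytics/logging idea above: diff the routes real users hit (exported from analytics or access logs) against the routes your test suite exercises, and the untested hot paths fall out. Plain TypeScript; both input lists are hypothetical:

```ts
// Routes users actually visit, e.g. exported from analytics or access logs.
const visitedRoutes = new Set(['/checkout', '/orders', '/profile', '/coupons']);

// Routes the UI test suite navigates to, e.g. scraped from the test code.
const testedRoutes = new Set(['/checkout', '/profile']);

// User-visited routes with zero test coverage: candidates for the next test.
const untested = [...visitedRoutes].filter((route) => !testedRoutes.has(route));

console.log('Untested hot paths:', untested); // => ['/orders', '/coupons']
```

In practice you'd weight the gaps by traffic volume so the riskiest ones surface first.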
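
And a sketch of the scrape-interactions idea: record user actions as plain data, then replay them as a generated test. This assumes Playwright and a hypothetical recorded-action format; as noted above, the recording is only reproducible if the matching data state comes with it:

```ts
import { test } from '@playwright/test';

// Hypothetical shape for actions scraped from a production session.
interface RecordedAction {
  type: 'goto' | 'click' | 'fill';
  target?: string; // data-testid of the element acted on
  value?: string;  // URL for goto, input text for fill
}

// A captured "feature x + y" flow that real users hit but QA never scripts.
const session: RecordedAction[] = [
  { type: 'goto', value: 'https://example.com/checkout' },
  { type: 'fill', target: 'coupon-code', value: 'SAVE10' },
  { type: 'click', target: 'apply-coupon' },
];

test('replayed production session', async ({ page }) => {
  for (const action of session) {
    switch (action.type) {
      case 'goto':
        await page.goto(action.value!);
        break;
      case 'click':
        await page.getByTestId(action.target!).click();
        break;
      case 'fill':
        await page.getByTestId(action.target!).fill(action.value!);
        break;
    }
  }
  // Assertions would come from outcomes observed in the recorded session.
});
```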

Presentation

"You can't test on an island/in a silo" "UI Testing is Bad/Broken" "Cautionary Tales in UI Testing" "50 Creative and Fun Ways to Lose Your Mind Testing"

Description: Automated UI Testing is promised as a panacea for all your regression testing woes. While powerful, UI Testing unfortunately hides some dirty little secrets. Instead of accelerating software development and preventing costly bugs, most UI testing suites are endless distractions, constantly breaking and never really doing what they're designed to do: catch costly bugs.

In this presentation, I'll talk about my experiences in Automated UI Testing over the past several years, covering the pitfalls I've seen team after team fall into. Finally, I'll talk strategy on how you can actually get value out of UI Testing, because it's not all woe and worries.

This talk will cover

  • What makes UI Testing difficult

  • Why teams get tricked into UI testing mistakes

  • Why tools like Storybook are extremely helpful

  • How you can be successful with UI Testing

  • Famous software bugs

    • Boeing 737 MAX MCAS issue
    • Boeing Starliner time issue
    • Mars Climate Orbiter, the Mars probe that got imperial/metric units wrong
  • What's the purpose of testing?

    • To accelerate software development
    • To prevent costly bugs
  • What testing usually does

    • Distracts/impedes developers
    • Causes lots of false failures
    • Gets ignored
    • Makes for good hackathon presentations
  • What testing is sold as vs the reality

    • Sold as: A slightly complex practice that will prevent a majority of bugs
    • Reality: An incredibly complex practice that might prevent a few bugs
  • Why is it so complex?

    • Go into the full list of what makes UI Testing complicated
      • Specifically, the test looks small, but is actually validating a ton of stuff
      • "When I turn the car on, does the engine make sound?"
        • Seems simple, because you're only testing the "sound", but so much must happen for it to make a sound
      • Don't confuse the simplicity of what you're testing with the complexity of what's needed to test it
  • Question the audience

    • Who are you?
      • Devs, QA, PMs...
      • How many CTOs in the audience?
    • Most of us are here because we're interested in the topic
      • Or, if lots of CTOs... describe the plight of the software dev/QA trying to fix this
  • Most of us here can't solve this on our own

    • Talk about how difficult it is to do this
    • Like trying to steer a battleship with a paddle
    • We want to stay in a comfortable place, for a multitude of reasons
      • We're still inexperienced
      • We don't have connections
      • It's our job on the line if we screw up/cause trouble
    • So, we create proof-of-concepts or do a little bit of work to get systems working, then just hope everyone will jump on the train of awesomeness
      • If we're lucky, we get to build out about 50-100 tests that are flaky, slow things down and are rarely useful
  • So how do we solve it?

    • Go big or go small
      • Either do it for yourself, to improve your ability to deliver
      • Or do it as a career move, to improve the QA discipline
    • Either:
      • Accept the career risk with potential benefits, or...
      • Reject the career risk and settle for reality and limits
    • "Well that's good and all, but how do I actually do this?"
    • Big
      • Focus on the money makers of a company. That's where the value lives
      • Have talks with stakeholders, even if you don't know them
        • Identify what keeps them up at night
      • Gain power
        • Talk about powers needed to fix it
      • Use analytics/logging to identify issues not covered by tests
      • That's it. I've never done this, so don't take my advice
    • Small
      • Let go of grand aspirations; it's not going to happen
      • Identify your personal pain points
      • Focus on testing to improve your productivity, not on testing to catch bugs outside of your responsibilities
      • Don't try to sell testing to others. If they're interested in learning more, share your experience, but it's not your job to convince others
      • Use component testing and tools like Storybook to circumvent the need for external systems/external teams (a minimal story is sketched below)
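
As a concrete starting point for that last point, here's a minimal Storybook story in Component Story Format for a hypothetical React `Button` component. It renders the component in isolation, so UI states can be exercised and reviewed without a backend, seeded data, or another team's environment:

```ts
// Button.stories.tsx — a minimal story, assuming React and a hypothetical
// Button component with `label` and `disabled` props.
import type { Meta, StoryObj } from '@storybook/react';
import { Button } from './Button';

const meta: Meta<typeof Button> = {
  title: 'Components/Button',
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

// Each story pins one UI state, testable without any server or test data.
export const Primary: Story = {
  args: { label: 'Submit', disabled: false },
};

export const Disabled: Story = {
  args: { label: 'Submit', disabled: true },
};
```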