@MACSkeptic
Created May 29, 2017 14:46

UI Tests == DRAFT (2017-05-29) ==

Changelog

Definition (disambiguation)

  • UI Tests use a real browser to perform actions as close as possible to how real users would perform them.

Goals

  • Verify that no breaking changes have been introduced into critical user journeys (touching as many real resources as possible).
  • Make "dangerous" changes (e.g.: library updates) less so.
  • Test the absolute minimum through the UI needed for the desired level of confidence; a UI test should always be the last option for verifying something.

Problems (today)

  • Time... UI tests currently block even the deploy to dev, which makes long builds impossible to work with
    • Abuse of UI tests for API/service-specific checks (e.g.: testing enrollment on SFT)
    • Serial execution of UI tests (slow by nature) contributes to a very long overall build time
  • Over-reliance on the docker-compose setup and local databases/services reduces fidelity compared to using a real AWS environment
  • Inability to use anything that involves internal services (via proxy; read: the entire racker UI) because the tests run on CircleCI
  • Inability to track failure/duration trends of tests over time
  • Selenium is hard and unstable; it always was and likely always will be

Desired Outcome

  • Every UI test runs on every version that will get merged to master (blocking the merge)
  • UI tests may, but need not, run on every commit
  • Developers must not wait more than 10 minutes to be able to deploy to dev
  • Test nothing through the UI that could be verified through other means (i.e.: prefer API tests, integration tests, unit tests)
  • UI tests run against a real AWS environment, no docker-compose, no local database, no local services
  • UI tests must not block the dev environment; it must be kept free as a resource for manual verification
  • UI tests should be run in parallel
  • The UI test run itself must not take longer than 30 minutes (measured by the slowest branching path of a parallel execution)
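
The 30-minute budget above can be made concrete: in a parallel run, wall-clock time is the duration of the slowest slice, not the sum of all test times. A minimal sketch, with invented numbers:

```python
def wall_clock_minutes(slices):
    """Wall-clock time of a parallel run: the slowest slice dominates,
    regardless of how much total test time the other slices hold."""
    return max(sum(durations) for durations in slices)

# Hypothetical split: three parallel workers, per-test minutes listed.
slices = [[12, 8, 5], [20, 4], [9, 9, 9]]
print(wall_clock_minutes(slices))  # 27: the third slice is the slowest path
```

Balancing the slices so no single one exceeds 30 minutes is what keeps the run inside budget.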

Proposed approach

  • "Build" janus-ui (create an ECS task revision) immediately after unit tests (this could even remain in Circle for now)
  • Create an infrastructure setup for a new ECS service under the "janus dev" AWS account that would have the UI deployed for automated UI testing
  • Create a UI Tests job in jenkins that:
    • Locks the infrastructure allocated for UI tests
    • Enables a specific ECS task revision and waits for the service to switch (might want to set this one to just rotate immediately)
    • Runs the UI tests against the new endpoint (e.g.: ui-tests-dev.manage.rackspace.com)
    • Blocks the merge to janus-ui
    • Is triggerable via "button click" (MVP, we can think of something better for this later)

Challenges / Unsolved Issues

  • Data management:
    • For parallel tests, resources must either be asserted against from read-only accounts or created with a "tag" (sha, timestamp, etc.) identifying them as belonging to a specific test-run slice
    • We might want to run a data cleanup routine before each test run, to catch straggler data (e.g.: if a build crashes and does not clean up after itself)
    • How do we handle things that cannot be undone? The biggest problem here is "create AWS Account" - what's the solution for that? Should we mock?
  • Test suite health over time:
    • Is a specific test trending towards becoming more flaky?
    • Is a specific test trending towards becoming slower?
    • Should we emit/capture metrics to visualize those?
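
The tag-and-sweep idea for data management could look roughly like this; the tag format is a made-up convention, not something that exists today:

```python
import re
import time

def run_tag(sha, timestamp=None):
    """Tag marking a resource as owned by one test-run slice
    (hypothetical format: ui-test-<short sha>-<unix timestamp>)."""
    ts = int(time.time() if timestamp is None else timestamp)
    return f"ui-test-{sha[:8]}-{ts}"

def stale_tags(tags, now, max_age_seconds=3600):
    """Find tags left behind by old runs (e.g. a build that crashed
    before cleaning up), so a pre-run sweep can delete those resources."""
    stale = []
    for tag in tags:
        match = re.fullmatch(r"ui-test-[0-9a-f]{8}-(\d+)", tag)
        if match and now - int(match.group(1)) > max_age_seconds:
            stale.append(tag)
    return stale
```

Each slice creates resources under its own `run_tag`, deletes them on success, and the pre-run sweep calls `stale_tags` to catch anything a crashed build left behind.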
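
For the test-suite-health questions, even a small in-process accumulator would answer them if its numbers were shipped to a metrics backend; a sketch with invented class and method names:

```python
from collections import defaultdict

class TestHealth:
    """Accumulates per-test results so flakiness and slowdown trends
    can be emitted as metrics and graphed over time."""

    def __init__(self):
        self._runs = defaultdict(list)  # test name -> [(seconds, passed)]

    def record(self, name, seconds, passed):
        self._runs[name].append((seconds, passed))

    def failure_rate(self, name):
        """Fraction of recorded runs that failed; rising = getting flaky."""
        runs = self._runs[name]
        return sum(1 for _, ok in runs if not ok) / len(runs)

    def mean_duration(self, name):
        """Average run time in seconds; rising = getting slower."""
        runs = self._runs[name]
        return sum(s for s, _ in runs) / len(runs)
```

Emitting `failure_rate` and `mean_duration` per build would let a dashboard show both trends per test.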