Artem Sychov (suchov)
Evolution through Education
Testing Implementation Details (Apr 12, 2017)
One metric of a solid test suite is that you shouldn’t have to modify your tests when
refactoring production code. If your tests know too much about the implementation
of your code, your production and test code will be highly coupled, and even
minor changes in your production code will require reciprocal changes in the test
suite. When you find yourself refactoring your test suite alongside a refactoring
of your production code, it’s likely you’ve tested too many implementation details
of your code. At this point, your tests have begun to hinder refactoring rather
than assist it.
The solution to this problem is to favor testing behavior over implementation.
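As a plain-Ruby sketch of the distinction (the `ShoppingCart` class here is hypothetical), a behavior-level test exercises only the public API, while an implementation-level test peeks at internal state:

```ruby
# Hypothetical example: a cart that totals item prices.
class ShoppingCart
  def initialize
    @items = []            # internal detail: could later become a Hash
  end

  def add(price)
    @items << price
    self
  end

  def total
    @items.sum
  end
end

# Behavior-level check: only the public API is exercised, so the test
# survives a refactoring of how items are stored.
cart = ShoppingCart.new.add(300).add(200)
cart.total # => 500

# Implementation-level check (avoid): peeking at internal state couples
# the test to the Array representation and breaks if we refactor it.
# cart.instance_variable_get(:@items) == [300, 200]
```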
Extracting Helper Methods
Common helper methods should be extracted to `spec/support`, where they can be
organized by utility and automatically included into a specific subset of the tests.
# spec/support/kaminari_helper.rb
module KaminariHelper
  def with_kaminari_per_page(value, &block)
    old_value = Kaminari.config.default_per_page
    Kaminari.config.default_per_page = value
    block.call
  ensure
    Kaminari.config.default_per_page = old_value
  end
end
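Wiring this up is a configuration concern. A common arrangement (a sketch assuming the standard rspec-rails layout; the `type: :request` filter is just an example subset) is to require everything under `spec/support` and include helpers by spec type:

```ruby
# spec/rails_helper.rb (sketch, assuming a standard rspec-rails setup)
Dir[Rails.root.join("spec/support/**/*.rb")].sort.each { |f| require f }

RSpec.configure do |config|
  # Make the helper available only where it is needed, e.g. request specs.
  config.include KaminariHelper, type: :request
end
```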
Duplication
Test code can fall victim to many of the same traps as production code. One of the
worst offenders is duplication. Those who don’t recognize this slowly see productivity
drop as small changes to the production codebase force them to modify multiple tests.
Just like your production code, you should refactor test code, lest it
become a burden. In fact, refactoring tests should be handled at the same time as
refactoring production code: during the refactoring step in Red, Green, Refactor.
You can use all the tools you use in object-oriented programming to DRY up duplicate
test code, such as extracting methods and classes. For feature specs, you may consider using page objects.
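As a minimal plain-Ruby sketch of the "extract a method" move (the `build_admin` helper and its attributes are hypothetical), repeated setup collapses into one place:

```ruby
# Before: every test repeated the same literal hash.
#   user = { name: "Alice", admin: true }
#
# After: the shared setup is extracted into a helper method, exactly as
# you would extract a method in production code. A change to what an
# admin looks like now happens in one place.
def build_admin(name)
  { name: name, admin: true }
end

alice = build_admin("Alice")
bob   = build_admin("Bob")

alice[:admin] # => true
bob[:name]    # => "Bob"
```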
Intermittent Failures
Intermittent test failures are one of the hardest kinds of bug to track down. Before you
can fix a bug, you need to know why it is happening, and if the bug manifests itself
at seemingly random intervals, this can be especially difficult. Intermittent failures
can happen for many reasons, typically related to time or to tests affecting other tests.
We usually advise running your tests in a random order. The goal of this is to
make it easy to tell when tests are being impacted by other tests. If your tests
aren’t cleaning up after themselves, they may cause failures in other tests,
intermittently, depending on the order in which the tests happen to run. When this
happens, the best way to start diagnosing is to rerun the tests using the `seed` of
the failed run.
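The mechanics can be sketched in plain Ruby (the test method names and the leaked `$current_user` global are hypothetical): a fixed seed makes a shuffled run order reproducible, which is what lets you replay a failing order exactly.

```ruby
# test_b silently depends on state that test_a leaves behind.
$current_user = nil

def test_a
  $current_user = "alice"        # sets global state and never cleans it up
  true
end

def test_b
  $current_user == "alice"       # passes only if test_a ran first
end

# Shuffling with an explicit seed is deterministic: the same seed always
# produces the same order, so a failing order can be replayed exactly.
seed  = 1234
order = [:test_a, :test_b].shuffle(random: Random.new(seed))
order.map { |t| send(t) }
```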
Delete tests
Sometimes, a test isn’t worth it. There are always tradeoffs, and if you have a
particularly slow test that is testing a non-mission critical feature, or a feature that
is unlikely to break, maybe it’s time to throw the test out if it prevents you from
running the suite.
Don’t hit external APIs
External APIs are slow and unreliable. Furthermore, you can’t access them without
an internet connection, and many APIs have rate limits. To avoid all of these
problems, you should not be hitting external APIs in the test environment.
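One common way to do this is to inject a fake client in tests instead of a real HTTP client. A plain-Ruby sketch (the `WeatherService` class and its client interface are hypothetical):

```ruby
# The service depends on an injected client rather than making HTTP
# calls directly, so tests can swap in a fake.
class WeatherService
  def initialize(client)
    @client = client
  end

  def summary(city)
    data = @client.fetch(city)   # a real client would make an HTTP call here
    "#{city}: #{data[:temp]} degrees"
  end
end

# Test double that returns canned data: fast, offline, and deterministic.
class FakeWeatherClient
  def fetch(_city)
    { temp: 21 }
  end
end

service = WeatherService.new(FakeWeatherClient.new)
service.summary("Oslo") # => "Oslo: 21 degrees"
```

In a real Rails suite, libraries such as WebMock or VCR serve the same purpose at the HTTP layer, but the dependency-injection pattern above works with no extra gems.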
Move sad paths out of feature specs
Feature specs are slow. They have to boot up a fake browser and navigate around.
They’re particularly slow when using a JavaScript driver which incurs even more
overhead. While you do want a feature spec to cover every user facing feature of
your application, you also don’t want to duplicate coverage.
Many times, feature specs are written to cover both happy paths and sad paths. In
an attempt to avoid duplicating coverage in these slower tests, we’ll often write
our happy path tests as feature specs, and sad paths in some other medium,
such as request specs or view specs. Finding a balance between too many and too
few feature specs comes with experience.
Only persist what is necessary
Persisting to the database takes far longer than initializing objects in memory, and
while we’re talking fractions of a second, each of these round trips to the database
adds up when running your entire suite.
When you initialize new objects, try to do so with the least overhead. Depending
on what you need, you should choose your initialization method in this order:
Object.new - initializes the object without FactoryGirl. Use this when you
don’t care about any validations or default values.
FactoryGirl.build_stubbed(:object) - initializes the object with FactoryGirl,
Use an application preloader
Rails 4.1 introduced another default feature that reduces some of the time it takes to load Rails.
The feature is bundled in a gem called Spring (https://github.com/rails/spring),
which classifies itself as an application preloader.
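In an app generated before 4.1, enabling it is a Gemfile-level change (a sketch; development group only, since Spring should not run in production or CI):

```ruby
# Gemfile
group :development do
  gem "spring"
end
```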
rails_helper vs spec_helper
rspec-rails 3.0 introduced multiple default spec helpers.
When you initialize a Rails app with RSpec, it creates a `rails_helper.rb`, which loads Rails, and a `spec_helper.rb`,
which doesn’t. When you don’t need Rails, or any of its dependencies, require `spec_helper.rb` instead
for a modest time savings.
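For example, a spec for a plain Ruby object can require the lighter helper (the `PriceCalculator` class and file path here are hypothetical):

```ruby
# spec/models/price_calculator_spec.rb
require "spec_helper"   # not rails_helper: this spec never touches Rails

RSpec.describe PriceCalculator do
  it "applies a percentage discount" do
    expect(PriceCalculator.new(100).discount(10)).to eq(90)
  end
end
```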