2016-05-18 breakout by Vaz
This breakout covered:
- Using RACK_ENV in the conventional way to use a different db for development than for tests
- Adding RSpec to the skeleton with a (very) basic model spec
- Introduction to acceptance testing and Capybara, focusing on form submission for creating new records
Brief note on environment variables: these are simple variables with string values that are inherited whenever a new process is spawned (on a UNIXy system). Often they're used to configure how a program will work by specifying settings when the command is launched. For example: RACK_ENV=test bundle exec rake db:create will set the RACK_ENV environment variable for that process.
RACK_ENV is a variable recognized by Rack-based web apps that indicates whether the app is being run in development mode, production mode, or for testing. For example, in development mode we might log more debugging messages and run less securely, and in test mode we might wipe the database very frequently. Sinatra, being Rack-based, also recognizes this environment variable (for example, giving convenience methods like Sinatra.development?).
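In Ruby, reading an environment variable is just a lookup on the ENV object. A minimal sketch of the convention described above (the helper name here is illustrative, not from the app):

```ruby
# Read RACK_ENV from the process environment, defaulting to 'development'
# when it is unset (the conventional default).
def rack_env
  ENV.fetch('RACK_ENV', 'development')
end

ENV.delete('RACK_ENV')
puts rack_env            # prints "development" when the variable is unset

ENV['RACK_ENV'] = 'test'
puts rack_env            # prints "test" once it is set
```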
We showed how, using the RACK_ENV variable, we could dynamically specify the database file so that each environment has its own database.
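The idea can be sketched in a couple of lines (the path layout and helper name are illustrative; the skeleton's actual config may differ):

```ruby
# Derive a per-environment SQLite file from RACK_ENV, so development and
# test each get their own database file.
def database_file(env = ENV.fetch('RACK_ENV', 'development'))
  "db/#{env}.sqlite3"
end

puts database_file('development')  # prints "db/development.sqlite3"
puts database_file('test')         # prints "db/test.sqlite3"
```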
We also updated the Rakefile to recognize this. In fact, we reverted to the default Rake task definitions because they already support it.
Note that in order to have a test database to work with, we needed to create and migrate it:
$ RACK_ENV=test bundle exec rake db:create
$ RACK_ENV=test bundle exec rake db:migrate
Also note that development is the default environment if none is set.
We added the gems rspec, capybara, and database_cleaner, and set up a spec_helper.rb to configure our test environment. See the commit here.
A couple of things to note:
config/environment.rb is the entry point for defining this web app. The spec helper requires this, but first it defaults the RACK_ENV var to test so we don't have to specify that when we run rspec:
# in spec/spec_helper.rb
ENV['RACK_ENV'] ||= 'test'
require_relative '../config/environment'
# ...
The RSpec configuration block does two things: it includes Capybara's methods so they're available to use in our tests, and it sets up DatabaseCleaner to truncate (drop all rows) after each test so tests are isolated and repeatable.
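A sketch of what that configuration block might look like, assuming the standard Capybara and DatabaseCleaner APIs (the exact commit may differ in details):

```ruby
# in spec/spec_helper.rb (sketch)
require 'capybara/rspec'
require 'database_cleaner'

RSpec.configure do |config|
  # Make Capybara's methods (visit, page, fill_in, ...) available in specs.
  config.include Capybara::DSL

  # Truncation drops all rows, so every example starts with an empty database.
  config.before(:suite) { DatabaseCleaner.strategy = :truncation }
  config.after(:each)   { DatabaseCleaner.clean }
end
```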
We also created an example model spec so we remember what unit tests look like.
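A model spec along those lines might look roughly like this (the Album attributes are assumed from the form later in these notes; the spec in the actual commit may differ):

```ruby
# in spec/models/album_spec.rb (sketch)
require_relative '../spec_helper'

describe Album do
  it 'is invalid without a title' do
    album = Album.new(title: nil)
    expect(album).not_to be_valid
  end

  it 'is valid with a title' do
    expect(Album.new(title: 'Blue Train')).to be_valid
  end
end
```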
First, to get the hang of it, we added a very basic acceptance test for the homepage: when visiting the root ('/') path, we should expect to see the text "Home Page" on the page. Very basic, but it shows the essential pattern: visit a page, potentially interact with it the way a user would, and verify what's "on-screen".
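That homepage test might be sketched like this (spec file name and wording assumed, not taken from the commit):

```ruby
# in spec/acceptance/home_spec.rb (sketch)
require_relative '../spec_helper'

describe 'Home page' do
  it 'shows the heading' do
    visit '/'                          # load the root path
    expect(page).to have_content('Home Page')  # verify what's "on-screen"
  end
end
```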
Then we worked on a more substantial topic: visiting a page with a form to add a new Album, filling out the form with both valid and invalid data, and verifying the results.
See the commit here... this one's bigger.
Focus on spec/acceptance/album_spec.rb first, and then on how the rest of the commit allows those specs to pass.
I used RSpec's subject feature a few times in the committed example. The whole describe block has a subject of page:
describe 'Albums scenarios' do
subject { page }
# ...
end
First, page is a method added by Capybara that always returns an object representing whatever the page (the HTML DOM document) looks like at the current moment; that is, it changes over the course of the tests as you interact with it, and it is used to verify results.
Specifying a subject in RSpec allows for writing some succinct tests. When the subject is page, these are equivalent:
these are equivalent:
it 'should preserve the submitted form data' do
expect(page).to have_field('record_label', with: album_record_label)
end
and
it { should have_field('record_label', with: album_record_label) }
...with the exception that the first has a custom documentation string, which you can see when you run rspec with --format documentation, and is generally more readable.
Another thing I wanted to emphasize is that simpler tests are often better. You don't want tests to start failing in the future because you changed something inconsequential. It will happen sometimes, but you can minimize it by not being overly specific in your tests. For example, when viewing the index page when there are no Albums in the database:
it { should_not have_selector('ul.albums li') }
This is more than specific enough. Avoid the temptation to check for too many long, exact strings of text, deeply nested elements or other things that you might change later without breaking the behaviour of the page (from a user's perspective).
I did use:
it { should have_selector('.errors', text: "Title can't be blank") }
which could probably just have checked for text: /title/i and been an equally effective, but less fragile, test.
We covered the form re-rendering pattern while working on the tests. It's this pattern:
# in app/actions.rb
get '/albums/new' do
  @album = Album.new
  erb :'albums/new'
end

post '/albums' do
  @album = Album.new(params.slice('title', 'record_label', 'release_date'))
  if @album.save
    redirect to('/albums')
  else
    erb :'albums/new'
  end
end
And the view:
<!-- in app/views/albums/new.erb -->
<h1>New Album</h1>

<% if @album.errors.any? %>
  <ul class="errors">
    <% @album.errors.full_messages.each do |error| %>
      <li><%= error %></li>
    <% end %>
  </ul>
<% end %>

<form action="/albums" method="post">
  <input name="title" value="<%= @album.title %>">
  <input name="record_label" value="<%= @album.record_label %>">
  <input type="date" name="release_date" value="<%= @album.release_date %>">
  <input type="submit" value="Create">
</form>
The pattern is meant to allow the same view to be rendered when the form is first loaded (@album is an Album instance, but it is unsaved and its attributes are empty, so the fields are empty), and, in case the object fails to save, to re-render the form with the values that were submitted (@album is the invalid instance, with attributes from the form submission, so the fields are filled in and the user doesn't have to type them again).
We also conditionally show errors, which would exist if the view is being rendered after a failed submission.
Our acceptance tests verify this behaviour.
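For instance, the happy path of that scenario might be sketched like this (field names taken from the form above; the values and exact expectations in the committed spec may differ):

```ruby
# in spec/acceptance/album_spec.rb (sketch)
require_relative '../spec_helper'

describe 'Albums scenarios' do
  subject { page }

  context 'when submitting the new-album form with valid data' do
    before do
      visit '/albums/new'
      fill_in 'title', with: 'Blue Train'         # fields matched by name
      fill_in 'record_label', with: 'Blue Note'
      click_button 'Create'
    end

    # On success the action redirects to the index.
    it { should have_current_path('/albums') }
  end
end
```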
The default Capybara "web driver" (what interacts with the rendered document) is the rack_test driver, and it doesn't handle running javascript at all, so if you're trying to verify results that depend on javascript altering the page, you need to use another driver (like Poltergeist). I didn't cover this as it's more advanced (see the docs), but it's worth noting here.
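For reference, switching the driver for javascript-dependent specs is a one-line configuration change (Poltergeist requires PhantomJS to be installed; this wasn't part of the breakout):

```ruby
# in spec/spec_helper.rb (sketch)
require 'capybara/poltergeist'

# Use Poltergeist (headless WebKit via PhantomJS) for specs tagged js: true.
Capybara.javascript_driver = :poltergeist
```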