General TDD Advice

  • If you can't test the code efficiently, refactor the code.
  • Don't hardcode URLs in views, templates, or tests. Use named views and reverse URL resolution instead.
  • Never refactor against failing UNIT tests.
  • Don't forget the REFACTOR on 'Red, Green, Refactor'.
  • Tests make it possible to use the application's state as save-points for refactoring: once the tests pass again, you know your refactoring is done.
  • Every single FT doesn't need to test every single part of your application, but use caution when de-duplicating your FTs. (FTs exist to catch unpredictable interactions between different parts of your application, after all)
  • Use loggers named after the module you're in. Follow the logging.getLogger(__name__) pattern to get a logger that's unique to your module, but that inherits from a top-level configuration you control.
  • Keep Old Integrated Test Suites Around as a Sanity Check ...
  • Use the TestCase.addCleanup (link) method as an alternative to the tearDown function for cleaning up resources used during the test. It's most useful when the resource is only allocated halfway through a test, so you don't have to spend time in tearDown figuring out what does or doesn't need cleaning up (see the sketch after this list).
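
A minimal sketch of the logger and addCleanup tips above, assuming a hypothetical test that allocates a temporary directory:

    import logging
    import tempfile
    import unittest

    logger = logging.getLogger(__name__)  # module-level logger, inherits the top-level config


    class TempDirTest(unittest.TestCase):
        def test_uses_a_temporary_directory(self):
            tmpdir = tempfile.TemporaryDirectory()
            # register cleanup right where the resource is allocated,
            # instead of sorting it out later in tearDown
            self.addCleanup(tmpdir.cleanup)
            logger.info('created temp dir %s', tmpdir.name)
            self.assertTrue(tmpdir.name)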

Test Isolation

  • Unit tests
    • Create a small "unit" of code.
    • Units include as few branches (layers?) as possible.
  • Integration tests
    • Tests the contracts between your units.
  • Functional != Integrated != Isolated ("mocky")
  • Use unittest.TestCase when testing for isolation. Django's TestCase might do undesired integration (magic) in the background.
  • When isolating tests, mock every external-layer object that isn't the focus/subject of the test (see the sketch after this list). For example:
    • When testing views, we mock out forms and django.shortcuts functions.
    • When testing forms, we mock out models.
    • When testing X, mock Y and Z if f(X) depends on (Y, Z).
  • Outside-In TDD: A methodology for building code, driven by tests, which proceeds by starting from the “outside” layers (presentation, GUI), and moving “inwards” step by step, via view/controller layers, down towards the model layer. The idea is to drive the design of your code from the use to which it is going to be put, rather than trying to anticipate requirements from the ground up.
  • Programming by wishful thinking: The outside-in process is sometimes called “programming by wishful thinking”. Actually, any kind of TDD involves some wishful thinking. We’re always writing tests for things that don’t exist yet.
  • The pitfalls of outside-in: Outside-In isn’t a silver bullet. It encourages us to focus on things that are immediately visible to the user, but it won’t automatically remind us to write other critical tests that are less user-visible, things like security for example. You’ll need to remember them yourself.
  • At the Models layer, we no longer need to write isolated tests—the whole point of the models layer is to integrate with the database, so it’s appropriate to write integrated tests
  • When doing outside-in TDD with isolated tests, you need to keep track of each test’s implicit assumptions about the contract which the next layer should implement, and remember to test each of those in turn later. You could use our scratchpad for this, or create a placeholder test with a self.fail
  • Writing isolated tests does make you very aware of where the complexities in your code lie.
  • There may even be a case for building them [integrated tests] as a separate test suite—you could have one suite of fast, isolated unit tests that don’t even use manage.py, because they don’t need any of the database cleanup and teardown that the Django test runner gives you, and then the intermediate layer that uses Django, and finally the functional tests layer that, say, talks to a staging server. It may be worth it if each layer delivers incremental benefits.
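
As an illustration of mocking out the layer below (the bullet about isolating views from forms above), a hedged sketch; the lists.views module, the new_list view, and ItemForm are assumptions, not code from these notes:

    import unittest
    from unittest.mock import Mock, patch

    from lists import views  # hypothetical module under test


    class NewListViewIsolatedTest(unittest.TestCase):
        @patch('lists.views.redirect')   # mock out the django.shortcuts function
        @patch('lists.views.ItemForm')   # mock out the form class
        def test_passes_post_data_to_form(self, mockItemForm, mock_redirect):
            request = Mock()
            views.new_list(request)
            # the view's only job here is to hand the POST data to the form
            mockItemForm.assert_called_once_with(data=request.POST)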

Things to test

Views: Viewing of data, changing of data, and custom class-based view methods.

Models: Creating/updating/deleting of models, model methods, model manager methods.

Forms: Form methods, clean() methods, and custom fields.

Validators: Really dig in and write multiple test methods against each custom validator you write. Pretend you are a malignant intruder attempting to damage the data in the site.

Signals: Since they act at a distance, signals can cause grief especially if you lack tests on them.

Filters: Since filters are essentially just functions accepting one or two arguments, writing tests for them should be easy.

Template Tags: Since template tags can do anything and can even accept template context, writing tests often becomes much more challenging. This means you really need to test them, since otherwise you may run into edge cases.

Miscellany: Context processors, middleware, email, and anything else not covered in this list.

  • Each test should test one thing only
  • Each user story should have one functional test file.
  • Each tested source file should have one unit test file (models, views and forms).
  • Check the template used. Then, check each item in the template context.
  • Check any objects are the right ones, or querysets have the correct items.
  • Check any forms are of the correct class.
  • Test all template logic: any for or if should get a minimal test.
  • For views that handle POST requests, make sure you test both the valid case and the invalid case.
  • Sanity-check that your form is rendered, and its errors are displayed.
  • Test important log messages. If a log message is important enough to keep in your codebase, it's probably important enough to test. Using the patch.object context manager on the logger of the module you're testing is one convenient way of unit testing it (see the sketch after this list).
  • Have at least a placeholder test for every class and function. (placeholder test = failing test, to be defined in the future)
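
A sketch of the log-message tip above, assuming a hypothetical myapp.views module whose logger emits a warning on bad input:

    from unittest import TestCase
    from unittest.mock import patch

    from myapp import views  # hypothetical module that calls logger.warning(...)


    class LoggingTest(TestCase):
        def test_logs_warning_on_invalid_input(self):
            # patch the logger that lives in the module under test
            with patch.object(views.logger, 'warning') as mock_warning:
                views.handle_invalid_input('bad data')  # hypothetical call
                mock_warning.assert_called_once_with('invalid input: %s', 'bad data')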

Third-party integrations

  • Don’t test other people’s code or APIs.
  • But, test that you’ve integrated them correctly into your own code.
  • Check that everything works from the point of view of the user.
  • Test that your system degrades gracefully if the third party is down.
  • This test (object belongs to user, page 330) uses the raw view function, and manually constructs an HttpRequest because it’s slightly easier to write the test that way. Although the Django test client does have a helper function called login , it doesn’t work well with external authentication services.
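A hedged sketch of calling a raw view function with a hand-built HttpRequest, as the note above describes; the my_lists view, its signature, and the email-based user model are assumptions:

    from django.contrib.auth import get_user_model
    from django.http import HttpRequest
    from django.test import TestCase

    from lists.views import my_lists  # hypothetical view under test

    User = get_user_model()


    class MyListsViewTest(TestCase):
        def test_view_responds_for_the_logged_in_user(self):
            user = User.objects.create(email='a@b.com')
            request = HttpRequest()
            request.user = user                    # no test-client login needed
            response = my_lists(request, user.email)
            self.assertEqual(response.status_code, 200)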

Django Views

  • Thin Views: If a view gets too complex, consider moving its logic to a form, or to a custom method on the model class.
  • Use the same view to process POST requests and to render the form that they come from.
  • Use django forms for validation.
  • Django has a built-in logout view.
  • When users are not logged in, Django represents them using a class called AnonymousUser, whose .is_authenticated() always returns False.

Django Test Suite

  • Use the Django test client for integrated unit tests (see the sketch after this list).
  • self.client.get(url)
  • self.client.post(url, data={})
  • self.assertTemplateUsed(response, template_name) # template_name is a string, e.g. 'home.html'
  • self.assertRedirects(response, url)
  • self.assert(Not)?Contains(response, text)
  • self.assertMultiLineEqual(text1, text2) # diff like output.
  • self.assertRaises(ErrorObj) as a context manager.
  • from django.contrib.staticfiles.testing import StaticLiveServerTestCase
  • django.shortcuts.redirect accepts three types of arguments:
    1. a model instance with .get_absolute_url() method
    2. a view name as a string, with optional arguments
    3. a url (absolute or relative)
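
A minimal integrated test using the test-client helpers listed above; the 'home' URL name, template name, and error message are assumptions:

    from django.test import TestCase
    from django.urls import reverse


    class HomePageTest(TestCase):
        def test_home_page_renders_home_template(self):
            response = self.client.get(reverse('home'))
            self.assertTemplateUsed(response, 'home.html')

        def test_invalid_post_shows_form_error(self):
            response = self.client.post(reverse('home'), data={'text': ''})
            self.assertContains(response, 'This field is required')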

Django Templates

  • Template tags/variables can be used inside <script> tags.
  • {% csrf_token %} renders a full html tag, while {{ csrf_token }} renders only a string.

Django Models

  • model.objects.get(param=value) # gets 1 obj
  • model.objects.filter(param=value) # gets all matching objs
  • model.objects.create(**kwargs) # creates and saves obj
  • model.full_clean() # validates object attributes
  • model.save() # persist obj in db
  • The @property decorator transforms a method on a class to make it appear to the outside world like an attribute.
  • Use in-memory (unsaved) model objects in your tests whenever you can; it makes your tests faster.
  • Don't do: Model.objects.create(name='john')
  • Do this instead: Model(name='john')
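
A sketch of the in-memory tip above, assuming a hypothetical Item model with a required text field:

    from django.core.exceptions import ValidationError
    from django.test import TestCase

    from lists.models import Item  # hypothetical model


    class ItemValidationTest(TestCase):
        def test_blank_text_is_invalid(self):
            item = Item(text='')          # in-memory only: no DB write, faster test
            with self.assertRaises(ValidationError):
                item.full_clean()         # validation runs without saving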

Selenium

  • Consider Splinter as an alternative to Selenium.
  • API
  • browser.find_element_by_id('element-id') # takes the bare id; '#' is only for CSS selectors
  • browser.find_element_by_tag_name('tag')
  • browser.find_element_by_css_selector('.class')
  • browser.find_element_by_link_text('Login')
  • browser.find_elements_by_[...] # plural, return list of elements
  • element.send_keys('abc')
  • element.send_keys('\n') # equals sending an ENTER
  • element.send_keys('abc\n') # equals typing abc and pressing enter
  • Use waits in Selenium tests.
  • implicitly_wait works reasonably well for any calls to any of the Selenium find_element_ calls, but it doesn’t apply to browser.current_url. Selenium doesn’t “wait” after you tell it to click an element, so what’s happened is that the browser hasn’t finished loading the new page yet, so current_url is still the old page.
  • (from a tooltip after definition of custom wait function, page 378) We’ve seen that Selenium provides WebdriverWait as a tool for doing waits, but it’s a little restrictive. This hand-rolled version lets us pass a function that does a unittest assertion, with all the benefits of the readable error messages that it gives us.
  • Sometimes Selenium runs a find_element_by_<...> call against the old page before the new one has loaded, and actually finds a match. That tends to raise a StaleElementReferenceException.
  • Unexpected StaleElementReferenceException errors from Selenium often mean you have some kind of race condition. You should probably specify an explicit interaction/wait pattern.
  • Instead, it’s always prudent to follow the “wait-for” pattern whenever we want to check on the effects of an interaction that we’ve just triggered.
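
A hand-rolled "wait-for" helper in the spirit of the notes above (a sketch, not Selenium's own API; MAX_WAIT and the polling interval are arbitrary choices):

    import time

    from selenium.common.exceptions import WebDriverException

    MAX_WAIT = 10  # seconds


    def wait_for(assertion_fn):
        """Retry an assertion until it passes or MAX_WAIT expires."""
        start_time = time.time()
        while True:
            try:
                return assertion_fn()
            except (AssertionError, WebDriverException) as e:
                if time.time() - start_time > MAX_WAIT:
                    raise e
                time.sleep(0.5)

    # usage inside a functional test:
    # wait_for(lambda: self.assertIn('buy milk', self.browser.find_element_by_id('id_list_table').text))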

Testing Authentication, Login, etc.

  • It's convenient to isolate user authentication from other Functional Tests, especially when using a 3rd-party API for it. To achieve this we need to "pre-authenticate" the test user.
    • It's easy to do that in the local test server (we have direct access to the test server and database).
    • It's not so simple to do that in production/staging.
  • Mockmyid.com can be used to mock Persona logins tests (Also consider Persona Test User)
  • A very common pattern in Selenium testing is waiting for something to happen, especially when testing external services (like Mozilla Personas). So, prefer the "wait-for-something" testing pattern when testing external services.
  • It's most likely that I'll need to define my own testing methods to explicitly wait for something to happen. [TDD w/ Python, p.253]

Deploy

  • Keep database, static and virtualenv alongside source dir.
  • git is invoked from the server to fetch latest source code.
  • Use git tag to tag the commit which has been deployed:
    • LIVE tag, for showing which commit is on the live server;
    • DEPLOYED tag, containing the datetime of the deploy. Like this:
    $ git tag -f LIVE # needs the -f because we are replacing the old tag
    $ export TAG=`date +DEPLOYED-%FT%H:%M` # ISO 8601
    $ git tag $TAG
    $ git push -f origin LIVE $TAG
  • Keep a static files folder under project directory.
  • Use the STATICFILES_DIRS variable in settings.py.
  • Enhance the logging by using the LOGGING variable in settings.py.
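
A minimal LOGGING sketch for settings.py; the handler and the app logger name are assumptions:

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'handlers': {
            'console': {'class': 'logging.StreamHandler'},
        },
        'loggers': {
            'django': {'handlers': ['console'], 'level': 'INFO'},
            # hypothetical app logger, picked up by logging.getLogger(__name__)
            'lists': {'handlers': ['console'], 'level': 'DEBUG'},
        },
    }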

Nginx

  • must be installed via apt-get.
  • must be started/reloaded as a service.
  • config files should be saved in /etc/nginx/sites-available and then symlinked to /etc/nginx/sites-enabled.
  • $ sudo nginx -t will test the config files.
  • nginx will serve static files instead of python/django/gunicorn.

Gunicorn

  • installed via pip.
  • must be pointed to the wsgi.py file.
  • must be run under Upstart.
  • must use sockets along with nginx.
  • script should be put in /etc/init/.
  • script file name must end in '.conf'.
  • $ sudo start|stop <script> to start/stop the job. (shouldn't I put this at the end of the fabfile?)
  • must detail in the configuration file where to put access and error log files
  • must also redirect errors from std.err to the error file, like this:
    --error-logfile ../error.log 2>> ../error.log 
    
  • the command init-checkconf can be used to check the syntax of a .conf file. (Helps a lot!)

Fabric

  • Still on python2.
  • Shouldn't be on requirements.txt (it's a local system tool).
  • fabfile must be sourced.
  • Asks for usernames and passwords at runtime (no need to hardcode them in the fabfile).
  • Has issues with single-quoted strings. I've had problems when using fabric.sed, when Fabric couldn't properly escape a single quote when passing it to the sed command, so I had to change from s/DOMAIN = 'localhost'/.../g to s/DOMAIN = "localhost"/.../g
  • Because our tests are Python 3, we can’t directly call our Fabric functions, which are Python 2. Instead, we have to do an extra hop and call the fab command as a new process, like we do from the command line when we do server deploys. (TDD book, page 314)
  • Can be used to run python manage.py <command> on the server.
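
A sketch of hopping from Python 3 test code to the Python 2 fabfile via the fab command, as described above; the task name, host, and deploy_tools directory are assumptions:

    import subprocess


    def run_fab_task(task, host):
        # same as running `fab <task> --hosts=<host>` from the command line
        subprocess.check_call(
            ['fab', task, '--hosts={}'.format(host)],
            cwd='deploy_tools',  # hypothetical directory containing fabfile.py
        )

    # usage: run_fab_task('deploy', 'staging.example.com')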

Django Forms

  • process and validate user input.
  • used in templates to render HTML forms (inputs and errors).
  • can also save data on the db.
  • form.as_p() renders form as HTML.
  • form.is_valid() checks if form is valid before saving data, and also populates the errors attribute ({'field-name': 'error-msg'})
  • for long/complex forms customization, check:
    • django-crispy-forms
    • django-floppyforms
  • ModelForm: class which can auto-generate a form for a model.
  • Form constructor: form = ModelForm(data=request.POST)
  • After validation, form.errors will contain a dict mapping field names to lists of validation errors.
  • form.<fieldname>.errors will contain the list of errors for a given field.
  • form.instance represents the Model instance that is being modified or created.
  • form.save() can be overridden to accept extra arguments (e.g. to help with model creation).
  • Prefer ValidationError over IntegrityError, since the former uses validation logic before attempting to save the object, and the latter comes directly as a rejection from the database.
  • So far, forms are a good place to handle object creation, since they have all the required info; the actual creation methods are better defined in the Model layer.
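
A ModelForm sketch pulling the bullets above together; the Item model, its text field, and the for_list argument are assumptions:

    from django import forms

    from lists.models import Item  # hypothetical model

    EMPTY_ITEM_ERROR = "You can't have an empty list item"


    class ItemForm(forms.ModelForm):
        class Meta:
            model = Item
            fields = ('text',)
            error_messages = {'text': {'required': EMPTY_ITEM_ERROR}}

        def save(self, for_list):
            # extra argument: the form knows how to create the object
            # against the right parent, keeping the view thin
            self.instance.list = for_list
            return super().save()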

Unit Testing Javascript with QUnit

  • Also consider Jasmine and Karma.
  • It's a UNIT test suite, not a Functional/Integration/Acceptance test suite.
  • QUnit mainly expects you to “run” your tests using an actual web browser. This has the advantage that it’s easy to create some HTML fixtures that match the kind of HTML your site actually contains, for tests to run against.
  • Add a testing folder to my sourced static files, and place QUnit in there.
  • Create a (single) HTML file configured for tests (view QUnit site for specifics)
  • On the tests.html file import:
    • qunit.js
    • qunit.css
    • script-being-tested.js
  • Place script-being-tested.js in the root of app/static/ folder, so my template can invoke it like this:
    <!-- from django template -->
    <script src="/static/script-being-tested.js"></script>
    
    <!-- from QUnit test html file -->
    <script src="../script-being-tested.js"></script>
  • How does JavaScript testing integrate with the TDD cycle?
    1. Write an FT and see it fail.
    2. Figure out what kind of code you need next: Python or JavaScript?
    3. Write a unit test in either language, and see it fail.
    4. Write some code in either language, and make the test pass.
    5. Rinse and repeat.

Mocking in Javascript

  • Use sinon.js
  • Mock 3rd-party API function calls.
  • Mock objects can tell if (and how many times) they were called. They can also report the arguments passed to them.
  • It's all about checking if my code called some specific functions with specific arguments. So, I must check:
    1. Was function f(x) called?
    2. Was function f(x, y) called with x as its first argument and y as its second?
  • Test function calls, when they should get called, and the arguments passed to them.
  • sinon.js can patch browser-wide Javascript objects, such as:
    • XMLHttpRequest, for handling and monitoring (observer pattern) all requests without actually transmitting them (sinon.FakeXMLHttpRequest();)
    • A fake server for handling predefined responses (status codes and such) (sinon.fakeServer.create();)
    • Note that those fake servers should be reset/restored between tests via setup/teardown calls.

PhantomJS

  • Must install nodejs (from a ppa rather than the Ubuntu repositories) and npm first.
  • Install via $ sudo npm install -g phantomjs.
  • Grabbed a runner.js file to make it work with QUnit.
  • Installed version 1.9.7, since 1.9.8 (current at the time of this writing) had an open issue/bug that made my tests fail. $ sudo npm install -g phantomjs@1.9.7

Spiking

  • [Definition from C2/Ward's Wiki](http://c2.com/cgi/wiki?SpikeSolution)
  • Exploratory coding to find out about a new API, or to explore the feasibility of a new solution. Spiking can be done without tests. It’s a good idea to do your spike on a new branch, and go back to master when de-spiking.
  • Start a new branch and develop something "prototypal" without TDD, just to see if it works.
  • If it does work, write Functional Tests to cover it.
  • Go back to the TDD branch, bringing only your spike Functional Tests (delete all spiked code).
  • Use the Functional Tests to guide your TDD again.
  • Now go on and write unit tests to make the FT's pass!
  • (BTW, "de-spiking" means rewriting your prototype code using TDD)

Mocking in Python

  • Using unittest.mock [link]
  • Common uses for Mock objects include:
    • Patching methods
    • Recording method calls on objects
  • You might want to replace a method on an object to check that it is called with the correct arguments by another part of the system.
  • the unittest.mock.patch decorator can mock any object from any module.
  • mock objects are truthy.
  • mock objects can mask errors.
  • Too many mocks are a code smell.
  • side_effect(unittest.mock.Mock.side_effect): A function to be called whenever the Mock is called.
  • Two common mistakes when using mock side-effects are: assigning the side effect too late, i.e. after you call the function under test, and forgetting to check that the side-effect function was actually called.
  • Use side_effect in tests to assert about the application state when its mock is called (like a trap-bomb)
  • When testing for the return value of a patched object, we: (page 345, comment #3)
    1. grab the reference to the patched object, passed as an additional parameter to the test function; (let's call it mock_obj)
    2. make the patched object get called somehow in the original code, and grab its return value; (let's call it result)
    3. compare if result == mock_obj.return_value.
    4. Since result came from mock_obj(), and mock_obj() == mock_obj.return_value, we can assert if they are equal.
  • When it becomes too hard to mock or write a test for a test subject, it can be a sign that we need better API from lower-level constructs (like helper methods in Forms or in models)
  • "Fake it 'til you make it with mock"
  • Use Mock to emulate model instances.
    • set the attributes you need for testing directly. (inside the test method/case)
    • Use the spec argument so the mock only exposes attributes the real model has.
    • No model/ORM overhead
    • Honda Civic example
  • Use mock.patch to focus your tests.
    • Alter the behaviour of code imported elsewhere.
    • Eliminate branches (collaborators?) you are not interested in testing.
  • Use mock in more complex situations.
    • mock.patch.multiple # I'm still trying to understand this one...
    • Track the way objects are used (Mock.called, Mock.assert_called_with, ...)
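
A sketch tying patch, return_value, call assertions, and spec together; the vehicles module, Car model, and fetch_car helper are assumptions (the "Honda Civic example" above refers to emulating a model instance with a spec'd Mock):

    from unittest import TestCase
    from unittest.mock import Mock, patch

    from vehicles import views          # hypothetical module under test
    from vehicles.models import Car     # hypothetical model


    class CarViewTest(TestCase):
        def test_returns_fetch_car_result(self):
            with patch.object(views, 'fetch_car') as mock_fetch:  # hypothetical helper
                result = views.show_car('civic')
                # result came from mock_fetch(), so it must equal its return_value
                self.assertEqual(result, mock_fetch.return_value)
                mock_fetch.assert_called_once_with('civic')

        def test_mock_can_stand_in_for_a_model_instance(self):
            car = Mock(spec=Car)        # only attributes the real Car has are allowed
            car.name = 'Honda Civic'    # set just what the test needs; no ORM overhead
            self.assertEqual(car.name, 'Honda Civic')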

Sessions and the Live Server

  • The intent of this test is to set up a pre-authenticated session, using cookies. (TDD with Python, page 304)

  • We create a session object in the database. (The session key is the PK of the user object.)

  • We then add a cookie to the browser that matches the session on the server, so on our next visit to the site, the server should recognize us as a logged-in user.

  • Note that, as it is, this will only work because FunctionalTest inherits from LiveServerTestCase, so the User and Session objects we create will end up in the same database as the test server.

  • Later we'll need to modify it so that it works against the database of the staging server too.

  • We've got a function that can create a session in the database. If we detect we're running locally, we call it directly. If we're running against the staging server, there are a couple of hops: we use subprocess to invoke Fabric via the fab command, which lets us run a management command that calls that same function on the server. (TDD book, page 316)
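
A sketch of that session-creation function, following the book-style approach described above; the email-based User model and settings are assumptions:

    from django.conf import settings
    from django.contrib.auth import BACKEND_SESSION_KEY, SESSION_KEY, get_user_model
    from django.contrib.sessions.backends.db import SessionStore

    User = get_user_model()


    def create_pre_authenticated_session(email):
        user = User.objects.create(email=email)
        session = SessionStore()
        session[SESSION_KEY] = user.pk
        session[BACKEND_SESSION_KEY] = settings.AUTHENTICATION_BACKENDS[0]
        session.save()
        # the returned key goes into the browser's "sessionid" cookie
        return session.session_key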

Custom Management Commands

  • The boilerplate in management command is a class that inherits from django.core.management.BaseCommand, and that defines a method called handle.
  • handle will pick up an email address as the first command-line argument and then return the session key that we'll want to add to our browser cookies; the management command prints it out at the command line.
  • We need to add functional_tests (/source/functional_tests/management/commands/create_session.py) to our settings.py so Django recognises it as a real app that might have management commands as well as tests.
  • We need to adjust a FT file so that it runs the local function when we're on the local server, and make it run the management command on the staging server if we're on that. (TDD book, page 312)
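
Boilerplate sketch of such a management command; the helper import path is hypothetical (it could just as well live in the same file):

    from django.core.management.base import BaseCommand

    # hypothetical location for the session-creation helper sketched earlier
    from functional_tests.session_helpers import create_pre_authenticated_session


    class Command(BaseCommand):
        def add_arguments(self, parser):
            parser.add_argument('email')

        def handle(self, email, *args, **options):
            session_key = create_pre_authenticated_session(email)
            self.stdout.write(session_key)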

Jenkins

  • Follow instructions from the official wiki rather than installing via apt-get
  • Start the service with sudo /etc/init.d/jenkins start
  • Access Jenkins through www.domain.com:8080
  • It’s not a good idea to install Jenkins on the same server as our staging or production servers. Apart from anything else, we may want Jenkins to be able to reboot the staging server!
  • I would look into using headless browsers as a “dev-only” tool, to speed up the running of FTs on the developer’s machine, while the tests on the CI server use actual browsers.
  • Perhaps more interestingly, you can use your CI server to automate your staging tests as well as your normal functional tests. If all the FTs pass, you can add a build step that deploys the code to staging, and then reruns the FTs against that—automating one more step of the process, and ensuring that your staging server is automatically kept up to date with the latest code.
  • Can use the same deploy fabric script to deploy the staging server, but first I must change the following: (source)
    1. Change the jenkins user to myself
    • Open /etc/default/jenkins
    • Change the JENKINS_USER variable to whatever you want.
    • JENKINS_USER="tiago"
    • Make sure that user exists in the system and has a valid SSH key setup.
    2. Then change the ownership of the Jenkins home, Jenkins webroot and logs.
    • $ sudo chown -R tiago:tiago /var/lib/jenkins
    • $ sudo chown -R tiago:tiago /var/cache/jenkins
    • $ sudo chown -R tiago:tiago /var/log/jenkins
    3. Restart Jenkins and check the user has changed.
    • $ sudo service jenkins restart
    • $ ps -ef | grep jenkins

Page Pattern

  • Reference
  • Involves having objects to represent different pages on your site, and to be the single place to store information about how to interact with them.
  • It’s usually best to have a separate file for each Page object.
  • The idea behind the Page pattern is that it should capture all the information about a particular page in your site, so that if, later, you want to go and make changes to that page—even just simple tweaks to its HTML layout, for example—you have a single place to go to adjust your functional tests, rather than having to dig through dozens of FTs.
  • You kind of "objectify" a page.
  • The Page Object Design Pattern provides the following advantages:
    1. There is a clean separation between test code and page-specific code such as locators (or their use if you're using a UI map) and layout.
    2. There is a single repository for the services or operations offered by the page, rather than having these services scattered throughout the tests.

Implementation Notes (link)

PageObjects can be thought of as facing in two directions simultaneously. Facing towards the developer of a test, they represent the services offered by a particular page. Facing away from the developer, they should be the only thing that has a deep knowledge of the structure of the HTML of a page (or part of a page). It's simplest to think of the methods on a Page Object as offering the "services" that a page offers rather than exposing the details and mechanics of the page. As an example, think of the inbox of any web-based email system. Amongst the services that it offers are typically the ability to compose a new email, to choose to read a single email, and to list the subject lines of the emails in the inbox. How these are implemented shouldn't matter to the test.

Because we're encouraging the developer of a test to try and think about the services that they're interacting with rather than the implementation, PageObjects should seldom expose the underlying WebDriver instance. To facilitate this, methods on the PageObject should return other PageObjects. This means that we can effectively model the user's journey through our application. It also means that should the way that pages relate to one another change (like when the login page asks the user to change their password the first time they log into a service, when it previously didn't do that) simply changing the appropriate method's signature will cause the tests to fail to compile. Put another way, we can tell which tests would fail without needing to run them when we change the relationship between pages and reflect this in the PageObjects.

  • The tests, not the PageObjects, should be responsible for making assertions about the state of a page.
  • A PageObject need not represent an entire page. It may represent a section that appears many times within a site or page, such as site navigation. The essential principle is that there is only one place in your test suite with knowledge of the structure of the HTML of a particular (part of a) page.
  • Summary:
    • The public methods represent the services that the page offers
    • Try not to expose the internals of the page
    • Generally don't make assertions
    • Methods return other PageObjects
    • Need not represent an entire page
    • Different results for the same action are modelled as different methods
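
A minimal Page Object sketch reflecting the summary above; the element IDs and the idea of handing in the functional test (for its browser) are assumptions:

    class ListPage:
        def __init__(self, test):
            self.test = test  # the functional test owns the browser (and the assertions)

        def get_table_rows(self):
            return self.test.browser.find_elements_by_css_selector('#id_list_table tr')

        def add_list_item(self, item_text):
            textbox = self.test.browser.find_element_by_id('id_text')
            textbox.send_keys(item_text)
            textbox.send_keys('\n')
            return self  # returning Page Objects models the user's journey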

Other references

What to expect from tests (Functional, Integrated, Isolated)

  • Correctness
    • Do I have enough functional tests to reassure myself that my application really works, from the point of view of the user?
    • Am I testing all the edge cases thoroughly? This feels like a job for low-level, isolated tests.
    • Do I have tests that check whether all my components fit together properly? Could some integrated tests do this, or are functional tests enough?
  • Clean, maintainable code
    • Are my tests giving me the confidence to refactor my code, fearlessly and frequently?
    • Are my tests helping me to drive out a good design? If I have a lot of integrated tests and few isolated tests, are there any parts of my application where putting in the effort to write more isolated tests would give me better feedback about my design?
  • Productive Workflow
    • Are my feedback cycles as fast as I would like them? When do I get warned about bugs, and is there any practical way to make that happen sooner?
    • If I have a lot of high-level, functional tests, that take a long time to run, and I have to wait overnight to get feedback about accidental regressions, is there some way I could write some faster tests, integrated tests perhaps, that would get me feedback quicker?
    • Can I run a subset of the full test suite when I need to?
    • Am I spending too much time waiting for tests to run, and thus less time in a productive flow state?

Fixtures

  • Test fixtures are test data that needs to be set up as a precondition before a test is run. Often this means populating the database with some information, but it isn't limited to that.
  • Avoid JSON fixtures, as they're painful to manage when your database schema changes. Use the ORM, or a tool like factory_boy (see the sketch after this list).
  • When you need to use fixtures, use an in-memory database (SQLite3):
    • In settings.py:
    import sys

    if 'test' in sys.argv:
        # test with an in-memory SQLite database:
        DATABASES['default'] = {'ENGINE': 'django.db.backends.sqlite3'}
  • Fixtures also have to work remotely. Django management commands might be a solution (be careful to not mess with production data!)
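
A hedged factory_boy sketch as an alternative to JSON fixtures, per the bullet above; the Item model and its text field are assumptions:

    import factory

    from lists.models import Item  # hypothetical model


    class ItemFactory(factory.django.DjangoModelFactory):
        class Meta:
            model = Item

        text = factory.Sequence(lambda n: 'item {}'.format(n))

    # usage in a test:
    #   item = ItemFactory()        # created and saved via the ORM
    #   item = ItemFactory.build()  # in-memory only, not saved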

Coverage

  • Coverage.py is a tool for measuring code coverage of Python programs. It monitors your program, noting which parts of the code have been executed, then analyzes the source to identify code that could have been executed but was not.
  • Coverage measurement is typically used to gauge the effectiveness of tests. It can show which parts of your code are being exercised by tests, and which are not.
  • Django can be easily integrated with coverage.py, a tool for measuring code coverage of Python programs. First, install coverage.py. Next, run the following from your project folder containing manage.py: coverage run --source='.' manage.py test myapp
  • This runs your tests and collects coverage data of the executed files in your project. You can see a report of this data by typing the following command: coverage report