Vala Testing Framework input

Framework Input

Gherkin

Gherkin is a language for structuring human language in a way that allows business analysts, developers and testers to define features of an application.

Some languages have tools that let developers convert Gherkin into an outline of a computer program. The following (deliberately meaningless) example shows the structure; it uses PHP's Behat to convert Gherkin to PHP:

    Feature: test

      Background: given this is a test    # features/test.feature:3

      Scenario: testing of app            # features/test.feature:5
        When i run something
        Then it passes

    1 scenario (1 undefined)
    2 steps (2 undefined)
    0m0.17s (9.41Mb)

    --- FeatureContext has missing steps. Define them with these snippets:

        /**
         * @When i run something
         */
        public function iRunSomething()
        {
            throw new PendingException();
        }

        /**
         * @Then it passes
         */
        public function itPasses()
        {
            throw new PendingException();
        }

It is the feature context that forms the basis for generating automated acceptance tests from the features specified in Gherkin. The developer then fills in the gaps with code that drives the tests; in PHP web development this is done with a tool like Mink, which can drive various headless web browsers. A hypothetical Vala counterpart to the generated snippets is sketched below.
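A Vala framework could generate equivalent step skeletons. The sketch below is purely hypothetical: FeatureContext, PendingError and the step comments are invented for illustration, since no such Vala Gherkin tool exists yet.

    // Hypothetical output of a Vala Gherkin tool, mirroring the Behat snippets above.
    public errordomain PendingError {
        PENDING
    }

    public class FeatureContext : Object {

        // @When i run something
        public void i_run_something () throws PendingError {
            throw new PendingError.PENDING ("step not yet implemented");
        }

        // @Then it passes
        public void it_passes () throws PendingError {
            throw new PendingError.PENDING ("step not yet implemented");
        }
    }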

Automatic code generation for Vala

I can think of two approaches to generating code. I have tried neither.

The first is to use libvala to build a Vala AST and then output code from that AST. Potentially libvala could also be modified to generate a Genie version of the AST. This approach appeals to me; a rough sketch follows.
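A minimal sketch of that first approach, assuming the libvala API of the 0.30 era (the package version and the recursion details are my assumptions; I have not tried this):

    // Walk a parsed Vala source file with a CodeVisitor; a generator would
    // emit test or step skeletons instead of printing. Build with something like:
    //   valac --pkg libvala-0.30 gherkin-gen.vala   (package version is a guess)
    class StepVisitor : Vala.CodeVisitor {
        public override void visit_source_file (Vala.SourceFile file) {
            file.accept_children (this);
        }
        public override void visit_namespace (Vala.Namespace ns) {
            ns.accept_children (this);
        }
        public override void visit_class (Vala.Class cl) {
            cl.accept_children (this);
        }
        public override void visit_method (Vala.Method m) {
            // Here the generator would emit a Vala (or Genie) skeleton per method.
            stdout.printf ("found method: %s\n", m.name);
        }
    }

    int main (string[] args) {
        var context = new Vala.CodeContext ();
        Vala.CodeContext.push (context);
        context.profile = Vala.Profile.GOBJECT;

        // args[1] is the source file to inspect; no error handling in this sketch.
        context.add_source_file (
            new Vala.SourceFile (context, Vala.SourceFileType.SOURCE, args[1]));

        new Vala.Parser ().parse (context);
        context.accept (new StepVisitor ());

        Vala.CodeContext.pop ();
        return 0;
    }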

The other approach would be similar to Valadoc: https://git.gnome.org/browse/valadoc/tree/src/libvaladoc/api/formalparameter.vala#n131

Code generation could also be useful for anyone wanting to develop a tool similar to RSpec. So a common approach using libvala may be helpful. I think Anjuta CTags also uses libvala for autosuggestion of function names etc. See: https://github.com/GNOME/anjuta/blob/master/plugins/symbol-db/anjuta-tags/ctags-visitor.vala

Acceptance Testing Drivers

This is probably the hardest part given the wide range of interfaces available.

Gherkin comes from Cucumber, which is written in Ruby with web application development in mind, so I think most of the tooling there drives web interfaces. In the Vala world this could be done with libsoup for a text-level analysis of the web interface, or by embedding WebKit or Gecko, which would also allow JavaScript to be tested.
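For the libsoup route, a test could fetch a page and assert on its text. A minimal sketch, assuming libsoup 2.4 and a hypothetical application serving http://localhost:8080/ with "Welcome" somewhere on its front page:

    // Build with: valac --pkg libsoup-2.4 web_test.vala
    int main (string[] args) {
        Test.init (ref args);

        Test.add_func ("/web/front-page", () => {
            var session = new Soup.Session ();
            var message = new Soup.Message ("GET", "http://localhost:8080/");
            session.send_message (message);

            assert (message.status_code == 200);
            // Crude text-level check; assumes the response body is plain text/HTML.
            assert ("Welcome" in (string) message.response_body.data);
        });

        return Test.run ();
    }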

Vala is often used for desktop GUI development. So Linux Desktop Testing Project ( http://ldtp.freedesktop.org/wiki/ ) using Assistive Technology Service Provider Interface (https://en.wikipedia.org/wiki/Assistive_Technology_Service_Provider_Interface) may be relevant.

Of course software is also developed for technical users. So there are potentially command line interfaces, D-Bus interfaces, shared library interfaces and so on to cater for.
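For a D-Bus interface, GLib's GTestDBus can spin up an isolated session bus so tests never touch the user's real bus. A sketch, where the service name org.example.MyService and the tests/services directory are made-up placeholders:

    // Build with: valac --pkg gio-2.0 dbus_test.vala
    int main (string[] args) {
        Test.init (ref args);

        Test.add_func ("/dbus/ping", () => {
            var bus = new TestDBus (TestDBusFlags.NONE);
            bus.add_service_dir ("tests/services");  // .service files for the daemon under test
            bus.up ();                               // sets DBUS_SESSION_BUS_ADDRESS for this process
            try {
                var connection = Bus.get_sync (BusType.SESSION);
                var reply = connection.call_sync (
                    "org.example.MyService", "/org/example/MyService",
                    "org.freedesktop.DBus.Peer", "Ping",
                    null, null, DBusCallFlags.NONE, -1);
                assert (reply != null);
            } catch (Error e) {
                assert_not_reached ();
            }
            bus.down ();
        });

        return Test.run ();
    }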

For command line interfaces I'm starting to think GLib's trap_subprocess may be useful: http://valadoc.org/#!api=glib-2.0/GLib.Test.trap_subprocess

I'm trying to write functional tests for Genie, but some features need to stop the compilation process, e.g. attempting to override a protected method in a class. The test should trap the error from valac and make sure it matches the expected error.
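A sketch of how that could look with trap_subprocess: the test re-runs itself in a subprocess, the subprocess invokes valac on an offending Genie file and forwards the compiler's stderr, and the parent asserts on the outcome. The file name override_protected.gs and the "*error*" pattern are placeholders:

    int main (string[] args) {
        Test.init (ref args);

        Test.add_func ("/genie/protected-override-rejected", () => {
            if (Test.subprocess ()) {
                // Child process: run valac and forward its stderr and exit status.
                string compiler_stdout, compiler_stderr;
                int status;
                try {
                    Process.spawn_command_line_sync ("valac override_protected.gs",
                                                     out compiler_stdout,
                                                     out compiler_stderr,
                                                     out status);
                } catch (SpawnError e) {
                    Process.exit (99);
                }
                printerr ("%s", compiler_stderr);
                // status is the raw wait status: non-zero means valac failed.
                // Exit successfully only if valac rejected the source as expected.
                Process.exit (status != 0 ? 0 : 1);
            }

            // Parent: run the child above and check its result and stderr.
            Test.trap_subprocess ("/genie/protected-override-rejected", 0,
                                  (TestSubprocessFlags) 0);
            Test.trap_assert_passed ();
            Test.trap_assert_stderr ("*error*");
        });

        return Test.run ();
    }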

Test Output

This is probably moving away from specification by example with Gherkin and towards unit testing. Specifically, it's about my experience with the GLib testing framework's reports.

Each test binary can output its results as TAP (Test Anything Protocol) by using the --tap switch. I would recommend TAP because it is easily processed by a lot of tools, e.g. the Jenkins build server.
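A minimal GLib-based test binary in Vala; once built, running it with the --tap switch makes it emit TAP instead of the default output (the test path and assertion are just placeholders):

    // Build with: valac simple_test.vala
    // Run with:   ./simple_test --tap
    int main (string[] args) {
        Test.init (ref args);

        Test.add_func ("/math/addition", () => {
            assert (1 + 1 == 2);
        });

        return Test.run ();
    }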

In the past I have done unit tests using a script such as:

    #!/bin/sh

    tests="arrays_multiline variables_declarations variables_type_inference"

    # Build test binaries
    for test in $tests
    do
        valac $test.gs
    done

    # Run test binaries
    gtester --keep-going -o results.xml $tests

    # Fix missing <info> section from results - see
    # https://bugzilla.gnome.org/show_bug.cgi?id=668035
    info="\n<info>\n<package>Unknown</package>\n<version>Unknown</version>\n</info>\n"
    sed -i 's|<testbinary \(.*\)>|<testbinary \1>'$info'|g' results.xml

    # Generate HTML report
    gtester-report results.xml > report.html

I have not figured out a convenient way to get the output as TAP, although a script for automake is available: https://git.gnome.org/browse/glib/tree/tap-driver.sh

Unit Test of Binaries

Vala and Genie produce binaries of course.

So far I have produced a test binary that is a Position Independent Executable (https://securityblog.redhat.com/2012/11/28/position-independent-executables-pie/) with all the symbols in the dynamic table. This allows each unit test to treat the test binary as a shared object, so it can test individual functions, while the binary can also be run as an executable for functional testing.
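A sketch of the shared-object side of this, using GModule to load the PIE binary and resolve one of its symbols. The binary name ./myapp, the build flags and the exported function my_app_get_version are all assumptions for illustration:

    // The binary under test would be built with something like:
    //   valac -X -fPIE -X -pie -X -rdynamic myapp.vala
    // Build this test with: valac --pkg gmodule-2.0 load_test.vala

    [CCode (has_target = false)]
    delegate int VersionFunc ();

    int main (string[] args) {
        Test.init (ref args);

        Test.add_func ("/binary/exported-symbol", () => {
            // Open the executable as if it were a plugin / shared object.
            var module = Module.open ("./myapp", ModuleFlags.BIND_LAZY);
            assert (module != null);

            void* symbol;
            assert (module.symbol ("my_app_get_version", out symbol));

            var get_version = (VersionFunc) symbol;
            assert (get_version () >= 1);
        });

        return Test.run ();
    }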

I haven't figured out how to do this for a production binary.

This would probably be a static build, so the production binary has the unit tests aggregated after it in memory, and the linker should use the symbol table rather than the dynamic symbol table.

For a test binary you may also want to pass the --debug flag to valac so that the Vala file names are included. I think this was tried for code coverage reports with Valum - see https://coveralls.io/builds/4147345 for example. I'm not sure of the results.

Vala Bugs to be Aware Of

May be of interest:

https://bugzilla.gnome.org/show_bug.cgi?id=704072
https://bugzilla.gnome.org/show_bug.cgi?id=597999
https://bugzilla.gnome.org/show_bug.cgi?id=739725

Property based testing with QuickCheck

https://en.wikipedia.org/wiki/QuickCheck

Java implementation - https://github.com/pholser/junit-quickcheck/ (MIT Licence)
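There is no QuickCheck port for Vala that I know of, but the basic idea can be hand-rolled with GLib's test random numbers: generate many random inputs and assert a property over all of them. A sketch (the property and the 100 iterations are arbitrary choices):

    int main (string[] args) {
        Test.init (ref args);

        Test.add_func ("/property/addition-commutes", () => {
            for (int i = 0; i < 100; i++) {
                int32 a = Test.rand_int_range (-1000, 1000);
                int32 b = Test.rand_int_range (-1000, 1000);
                // The property under test: addition is commutative.
                assert (a + b == b + a);
            }
        });

        return Test.run ();
    }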

Other Unit Testing frameworks for comparison

There is the XUnit (as opposed to xUnit) common architecture, which stipulates that an "xUnit" framework should implement the following components:

Test runner

A test runner is an executable program that runs tests implemented using an xUnit framework and reports the test results.[2]

Test case

A test case is the most elemental class. All unit tests are inherited from here.

Test fixtures

A test fixture (also known as a test context) is the set of preconditions or state needed to run a test. The developer should set up a known good state before the tests, and return to the original state after the tests.

Test suites

A test suite is a set of tests that all share the same fixture. The order of the tests shouldn't matter.

Test execution

The execution of an individual unit test proceeds as follows:

    setup();
    /* First, we should prepare our 'world' to make an isolated environment for testing */
    ...
    /* Body of test - Here we make all the tests */
    ...
    teardown();
    /* At the end, whether we succeed or fail, we should clean up our 'world' to
       not disturb other tests or code */

The setup() and teardown() methods serve to initialize and clean up test fixtures.
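GLib's g_test_add() fixture helpers are not straightforward to use from Vala (as far as I know), so a simple pattern is to call setup and teardown inside the test function itself. A minimal sketch, using a temporary directory as the fixture:

    string setup () {
        // Prepare an isolated 'world': a private temporary directory.
        string dir = ".";
        try {
            dir = DirUtils.make_tmp ("vala-test-XXXXXX");
        } catch (Error e) {
            assert_not_reached ();
        }
        return dir;
    }

    void teardown (string dir) {
        // Clean up so other tests and code are not disturbed.
        DirUtils.remove (dir);
    }

    int main (string[] args) {
        Test.init (ref args);

        Test.add_func ("/fixture/example", () => {
            string dir = setup ();
            assert (FileUtils.test (dir, FileTest.IS_DIR));
            teardown (dir);
        });

        return Test.run ();
    }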

Test result formatter

A test runner produces results in one or more output formats. In addition to a plain, human-readable format, there is often a test result formatter that produces XML output. The XML test result format originated with JUnit but is also used by some other xUnit testing frameworks and by build tools such as Jenkins and Atlassian Bamboo.

Assertions

An assertion is a function or macro that verifies the behavior (or the state) of the unit under test. Usually an assertion expresses a logical condition that is true for results expected in a correctly running system under test (SUT). Failure of an assertion typically throws an exception, aborting the execution of the current test.
