Phoronix Test Suite Cheat Sheet

Phoronix-Test-Suite

Installing

$ brew install phoronix-test-suite

I used this command because I have Homebrew installed on my Mac (I'm sure this could be done with MacPorts too). It downloaded version 5.0.1 (although 5.2 is already out and 5.4 is in development).

Adding A Shortcut

I found out that every time you run a command, you have to prefix it with "phoronix-test-suite":

$ phoronix-test-suite list-tests 

So, yesterday, I added an alias to the .bashrc file (.bash_profile on my Mac); now every time we want to say "phoronix-test-suite" we just type "p" and it does the same thing:

echo "alias p='phoronix-test-suite'" >> ~/.bashrc

$ p list-tests 

Process

  • test installation
  • test execution
  • reporting

Features

  • Can collect data on
    • system power consumption
    • disk usage
    • software/hardware sensors
  • Can run new tests whenever a git commit has been pushed

FAQ

(Questions which I frequently ask myself)

  • Q: What's the difference between tests and suites?
    • A: Suites are made up of multiple tests.
  • Q: What's the difference between testing and benchmarking?
    • A: Not entirely sure. In practice, p benchmark installs and runs a test in one step, and the uploaded results let you compare your hardware's numbers against other systems on OpenBenchmarking.org.

Running Tests/Suites

The Hard Way

Install the test/suite you want to run (this also installs its dependencies, so the p install-dependencies command seems mostly redundant):

$ p install chess

Run the test:

$ p run chess
...
Results Uploaded To: http://openbenchmarking.org/result/1406243-PL-CHESSRESU04

If you only type $ p run chess without having the dependencies installed, Phoronix will install the dependencies for you, and then exit the process. Then, simply type $ p run chess again to actually run the test.

The Easy Way

To install and run:

$ p benchmark chess
...
Results Uploaded To: http://openbenchmarking.org/result/1406243-PL-CHESSRESU04

Removing Tests

$ p remove-installed-test apitest

Note: You cannot remove entire suites with one command: you must uninstall each individual test.

Batch Mode

According to what I've read online, this is a way of running a suite so that no questions are asked in the terminal during execution.

After configuring batch-setup, you can use p batch-benchmark instead of benchmark to use batch mode, and batch-run instead of run.

$ p batch-setup

These are the default configuration options for when running the Phoronix Test Suite in a batch mode (i.e. running phoronix-test-suite batch-benchmark universe). Running in a batch mode is designed to be as autonomous as possible, except for where you'd like any end-user interaction.

    Save test results when in batch mode (Y/n): Y
    Open the web browser automatically when in batch mode (y/N): N
    Auto upload the results to OpenBenchmarking.org (Y/n): n
    Prompt for test identifier (Y/n): Y
    Prompt for test description (Y/n): Y
    Prompt for saved results file-name (Y/n): n
    Run all test options (Y/n): Y

Batch settings saved.

These settings are then written to the user configuration file at ~/.phoronix-test-suite/user-config.xml
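For reference, the batch section of that file should look roughly like the following. The element names here are my best guess from a PTS 5.x install and may differ between versions, so treat them as illustrative:

```xml
<!-- Illustrative sketch of the BatchMode section of user-config.xml,
     mirroring the batch-setup answers above; element names may vary -->
<BatchMode>
  <SaveResults>TRUE</SaveResults>
  <OpenBrowser>FALSE</OpenBrowser>
  <UploadResults>FALSE</UploadResults>
  <PromptForTestIdentifier>TRUE</PromptForTestIdentifier>
  <PromptForTestDescription>TRUE</PromptForTestDescription>
  <PromptSaveName>FALSE</PromptSaveName>
  <RunAllTestCombinations>TRUE</RunAllTestCombinations>
  <Configured>TRUE</Configured>
</BatchMode>
```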

Since you won't be prompted for test options, saying yes to "Run all test options" makes runs considerably longer: every option is exercised instead of just the one you would have picked if prompted (e.g. for cyclictest you would then run 4 tests instead of 1).

Also, answering "n" to "Prompt for test identifier" is where ugly test names come from: the name defaults to the timestamp of when the run started, instead of something like "javaresults" or "openssl-name" as I used to do. These names become the directory names within the ~/.phoronix-test-suite/test-results directory.

Testing System Processes

First, install php-pcntl (it must be installed for the system_monitor module to work).

Then run a test, monitoring all sensors supported by your system (to find out which are supported, run p system-sensors):

MONITOR=all p benchmark c-ray

Or, for specific sensors:

MONITOR=cpu.temp,cpu.voltage p benchmark c-ray

For Tests Compatible with Your OS

(and the ones most popular within your OS)

p list-recommended-tests

To Save Results to PDF

  1. Install fpdf
  2. Convert results file to pdf:
    • Ex: if results are saved under ~/.phoronix-test-suite/test-results/mytest, run

      p result-file-to-pdf mytest
      

Overriding Standard Deviation

Find the line in ~/.phoronix-test-suite/user-config.xml which states

<DynamicRunCount>TRUE</DynamicRunCount>

and change it to

<DynamicRunCount>FALSE</DynamicRunCount>
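If you'd rather script this change than edit the file by hand, a sed substitution works. The snippet below demonstrates it on a throwaway copy; point cfg at ~/.phoronix-test-suite/user-config.xml to apply it for real (writing to a temp file and moving it into place avoids sed -i portability quirks on macOS):

```shell
# Demo on a throwaway file; set cfg to ~/.phoronix-test-suite/user-config.xml
# to change the real config.
cfg="demo-user-config.xml"
printf '<DynamicRunCount>TRUE</DynamicRunCount>\n' > "$cfg"

# Match the whole element so other TRUE/FALSE settings are left alone.
sed 's|<DynamicRunCount>TRUE</DynamicRunCount>|<DynamicRunCount>FALSE</DynamicRunCount>|' \
  "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"

cat "$cfg"   # <DynamicRunCount>FALSE</DynamicRunCount>
```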

According to https://github.com/phoronix-test-suite/phoronix-test-suite/blob/master/pts-core/static/xsl/pts-user-config-viewer.xsl

If this option is set to TRUE, the Phoronix Test Suite will automatically increase the number of times a test is to be run if the standard deviation of the test results exceeds a predefined threshold. This option is set to TRUE by default and is designed to ensure the statistical significance of the test results. The run count will increase until the standard deviation falls below the threshold or when the total number of run counts exceeds twice the amount that is set to run by default from the given test profile. Under certain conditions the run count may also increase further.

Viewing Local Files

All files relating to your phoronix-test-suite are stored locally in ~/.phoronix-test-suite.

Directories include:

  • installed-tests
  • test-profiles
  • test-results (This contains the summary xml file, as well as SVGs and other useful data)
  • test-suites

Configuration files can be found in this folder as well.

After a benchmark, responding "Y" to "Do you want to view the results in your web browser" takes me to ~/.phoronix-test-suite/test-results/javaresults/composite.xml (which appears blank, but shows the XML data if you "view source" in Chrome).

$ p benchmark java
...
    Do you want to view the results in your web browser (Y/n): Y
    Would you like to upload the results to OpenBenchmarking.org (Y/n): Y
    Would you like to attach the system logs (lspci, dmesg, lsusb, etc) to the test result (Y/n): Y

Results Uploaded To: http://openbenchmarking.org/result/1406246-PL-JAVARESUL09
    Do you want to launch OpenBenchmarking.org (Y/n): Y

Creating Your Own Test

Test profiles are written with XML and shell scripts.

Comparing Results

Running this finds the result on OpenBenchmarking.org with the ID 1406243-PL-MACBOOKPR84, runs the same suite on your hardware, and then creates a side-by-side comparison of the existing data and your newly gathered data.

$ p benchmark 1406243-PL-MACBOOKPR84

This suite has 6 tests, but only 2 of them are supported on my system, so only those 2 will run.

Phoronix Test Suite v5.0.1

    Not Supported: pts/iozone-1.8.0
    Not Supported: pts/gputest-1.3.0
    Installed: pts/stream-1.2.0
    Not Supported: pts/network-loopback-1.0.1
    Installed: pts/c-ray-1.1.0
    Not Supported: pts/apache-1.6.1
    

Stream failed to run properly, so only results for the "c-ray" test can be compared. Comparison results are available at http://openbenchmarking.org/result/1406244-PL-1406243PL78. You'll see that it ran in 130 seconds on my system, but 133 on the existing system.

Getting All Possible Commands

$ p

TEST INSTALLATION

   install [Test | Suite | OpenBenchmarking.org ID | Test Result]  ...
   install-dependencies [Test | Suite | OpenBenchmarking.org ID | Test Result]  ...
   make-download-cache
   remove-installed-test [Test]

TESTING

   auto-compare
   benchmark [Test | Suite | OpenBenchmarking.org ID | Test Result]  ...
   
//and so on

Getting All Possible Tests/Suites

(the third column shows you the part of the system it tests)

Tests:

$ p list-tests

pts/apitest                    - APITest                             Graphics
pts/blake2                     - BLAKE2                              Processor

//and so on

Suites

$ p list-available-suites

Phoronix Test Suite v5.0.1
Available Suites

* pts/chess                        - Chess Test Suite                 Processor
* pts/compilation                  - Timed Code Compilation           Processor
* pts/compiler                     - Compiler                         Processor

///and so on

Getting Info on Tests/Suites

For a test, the info output is useful: it shows the download size, estimated run time, and whether the test is installed:

$ p info apitest


Phoronix Test Suite v5.0.1
APITest 2014-06-01

Run Identifier: pts/apitest-1.0.2
Profile Version: 1.0.2
Maintainer: Michael Larabel
Test Type: Graphics
Software Type: Utility
License Type: Free
Test Status: Verified
Project Web-Site: http://github.com/nvMcJohn/apitest
Estimated Run-Time: 1239 Seconds
Download Size: 22.7 MB
Environment Size: 225 MB

Description: APITest is a micro-benchmark developed by John McDonald of OpenGL 4 functionality.

Test Installed: No

Software Dependencies:
- Compiler / Development Libraries

For a suite, such as this one, the info output shows the number of unique tests, and the last section lists the tests that make up the suite:

$ p info chess

Phoronix Test Suite v5.0.1
Chess Test Suite

Run Identifier: pts/chess-1.0.0
Suite Version: 1.0.0
Maintainer: Michael Larabel
Suite Type: Processor
Unique Tests: 2
Suite Description: This test suite contains tests that are various benchmarks looking at the CPU's performance through different AI algorithms for a game of chess.

pts/chess-1.0.0
  * pts/crafty
  * pts/tscp

Running Tests (What Do the Prompts Mean?)

Enter a name to save these results under: monitored_cray

This shows up as the name of the directory in ~/.phoronix-test-suite/test-results/monitoredcray (non-alphanumeric characters are automatically removed)

It also shows up as the title of the test when results are uploaded: http://openbenchmarking.org/result/1406307-KH-MONITORED39
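The stripping of non-alphanumeric characters can be mimicked with tr (my own illustration, not the actual PTS code):

```shell
# Keep only alphanumeric characters, as PTS appears to do with result names.
printf 'monitored_cray' | tr -cd '[:alnum:]'
# prints: monitoredcray
```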

Enter a unique name to describe this test run / configuration: cray-monitor

This shows up as the name next to the blue bar (ex: scroll to bottom of http://openbenchmarking.org/result/1406307-KH-MONITORED39), to be used to identify this run when comparing multiple different runs (e.g. if you ran p benchmark [id] to see results side-by-side; see Comparing Results)

If you save a new test run under a previously used name but give it a unique run name here, both runs end up in the same composite.xml file, each labeled with its own run name next to the blue bars.

Running Suites (What Do the Prompts Mean?)

What does it all mean? Here's an example.

I have just begun to run a suite of 2 tests: stream and c-ray. stream will run 10 trials and take about 2 minutes; at the start, the estimated time to completion for the whole suite is 13 minutes.

Then the next test, c-ray, begins. It consists of 3 trials and will take 11 minutes; since it is the last test, there are also 11 minutes until the suite completes. The quantity measured by this test is execution time, so each trial records the number of seconds taken, and the average is calculated.

Stream 2013-01-17:
    pts/stream-1.2.0 [Type: Copy]
    Test 1 of 2
    Estimated Trial Run Count:    10
    Estimated Test Run-Time:      2 Minutes
    Estimated Time To Completion: 13 Minutes
        Started Run 1 @ 17:27:32
        The test exited with a non-zero exit status.
        ...
        Started Run 10 @ 17:27:50
        The test exited with a non-zero exit status.

    Test Results:

    Average: 0 MB/s
    This test failed to run properly.


C-Ray 1.1:
    pts/c-ray-1.1.0 [Total Time]
    Test 2 of 2
    Estimated Trial Run Count:    3
    Estimated Time To Completion: 11 Minutes
        Started Run 1 @ 17:27:57
        Started Run 2 @ 17:30:09
        Started Run 3 @ 17:32:21  [Std. Dev: 1.38%]

    Test Results:
        130.06
        128.656
        132.227

    Average: 130.31 Seconds


If Upload Fails

(e.g. no internet connection at time of test completion)

p upload-result mytest

where "mytest" is the name of the test run as saved in the ~/.phoronix-test-suite/test-results directory.

Finishing an Incomplete Test

$ p finish-run ???

This seems useful, but I can't figure out what to supply: I try supplying the active.xml files, but it says that $ p finish-run active.xml and $ p finish-run active and $ p finish-run java2 are incorrect syntax.

When I supply the name of a completed run, however, like $ p finish-run javaresults, it says It appears that there are no incomplete test results within this saved file. I'm super confused.
