Time-based comparisons of WebPageTest results
WebPageTest result data is not ideal for making historical comparisons. Typically you are dealing with small sample sizes, which are far more prone to skew from natural network variation than RUM data is.
Nonetheless, sometimes you want to compare synthetic results from vastly different points in time. To draw those comparisons more fairly, I’ve modified webpagetest-mapper to express performance metrics as a function of round-trip time. This way, the numbers are reported within the context of network performance at each point in time, lending the comparisons more legitimacy.
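To make the idea concrete, here is one simple way to normalise a metric against round-trip time: divide the measurement by the RTT recorded for the test run. This is an illustration of the general approach only, not necessarily the exact calculation webpagetest-mapper performs, and the figures are invented:

```shell
# Hypothetical figures: an 870 ms time-to-first-byte over a 29 ms RTT.
first_byte_ms=870
rtt_ms=29

# Express the metric as a multiple of the round-trip time, so results
# from fast and slow network conditions become more comparable.
ratio=$(awk -v m="$first_byte_ms" -v r="$rtt_ms" 'BEGIN { printf "%.1f", m / r }')
echo "$ratio round trips"
```

Reported this way, a first byte that arrives after 30 round trips is slow regardless of whether the underlying network happened to be quick or sluggish on the day of the test.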
How to run the comparison
First you need to install webpagetest-mapper. It runs on Node.js so, if you have that set up, run the following command:
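A typical global install from npm would look like this (the package name matches the project, but check the project README if your setup differs):

```shell
# Install the webpagetest-mapper CLI globally via npm.
npm install -g webpagetest-mapper
```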
Before you can generate the comparison report, you need to get your result ids from WebPageTest. The result id is a short, alphanumeric, underscore-separated string, available from the test history section of WebPageTest. It is also the path component that appears after /results/ in the result URL.
For my example, I’ve chosen result ids for tests of a Nature article, one run recently and one a year ago:
When you have the result ids, you can pass as many as you like to webpagetest-mapper as a comma-separated list using the -i option:
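A sketch of the invocation; note that the binary name (`wptmap` here) and the result ids are my assumptions, so check the webpagetest-mapper README for the exact command and substitute your own ids:

```shell
# Placeholder result ids; replace them with your own from the test history page.
# The binary name may differ; consult the webpagetest-mapper documentation.
wptmap -i 160104_AB_abc123,150105_CD_def456 -o comparison.html
```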
This command will write an HTML document that compares the relative performance of the tests to the file specified by the -o option.
Tidying the output
If you open the output file in a browser, you’ll see some tables and charts that highlight various aspects of the result data. A few manual tweaks can make the document convey that information more clearly. Open the file in your editor of choice and make the following changes:
- I find it helpful to distinguish between historical and recent tests using colour. For each row or bar of historical data, replace the class table-row-home with table-row-away and change chart-bar-home to chart-bar-away. This will make the old results stand out in blue while the recent ones remain red. Of course, you can also edit the CSS to use entirely different colours if you like.
- By default, tests are identified by their label, or by the result id if there is no label. To make the report more user-friendly, do a global search-and-replace and change them to something more meaningful.
- It’s likely that some of the charts or tables will be superfluous to whatever point you wish to make in your analysis. Keep the report focused by deleting any that are not interesting.
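If you prefer to script the class swap rather than edit by hand, a minimal sketch follows. It assumes the historical rows can be identified by their label (“2015” here); the sample rows are a stand-in for webpagetest-mapper’s real markup, so adapt the sed address to match your own report:

```shell
# Two sample table rows; in practice this would be the generated report.
rows='<tr class="table-row-home"><td>Jan 2016</td></tr>
<tr class="table-row-home"><td>Jan 2015</td></tr>'

# On lines matching the historical label, swap the "home" classes for "away".
tidied=$(printf '%s\n' "$rows" |
  sed '/2015/ { s/table-row-home/table-row-away/g; s/chart-bar-home/chart-bar-away/g; }')
printf '%s\n' "$tidied"
```

Restricting the substitution with an address (`/2015/`) matters: a blanket search-and-replace would also recolour the recent rows.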
Save these changes and refresh the page. The charts should now be much clearer, like the following:
At this point, you may want to go even further and write some analysis copy around the generated content. At Nature, we generate these reports every quarter and circulate them within the department. We’ve found them useful for bringing wider awareness to web performance issues that might otherwise be ignored.