Response API Demo App

Are you looking to develop your own application on top of the Bazaarvoice Response API? Well, we've got something for you. The Response API Demo App is a simple Node-React application that demonstrates how to use the Response API in conjunction with our 3-legged OAuth2 API. We recommend going through the Developer Portal and reading about the OAuth2 API and the Response API before diving into the application architecture below.

This application was bootstrapped with Create React App and consists of two separate components – the front-end client side in React and a back-end server side in NodeJS.

Let’s talk more about the back-end architecture first. Almost all of the server-side logic is contained in server.js. Using the Express framework for NodeJS, this file defines the following endpoints:

  • /api/redirect – This endpoint handles incoming redirects from OAuth2 and obtains an access token using either an authorization code (on a first-time login) or a refresh token (when the existing access token has expired or is about to expire). If you are building an app integrated with OAuth2, you must define an endpoint that handles redirects in a similar way.
  • /api/check-login – This is an application-specific endpoint that lets the client side ask the server whether a user is logged in, so that users who are not can be taken to the login page.
  • Abstraction of GET, POST, PATCH, DELETE calls to the Response API – These endpoints mirror the endpoints provided by the Response API, which are documented on the Developer Portal. This abstraction lets the client side call these secured endpoints indirectly without exposing any of the OAuth2 credentials through the user's browser; all the logic for obtaining and refreshing OAuth tokens stays securely on the server side. A minimal sketch of this pattern appears just after this list.
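
To make that pattern concrete, here is a minimal Express sketch of a redirect handler and one proxied GET call. Treat it as an illustrative assumption rather than the demo app's actual code: the token endpoint URL, the RESPONSE_API_BASE constant, the axios dependency, and the /api/responses route shape are placeholders, and it assumes express-session (discussed below) so tokens can stay server side.

const express = require('express');
const axios = require('axios'); // assumed HTTP client, not necessarily what the demo app uses

const app = express();
const RESPONSE_API_BASE = 'https://example-response-api.invalid'; // placeholder base URL

// Handle the OAuth2 redirect: exchange the authorization code for tokens.
app.get('/api/redirect', async (req, res) => {
  // Exact request format depends on the OAuth2 API; this is only the general shape.
  const tokenResponse = await axios.post('https://example-oauth2-host.invalid/token', {
    grant_type: 'authorization_code',
    code: req.query.code,                      // authorization code sent by the OAuth2 provider
    client_id: process.env.CLIENT_ID,
    client_secret: process.env.CLIENT_SECRET,  // never exposed to the browser
    redirect_uri: 'http://localhost:5000/api/redirect',
  });
  req.session.tokens = tokenResponse.data;     // keep tokens server side only
  res.redirect('/');
});

// Proxy a GET to the Response API so the browser never sees the access token.
app.get('/api/responses/:reviewId', async (req, res) => {
  const { data } = await axios.get(
    `${RESPONSE_API_BASE}/responses?reviewId=${req.params.reviewId}`,
    { headers: { Authorization: `Bearer ${req.session.tokens.access_token}` } }
  );
  res.json(data);
});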

This application uses express-session to store and maintain user sessions via browser cookies, which makes it possible to support multiple users without storing state on your backend. However, this implementation of user sessions is not suitable for production applications; in production you should maintain user sessions using a combination of cookies and session storage.
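
For reference, wiring up express-session looks roughly like the following; the secret source and cookie lifetime are illustrative assumptions, not the demo app's actual values.

const session = require('express-session');

app.use(session({
  secret: process.env.SESSION_SECRET,   // keep this value out of source control
  resave: false,
  saveUninitialized: false,
  cookie: { maxAge: 60 * 60 * 1000 },   // one-hour session cookie
}));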

The server-side credentials and configuration are stored in server/server-config.json, which is then picked up by the Node server. Note that these credentials are confidential and should not be exposed to the client side in any way.

Coming to the front-end architecture, we use the React library and Semantic UI React for the UI components. The client side handles everything from querying a review from the Conversations API to using that Review ID to obtain, add, modify, or delete the corresponding responses to that review using the Response API. The following components make up the core of the front end:

  • Login Page – If a user is not logged in (we check this using the /api/check-login endpoint discussed above), we redirect them to this page, which has a little widget that takes them to a Bazaarvoice login page where they enter their Bazaarvoice Portal login credentials. Once they are properly authenticated, the OAuth2 API redirects them to the /api/redirect endpoint, which then handles all the token exchange logic. You can configure this redirect URI to be whatever you want during your app provisioning process, but for the demo application to work as expected it should be http://localhost:5000/api/redirect, as you can see in the config file. All of the OAuth2 login and token exchange happens during this step.
  • Search Page – This page serves as the landing page for the application, where a user can enter a Review ID (this can be obtained using Workbench or the Conversations API) and is then taken to the review page.

  • Review Page – Once the user is authenticated and searches for a valid Review ID, they are taken to this page, which fetches the specified review by calling the Conversations API and then fetches all responses for that review by calling the application's backend, which in turn calls the Response API. On this page, a user can see the contents of a review along with all of its responses, and can add a new response, edit an existing one, or delete a response.

You might have noticed a department field on a response, which appears as a drop-down menu. These menu options are hard-coded in the demo application and can be modified in the departmentFormOptions.js file. Further, client.js is a utility file that presents all the commonly used API calls as simple functions accessible to all components on the client side.

The client-side configuration is stored in the response-demo/client/src/utils/config.js file. This is built into the project when the front-end client is packaged before running. These configurations are accessible to the browser, so you shouldn't store anything confidential here.

Apart from that, the application follows a pretty standard architecture and you can read more about it here. On the deployment side of things, there is a Dockerfile which builds the client artifacts, sets up the server, and gets the application up and running on your local environment. Follow these instructions to get it up and running locally. The app can also be deployed to Flynn by changing the redirect URI in the configuration and making sure that your Flynn redirect URI is added to the list of allowed redirect URIs for your app credentials.

Vger Lets You Boldly Go . . .

Are you working on an agile team? Odds are high that you are. Whether you do Scrum, Kanban, lean, or extreme programming, you are all about getting work done with the least resistance possible. Heck, even if you are still on Waterfall, you care about that. But how well are you doing? Do you know? Is that something a developer or a lead should even worry about, or is it a SEP (somebody else's problem)? That's a trick question. If your team is being held accountable and there is a gap between expectations and your delivery, then, by the transitive property, you should worry about some basic lean metrics.

Here at Bazaarvoice, we are agile and overwhelmingly leverage kanban. Kanban emphasizes the disciplines of flow and continuous improvement. In an effort to make data-driven decisions about our improvements, we needed an easy way to get the relevant data. With just JIRA and GitHub alone, access to the right data has a significant barrier to entry.

So, like any enterprising group of engineers, we built an app for that.

What we did

Some of us had recently gone through an excellent lean metric forecasting workshop with Troy Magennis from Focused Objective. In his training he presented the idea of displaying a quadrant of lean metrics in order to force a narrative for a team's behavior, and to avoid over-driving on a single metric. This really resonated with me and seemed like a good paradigm for the app we wanted to build.

And thus, Vger was born.

We went with a simple quadrant view with very bookmarkable URL parameters. We made it simple for teams to self-service by giving them an interface to make their own "Vger team" and add whatever "Vger boards" they need. Essentially, if you can make a JQL query and give it a board in JIRA, Vger can graph the metrics for it. In the display, we provide a great deal of flexibility by letting teams configure date ranges for the dashboard, the work types to be displayed, and the JIRA board columns to be considered as working/non-working.

Now the barrier to entry for lean metrics is down to “can you open a browser.”  Not too shabby.

The Quadrant View

We show the following in the quadrant view:

1. Throughput – The number of completed tickets per week.

2. Variation – The variation (standard deviation divided by mean, i.e. the coefficient of variation) of the throughput; a short example of this calculation follows the list.

3. Backlog Growth – The number of tickets opened versus closed.

4. Lead Times – The lead times for the completed tickets. This also provides a detailed view by JIRA board column to see where you spend most of your time.
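
As a quick illustration of the variation metric (and not Vger's actual code), the calculation boils down to a few lines of JavaScript:

// Coefficient of variation (standard deviation / mean) for weekly throughput counts.
function throughputVariation(weeklyCounts) {
  const mean = weeklyCounts.reduce((sum, n) => sum + n, 0) / weeklyCounts.length;
  const variance =
    weeklyCounts.reduce((sum, n) => sum + (n - mean) ** 2, 0) / weeklyCounts.length;
  return Math.sqrt(variance) / mean;
}

console.log(throughputVariation([7, 5, 9, 6, 8])); // ~0.20 – fairly steady delivery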

We at Bazaarvoice are conservative gamblers, so you'll see the throughput and lead time quadrants show the 50%, 80%, and 90% likelihoods (the inverse of the percentile). We do this because relying on the average or the mean is not your friend. Who wants to bet on a coin toss? Not us. We like to be right, say, eight out of ten times.

The Quarterly View

Later, we were asked to show throughput by quarter to help with quarterly goal planning. We created a side-car page for this.  It shows Throughput by quarter:

We also built a scatterplot for lead times so outliers could be investigated:

This view has zoomable regions and each point lets you click through to the corresponding JIRA ticket. So that’s nice.

But Wait! Git This….

From day one, we chose to show the same Quadrant for GitHub Pull Requests.

Note that we show rejected and merged lines in the PR Volume quadrant.  We also support overlaying your git tags on both PR and JIRA ticket data.  Pretty sweet!

I Want to Do More

Vger lets you download throughput data from the Quadrant and Quarterly views. You can download lead time data from the Quarterly view as well. This lets teams and individuals perform their own visualizations and investigations on these very useful lean metrics.

But Why?

Vger was built with three use cases in mind:

Teams should be informed in retros

Teams should have easy access to these key lean metrics in their retros. We recommend that they start off viewing the quadrant and seeing if they agree with the narrative the retro facilitator presents. They should also consider the results of any improvement experiments they tried. Did the new behavior make throughput go up as they hoped it would? Did the new behavior reduce time spent in code review? Did it reduce the number of open bugs? Certainly not everything in a retro should be mercilessly data-driven, but data is a key element of a culture of continuous improvement.

Managers should know this data and speak to it

Team managers commonly speak to how their teams are progressing. These discussions should be data-driven and, most importantly, driven by the same data the team has access to (and hopefully runs retros with). It should also be presented in a common format that still provides for some customization. NOTE: you should avoid comparing team to team in Vger or a similar visualization. In most situations, that way lies futility, confusion, and frustration.

We should have data to drive data-driven decisions about the future

Lean forecasting is beyond the scope of this post; however, Troy Magennis has a fine take on it. My short two cents on the matter: a reasonably functioning team with even a little bit of run time should never be asked "how long will it take?" Drop that low-value ritual and do the high-value task of decomposing the work, then forecast with historical data. Conveniently, you can download this historical data from Vger and use it in your spreadsheet of choice. I happen to like Monte Carlo simulations myself.
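
To make that concrete, here is a minimal Monte Carlo sketch (not part of Vger) that replays randomly sampled historical weeks to estimate how long a decomposed batch of work might take; the sample data and the 80th-percentile choice are only for illustration.

// Forecast weeks-to-complete by repeatedly replaying randomly sampled historical weeks.
// Assumes at least some weeks in the history have non-zero throughput.
function forecastWeeks(historicalThroughput, itemsRemaining, trials = 10000) {
  const outcomes = [];
  for (let t = 0; t < trials; t++) {
    let remaining = itemsRemaining;
    let weeks = 0;
    while (remaining > 0) {
      const i = Math.floor(Math.random() * historicalThroughput.length);
      remaining -= historicalThroughput[i];
      weeks++;
    }
    outcomes.push(weeks);
  }
  outcomes.sort((a, b) => a - b);
  // Report the 80th percentile: "done within N weeks, eight times out of ten."
  return outcomes[Math.floor(outcomes.length * 0.8)];
}

console.log(forecastWeeks([7, 5, 9, 6, 8], 40)); // e.g. 6-7 weeks at 80% confidence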

Isn’t This for Kanban?

You'll note I used the term "lean metrics" throughout. I wanted to avoid any knee-jerk "kanban vs. scrum vs. 'how we do things'" reaction. These metrics apply no matter what methodology you consciously (or unconsciously) use for the flow of work through your team. Vger was built with feature development teams in mind, but we had good success when our client implementation team started using it as an early adopter. It allowed them to have a clear view into their lead time details and ferret out how much time was really spent waiting on clients to perform an action on their end.

Cool. How Do I Get a Vger?

We open sourced it here, so help yourself. This is presented as "it worked for us"-ware and is not as finely polished as it could be, so it has some caveats. It is a very simple serverless app. We use JIRA and GitHub, so only those tools are currently supported. If you use something similar, give Vger a try!

What’s Next?

If your fingers are itching to contribute, here are some ideas:

  • Vger’s ETL process could really use an update
  • The Quadrant view UI really needs an update to React to match the Quarterly view
  • Make it flexible for your chosen issue tracker or source control?
  • How about adding a nice Cumulative Flow Diagram?


Still Looking Good While Testing: Automated Testing With a Visual Regression Service Part II

If you’ve followed our blog regularly, you’ve probably read our post on using visual regression testing tools and services to better test your applications’ front end look and feel.  If not, take a few minutes to read through our previous post on this topic.

Now that you’re up to speed, let’s take what we did in our previous post to the next level.

In this post, we're going to show how you can alter your test configuration and code to test mobile browsers through a given testing service (Browserstack), as well as get better reporting out of your tests natively and through options available in Jenkins.

Adjusting the Fuzz Factor:

If you run your tests multiple times across several browsers with the visual regression service using its default settings, you may notice a handful of exception images cropping up in the diff directory which look nearly identical.

browser-fight

Browser fight!!!

If you happen to be testing across several browsers (using Browserstack or a similar service) hosted in real or virtual environments with widely differing screen resolutions, this may impact how the visual-regression service performs its image diff comparison.

To alleviate this, either ensure your various browser hosts are able to display the browser in the same resolution or add a slight increase in your test’s mismatch tolerance.

As mentioned in our previous post, you can do this by editing the values under the ‘visual regression service’ object in your project’s wdio.conf file.  For example:

visualRegression: {
  compare: new VisualRegressionCompare.LocalCompare({
    referenceName: getScreenshotName(path.join(process.cwd(), 'screenshots/reference')),
    screenshotName: getScreenshotName(path.join(process.cwd(), 'screenshots/screen')),
    diffName: getScreenshotName(path.join(process.cwd(), 'screenshots/diff')),
    misMatchTolerance: 0.25,
  }),
},

Setting the mismatch tolerance value to 0.25 in the above snippet would allow the regression service a 25% margin of error when checking screen shots against any reference images it has captured.

Better Test Results:

As also mentioned in the previous post, one of the drawbacks of the examples given for using the visual-regression service in our tests is that there is little feedback returned other than the output of our image comparison.

head-in-sand

Having fun with that lack of feedback?

However, with a bit of extra code, we can make usable assertion statements in our test executions once a screen comparison event is generated.

The key is the checkElement() method, which is really a WebdriverIO method enhanced by the visual-regression service. This method returns an object that contains some metadata about the check being requested for the provided argument.

We can assign the method call to a new variable and, once we serialize the returned object into something readable (e.g. a JSON string), we can leverage Chai or some other framework to use assertions that make our tests more descriptive.

Here’s an example:

const assert = require('chai').assert;

it('should test some things visually', () => {
  ...
  let returnedContents = JSON.stringify(browser.checkElement('<my web element>'));
  console.log(returnedContents);
  assert.include(returnedContents, '<serialized text to assert for>');
});

In the above code snippet, near the end of the test, we call 'checkElement()' to do a visual comparison of the contents of the given web element selector, then convert the object returned by 'checkElement()' into a string. Afterward, we assert that some expected text is present within the stringified object our comparison returned.

In the case of the text assertion, we would want to assert that a successful match message is contained within the returned object. This is because the 'checkElement()' method, while it may return data that indicates a test failure, will not on its own trigger an exception that would appropriately fail our test should an image comparison mismatch occur.
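
For example, a hedged sketch of such an assertion might look like the following. It assumes the comparison results expose isWithinMisMatchTolerance and misMatchPercentage fields, one entry per configured viewport; double-check the property names against the version of the visual-regression service you have installed.

const assert = require('chai').assert;

it('should match the reference screenshots', () => {
  // checkElement() returns one comparison result per configured viewport/orientation.
  const results = browser.checkElement('#header');

  results.forEach((result) => {
    // Explicitly fail the test when a comparison exceeds the mismatch tolerance.
    assert.isTrue(
      result.isWithinMisMatchTolerance,
      `Visual mismatch of ${result.misMatchPercentage}% detected for #header`
    );
  });
});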

Adding Mobile

going-mobile

Oooh – someone downloaded the pancake app!

By combining WebdriverIO with the visual-regression service and a browser testing service like Browserstack, we can create tests that run against real mobile devices. To do this, we will need to make some changes to our WebdriverIO config. Try the following:

  1. Make a copy of your wdio.conf.js
  2. Name the copy wdio.mobile.conf.js
  3. Edit your package.json file
  4. Copy the 'test' key/value pair under 'scripts' and paste it on a new line, renaming it 'test:mobile'
  5. Point the test:mobile script at the wdio.mobile.conf.js config file and save your changes (the resulting scripts section should look roughly like the sketch after this list)
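
After these steps, the scripts section of your package.json should look roughly like the following sketch; the exact wdio invocation depends on how your original 'test' script was defined, so treat the paths as assumptions.

"scripts": {
  "test": "./node_modules/.bin/wdio wdio.conf.js",
  "test:mobile": "./node_modules/.bin/wdio wdio.mobile.conf.js"
}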

Next, we need to edit the contents of the wdio.mobile.conf.js script to run tests only against mobile devices. The reason for adding a whole new test config and script declaration is that, due to the way mobile devices behave, there are some settings required for mobile browser testing with WebdriverIO and Browserstack which are incompatible with running tests against desktop browsers.

Edit the code block at the top of the mobile config file, changing it to the following:

var path = require('path');
var VisualRegressionCompare = require('wdio-visual-regression-service/compare');
function getScreenshotName(basePath) {
  return function(context) {
    var type = context.type;
    var testName = context.test.title;
    var browserVersion = parseInt(context.browser.version, 10);
    var browserName = context.browser.name;
    var browserOrientation = context.meta.orientation;
    return path.join(basePath,
    `${testName}_${browserName}_v${browserVersion}_${browserOrientation}_mobile.png`);
  };
}

Note that we've removed the declarations for height and width dimensions. As Browserstack allows us to test on actual mobile devices, defining viewport constraints is not only unnecessary, it will cause our tests to fail to execute (the height and width dimension object can't be passed to webdriver as part of a configuration for a real mobile device).

Next, update the visual-regression service’s orientation configuration near the bottom of the mobile config files to the following:

orientations: ['portrait', 'landscape'],

Since applications using responsive design can sometimes break when moving from one orientation to another on mobile devices, we'll want to run our tests in both orientation modes.

Stating this behavior in our configuration above will trigger our tests to automatically switch orientations and retest.

Finally, we’ll need to update our browser capability settings in this config file:

capabilities: [
{
  device: 'iPhone 8',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - iPhone 8',
},
{
  device: 'iPad 6th',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - iPad',
},
{
  device: 'Samsung Galaxy S9',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - Galaxy S9',
},
{
  device: 'Samsung Galaxy Note 8',
  'browserstack.local': false,
  'realMobile': true,
  project: 'My project - Note 8',
},
]

In the code above, the ‘realMobile’ descriptor is necessary to run tests against modern mobile devices through Browserstack.  For more information on this, see Browserstack’s documentation.

Once your change is saved, try running your tests on mobile by doing:

npm run test:mobile

Examine the image outputs of your tests and compare them to the results from your desktop test run.  Now you should be able to run tests for your app’s UI across a wide range of browsers (mobile and desktop).

Taking Things to Jenkins

You could include the project we've put together so far in the codebase of your app (just copy the dev dependencies from your package.json, along with your specs and configs, into the existing project).

Another option is to use this as a standalone testing tool to apply some base screen-shot based verification of your app.

In the following example, we'll go through the steps you can take to set up a project like this as a resource in Jenkins to test a given application and send the results to members of your team.

Consider the value of being able to generate a series of targeted screen shots of elements of your app per build and send them to team members like a product manager or UX designer.  This isn’t necessarily ‘heavy lifting’ from a dev ops standpoint but automating any sort of feedback for your app will trend toward delivering a better app, faster.

Assuming you have your project hosted in GitHub, that you have administrative access to Jenkins, and that your app is hosted in an environment Jenkins can access (your organization's Amazon S3 bucket, hosted from a Docker image, etc.):

I. Create a new Jenkins job

jenkins-proj-name

* Create a new Jenkins task within the same space as the job that builds and ships your web app (should one exist). Assign it a name.

II. Configure the job to pull your image testing app from GitHub

jenkins-set-repo

* Go to the source control configuration portion of the job.  Enter the info for your repository for the test app you’ve built.

III. Set the project to build periodically

jenkins-set-trigger

* Within the build settings of the job, choose the build periodically option and configure your job’s frequency.

Ideally, you would set this job to trigger once the task that builds and ‘ships’ your app completes successfully (thus, executing your tests every time a new version of the app is published).

Alternatively, to run periodically at a given time, enter something like '0 23 * * *' into Jenkins' cron configuration field to set your job to run once every day (in this case, at 11 PM, relative to your server's configuration).

IV. Build your shell execution script

jenkins-shell-script

* Create a build step for your task and choose the shell script option. You can embed your npm test script execution here.

V. Collect your artifacts

From the list of post-build options, choose the 'archive artifacts' option. In the available text field, enter the path to the generated screen shots you wish to capture. Note that this path may differ depending on how your Jenkins server is configured. If you're having trouble pinpointing the exact path to your artifacts, run the 'pwd' command in your shell script step to have the job list its working directory in the console output, then work from there.

jenkins-archive

VI. Sending email notifications

jenkins-email-config

* Lastly, choose the advanced email notification option from the post-build options menu.

Enter an email address or list of addresses you wish to contact once this job completes. Fill out your email settings (subject line, body, etc.) and be sure to enter the path to the screen resources you wish to attach to a given email.

Save your changes and you are ready to go!

This job will run on a regular basis, execute a given set of UI specific tests and send screen capture information to inquiring team members who would want to know.

You can create more nuanced runs by using Jenkins' clone feature to copy this job and altering it to run your mobile-specific tests, diversifying your test runs.

Further Reading

If you’re looking to dive further into visual regression testing, there are several code examples online worth reviewing.

Additionally, there are other visual regression testing services worth investigating such as Percy.IO, Screener.IO and Fluxguard.

Looking Good While Testing: Automated Testing With a Visual Regression Service

A lot of (virtual) ink has been spilled on this blog about automated testing (no, really). This post is another in a series of dives into different automated testing tools and how you can use them to deliver a better, higher-quality web application.

Here, we’re going to focus on tools and services specific to ‘visual regression testing‘ – specifically cross-browser visual testing of an application front end.

What?

By visual regression testing, we mean regression testing applied specifically to how an app's appearance may have changed across browsers or over time, as opposed to its functional behavior.

Why?

One of the most common starting points in testing a web app is to simply fire up a given browser, navigate to the app in your testing environment, and note any discrepancy in appearance (“oh look, the login button is upside down. Who committed this!?”).

american_psycho

A spicy take on how to enforce code quality – we won't be going here.

The strength of visual regression testing is that you’re testing the application against a very humanistic set of conditions (how does the application look to the end user versus how it should look). The drawback is that doing this is generally time consuming and tedious.  But that’s what we have automation for!

hamburger_ad_vs_reality

How our burger renders in Chrome vs how it renders in IE 11…

How?

A wise person once said, ‘In software automation, there is no such thing as a method call for “if != ugly, return true”‘.

For the most part, this statement is true. There really isn't a 'silver bullet' for the software automation problem of fully automating the testing of your web application's appearance across a given browser support matrix. At least, not without some caveats.

The methods and tools for doing so can run afoul of at least one of the following:

  • They’re Expensive (in terms of time, money or both)
  • They’re Fragile (tests emit false negatives, can be unreliable)
  • They’re Limited (covers only a subset of supported browsers)

faberge_egg

Sure, you can support delicate tools. Just keep in mind the total cost.

 

Tools

We’re going to show how you can quickly set up a set of tools using WebdriverIO, some simple JavaScript test code and the visual-regression-service module to create snapshots of your app front end and perform a test against its look and feel.

Setup

Assuming you already have a web app ready for testing in your choice of environment (hopefully not in production) and that you are familiar with NodeJS, let’s get down to writing our testing solution:

interesting_man_tests_code

This meme is so old, I can't even…

1. From the command line, create a new project directory and do the following in order:

  • ‘npm init’ (follow the initialization prompts – feel free to use the defaults and update them later)
  • 'npm install --save-dev webdriverio'
  • 'npm install --save-dev wdio-visual-regression-service'
  • 'npm install --save-dev chai'

2. Once you’ve finished installing your modules, you’ll need to configure your instance of WebdriverIO. You can do this manually by creating the file ‘wdio.conf.js’ and placing it in your project root (refer to the WebdriverIO developers guide on what to include in your configuration file) or you can use the wdio automated configuration script.

3. To quickly configure your tools, kick off the automated configuration script by running ‘npm run wdio’ from your project root directory.  During the configuration process, be sure to select the following (or include this in your wdio.conf.js file if you’re setting things up manually):

  • Under frameworks, be sure to enable Mocha (we’ll use this to handle things like assertions)
  • Under services be sure to enable the following:
    • visual-regression
    • Browserstack (we’ll leverage Browserstack to handle all our browser requests from WebdriverIO)

Note that in this case, we won’t install the selenium standalone service or any local testing binaries like Chromedriver. The purpose of this exercise is to quickly package together some tools with a very small footprint that can handle some high-level regression testing of any given web app front end.

Once you have completed the configuration script, you should have a wdio.conf.js file in your project configured to use WebdriverIO and the visual-regression service.

Next, we need to create a test.

Writing a Test

First, make a directory within your project’s main source called tests/. Within that directory, create a file called homepage.js.

Set the contents of the file to the following:

describe('home page', () => {
  beforeEach(function () {
    browser.url('/');
  });

  it('should look as expected', () => {
    browser.checkElement('#header');
  });
});

That’s it. Within the single test function, we are calling a method from the visual-regression service, ‘checkElement()’. In our code, we are providing the ID, ‘header’ as an argument but you should replace this argument with the ID or CSS selector of a container element on the page you wish to check.

When executed, WebdriverIO will open the root URL path it is provided for our web application then will execute its check element comparison operation. This will generate a series of reference screen shots of the application. The regression service will then generate screen shots of the app in each browser it is configured to test with and provide a delta between these screens and the reference image(s).

A More Complex Test:

You may have a need to articulate part of your web application before you wish to execute your visual regression test. You also may need to execute the checkElement() function multiple times with multiple arguments to fully vet your app front end’s look and feel in this manner.

Fortunately, since we are simply inheriting the visual regression service's operations through WebdriverIO, we can combine WebdriverIO-based method calls within our tests to manipulate and verify our application:

describe('home page', () => {
  beforeEach(function () {
    browser.url('/');
  });

  it('should look as expected', () => {
    browser.waitForVisible('#header');
    browser.checkElement('#header');
  });

  it('should look normal after I click the button', () => {
    browser.waitForVisible('.big_button');
    browser.click('.big_button');
    browser.waitForVisible('#main_content');
    browser.checkElement('#main_content');
  });

  it('should have a footer that looks normal too', () => {
    browser.scroll('#footer');
    browser.checkElement('#footer');
  });
});

Broad vs. Narrow Focus:

One of several factors that can add to the fragility of a visual test like this is attempting to account for minor changes in your visual elements. This can be a lot to bite off and chew at once.

Attempting to check the content of a large and heavily populated container (e.g. the body tag) will likely surface so many possible variations across browsers that your test will always throw an exception. Conversely, narrowing your tests' focus to something very marginal (e.g. a single instance of a single button) may target an element that your front end developers' code never touches, and thus you may miss crucial changes to your app UI.

flight_search_results

This is waaay too much to test at once.

The visual-regression service’s magic is in that it allows you to target testing to specific areas of a given web page or path within your app – based on web selectors that can be parsed by Webdriver.

price_link

And this is too little…

Ideally, you should be choosing web selectors with a scope of content that is not too large nor too small but in between. A test that focuses on comparing content of a specific div tag that contains 3-4 widgets will likely deliver much more value than one that focuses on the selector of a single button or a div that contains 30 widgets or assorted web elements.

Alternately, some of your app front end may be generated by templating or scaffolding that never received updates and is siloed away from code that receives frequent changes from your team. In this case, marshalling tests around these aspects may result in a lot of misspent time.

menu_bar

But this is just about right!

Choose your area of focus accordingly.

Back to the Config at Hand:

Before we run our tests, let’s make a few updates to our config file to make sure we are ready to roll with our initial homepage verification script.

First, we will need to add some helper functions to facilitate screenshot management. At the very top of the config file, add the following code block:

var path = require('path');
var VisualRegressionCompare = require('wdio-visual-regression-service/compare');

function getScreenshotName(basePath) {
  return function(context) {
    var type = context.type;
    var testName = context.test.title;
    var browserVersion = parseInt(context.browser.version, 10);
    var browserName = context.browser.name;
    var browserViewport = context.meta.viewport;
    var browserWidth = browserViewport.width;
    var browserHeight = browserViewport.height;
 
    return path.join(basePath, `${testName}_${browserName}_v${browserVersion}_${browserWidth}x${browserHeight}.png`);
  };
}

This function will be utilized to build the paths for our various screen shots we will be taking during the test.

As stated previously, we are leveraging Browserstack with this example to minimize the amount of code we need to ship (given we would like to pull this project in as a resource in a Jenkins task) while allowing us greater flexibility in which browsers we can test with. To do this, we need to make sure a few changes in our config file are in place.

Note that if you are using a different browser provisioning service (SauceLabs, Webdriver's grid implementation, etc.), see WebdriverIO's online documentation for how to set up your wdio configuration for your respective service.

Open your wdio.conf.js file and make sure this block of code is present:

user: process.env.BSTACK_USERNAME,
key: process.env.BSTACK_KEY,
host: 'hub.browserstack.com',
port: 80,

This allows us to pass our Browserstack authentication information into our wdio script via the command line.

Next, let’s set up which browsers we wish to test with. This is also done within our wdio config file under the ‘capabilities’ object. Here’s an example:

capabilities: [
{
  browserName: 'chrome',
  os: 'Windows',
  project: 'My Project - Chrome',
  'browserstack.local': false,
},
{
  browserName: 'firefox',
  os: 'Windows',
  project: 'My Project - Firefox',
  'browserstack.local': false,
},
{
  browserName: 'internet explorer',
  browser_version: 11,
  project: 'My Project - IE 11',
  'browserstack.local': false,
},
],

Where to Put the Screens:

While we are here, be sure you have set up your config file to specifically point to where you wish to have your screen shots copied to. The visual-regression service will want to know the paths to 4 types of screenshots it will generate and manage:

lots_a_screens

Yup… Too many screens

References: This directory will contain the reference images the visual-regression service will generate on its initial run. This will be what our subsequent screen shots will be compared against.

Screens: This directory will contain the screen shots generated per browser type/view by tests.

Errors: If a given test fails, an image will be captured of the app at the point of failure and stored here.

Diffs: If a comparison performed by the visual-regression service between an element from a browser execution and the reference images results in a discrepancy, a 'heat-map' image of the difference will be captured and stored here. Consider the contents of this directory to be your test exceptions.

Things Get Fuzzy Here:

fozzy

Fuzzy… Not Fozzy

Finally, before kicking off our tests, we need to enable our visual-regression service instance within our wdio.conf.js file. This is done by adding a block of code to our config file that instructs the service on how to behave. Here is an example of the code block taken from the WebdriverIO developer guide:

visualRegression: {
  compare: new VisualRegressionCompare.LocalCompare({
    referenceName: getScreenshotName(path.join(process.cwd(), 'screenshots/reference')),
    screenshotName: getScreenshotName(path.join(process.cwd(), 'screenshots/screen')),
    diffName: getScreenshotName(path.join(process.cwd(), 'screenshots/diff')),
    misMatchTolerance: 0.20,
  }),
  viewportChangePause: 300,
  viewports: [{ width: 320, height: 480 }, { width: 480, height: 320 }, { width: 1024, height: 768 }],
  orientations: ['portrait'],
},

Place this code block within the ‘services’ object in your file and edit it as needed. Pay attention to the following attributes and adjust them based on your testing needs:

‘viewports’:
This is a JSON object that provides width/height pairs to test the application at. This is very handy if you have an app that has specific responsive design constraints. For each pair, the test will be executed per browser – resizing the browser for each set of dimensions.

‘orientations’: This allows you to configure the tests to execute using portrait and/or landscape view if you happen to be testing in a mobile browser (default orientation is portrait).

‘viewportChangePause’: This value pauses the test in milliseconds at each point the service is instructed to change viewport sizes. You may need to throttle this depending on app performance across browsers.

'misMatchTolerance': Arguably the most important setting here. This floating-point value defines the 'fuzzy factor' the service will use to determine at what point a visual difference between references and screen shots should fail. The default value of 0.10 indicates that a diff will be generated if a given screen shot differs, per pixel, from the reference by 10% or more. The greater the value, the greater the tolerance.

Once you've finished modifying your config file, let's execute a test.

Running Your Tests:

Provided your config file is set to point to the root of where your test files are located within the project, edit your package.json file and modify the ‘test’ descriptor in the scripts portion of the file.

Set it to the following:

'./node_modules/.bin/wdio wdio.conf.js'

To run your test, from the command line, do the following:

'BSTACK_USERNAME=<your Browserstack username> BSTACK_KEY=<your Browserstack key> npm run test -- --baseUrl=<your app's base URL>'

Now, just sit back and wait for the test results to roll in. If this is the first time you are executing these tests, the visual-regression service can fail while trying to capture initial reference images for the various browsers via Browserstack. You may need to increase your test's global timeout for the first run, or simply re-run your tests in this case.
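
If you do need more headroom on that first run, one option (assuming you selected Mocha during the wdio setup) is to raise the framework timeout in wdio.conf.js; the two-minute value below is just an illustration:

// In wdio.conf.js – give slow first-run reference captures more time to finish.
mochaOpts: {
  ui: 'bdd',
  timeout: 120000, // two minutes per test; tune this for your environment
},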

Reviewing Results:

If you're used to standard JUnit or Jest-style test execution output, you won't necessarily see similar test output here.

If there is a functional error present during a test (an object you are attempting to inspect isn’t available on screen) a standard Webdriver-based exception will be generated. However, outside of that, your tests will pass – even if a discrepancy is visually detected.

However, examine the screen shot folder structure we mentioned earlier. Note the number of files that have been generated. Open a few of them to view what has been captured through IE 11 vs. Chrome while testing through Browserstack.

Note that the files have names appended to them descriptive of the browser and viewport dimensions they correspond to.

example1

Example of a screen shot from a specific browser

Take note of whether the 'Diff' directory has been generated. If so, examine its contents. These are your test results – specifically, your test failures.

example2

Example of a diff'ed image

There are plenty of other options to explore with this basic set of tooling we’ve set up here. However, we’re going to pause here and bask in the awesomeness of being able to perform this level of browser testing with just 5-10 lines of code.

Is there More?

This post really just scratches the surface of what you can do with a set of visual regression test tools.  There are many more options to use these tools such as enabling mobile testing, improving error handling and mating this with your build tools and services.

We hope to cover these topics in a bit more depth in a later post.   For now, if you’re looking for additional reading, feel free to check out a few other related posts on visual regression testing here, here and here.

Auditing your Web App for Accessibility with Lighthouse

If you’ve followed our blog for some time, you’ve likely encountered posts detailing how to engage in various kinds of software testing, from performance to data-driven to security and more.

This post continues that trend with a focus on testing your site for accessibility.

What is Accessibility?

All Access Badge

If you are unfamiliar with the concept, accessibility, within the realm of web applications is a term that covers a wide range of concerns. The primary focus is codifying and helping to reinforce design standards that ensure the web is accessible to all – regardless of people’s ability to consume and interact with it.

This is a very large topic and its definition is beyond the scope of this blog post’s intent. If you’re looking for a starting point however, our friends at W3C.org can help you with that: https://www.w3.org/standards/webdesign/accessibility

If you want to cut to the chase, there are two major accessibility standards you should be aware of, WCAG and WAI-ARIA, and they are generally considered the path toward compliance. Both are supported and covered by the W3C as international standards and can be reviewed in full on its site.

You can also visit this link for some related coding examples.

Why Accessibility?

Why should everyone be concerned with accessibility design issues? The short answer can be found in this quote from Sir Tim Berners-Lee, via the W3C:

“The power of the Web is in its universality.
Access by everyone regardless of disability is an essential aspect.”

A more complex answer is that, as the web reaches ever more global ubiquity, we as software creators need to take into account the range of needs of all users accessing the web. This is simply fair play as web technology has matured and become ever more essential to our daily lives. There are also legal ramifications for not taking accessibility into account.

How Accessible is my Site?

If you've closely followed the design principles of WCAG 2.0, then your web application front end should be compliant with most modern accessibility software such as screen readers. However, it's difficult to verify your app's end-user experience in this regard without actually sitting down with a browser, your app, and some third-party tools, and applying some old-fashioned manual testing.

The challenge with this – you’re likely already thinking – is that this is time consuming and labor intensive. If you are on a small software team or have a very tight deadline (or both), this level of manual testing might be a hard (but necessary) pill for your product team to swallow.

There are third-party services that can perform this level of testing for you. A quick Google search can get you started down this path. However, if hiring a third party is either logistically or financially out of reach, there are plenty of software tools that can help take the sting out of accessibility verification.

Chrome is your Friend (Really)?

As of this blog post, Google's Chrome browser accounts for about half of all worldwide browser usage on the web. Chrome, however, is not only popular; it also has a lot of helpful tools under the hood to aid in web development.

In fact, you can use your basic, desktop version of Chrome to perform an accessibility audit of a given site you happen to have open in your browser. To try this trick out, simply do the following:

  • Open your browser to a given site you wish to audit
  • Open the Chrome developer tools console
  • Under the developer console tabs, (e.g. Network, Elements, Console, etc.) search for and click on the Audits tab.
  • Click the Perform Audit option, select Accessibility and click Run Audit to begin.

chrome a11y audit screenshot

Look what we found under the hood!

The audit tool in this case will scan the contents of the site at the current URL you have open for markup patterns that support accessibility standards (e.g. proper element naming conventions, CSS settings, etc.) and flag patterns within the markup on the page that violate these standards. These, of course, are based on the WCAG and ARIA conventions mentioned at the beginning of this post.

This is a powerful but very easy to access and run set of tools for web developers (you can perform several kinds of audits, not just ones for accessibility). So easy in fact, you might be thinking that there must be a way to program these tools (and you would be right).

Illuminating Accessibility with Lighthouse!

Once you started the audit within Chrome, you might have noticed a message stating that something called Lighthouse was starting up, prior to the audit being executed.

Lighthouse in fact, is the engine that performs the audit. It’s also available as a library dependency or even its own stand-alone project.

Lighthouse can be incorporated into your project front end or build cycle in several ways. One of those is to use Lighthouse as a standalone analysis tool to verify the front end of your web app post-deployment (think of this as another set of functional tests with a different, specific kind of output – in this case, an accessibility report).
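
As a hedged sketch of that library route (assuming the lighthouse and chrome-launcher npm packages; the exact return shape varies between Lighthouse versions), a post-deployment accessibility check might look something like this:

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function auditAccessibility(url) {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const results = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ['accessibility'],
  });
  await chrome.kill();
  // results.lhr holds the full report; category scores range from 0 to 1.
  console.log('Accessibility score:', results.lhr.categories.accessibility.score);
}

auditAccessibility('http://localhost:8080');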

You can get Lighthouse running locally via command line quickly by doing the following (assuming you have NodeJS installed):

1. Open a new terminal and do, npm install -g lighthouse
2. Once the latest version of lighthouse is installed, do, lighthouse "url of choice" -GA --output html --output-path ./report.html

Once the above command executes and finishes, you can open your Lighthouse report.html locally in your web browser:

Lighthouse Report

Drill down into the different segments of the report to see which best practices you are following for accessibility (or performance, SEO, or other web standards) and where you should target improvements to your app.

Jenkins Configuration

Jenkins configuration for Lighthouse

To include this report within your CI build process, just add the above Node command line statements to a shell script attached to your job of choice and make sure to archive the results (if configuring this in a Jenkins task, your job config will probably look something like this:)

Accessibility Linting?

By default, the report Lighthouse provides is expansive and may take a while to read through.

Fortunately, there are several options to move this form of testing further upstream. In this case, we are going to look at a small, open source project called Lightcrawler.

Lightcrawler allows you to call specific audit aspects of Lighthouse from within your NodeJS project. The benefit of doing this is that you can more specifically target what you are auditing and perform said audits before you ship your project through to the rest of your CI process.

You can install Lightcrawler into your project by simply doing npm install --save-dev lightcrawler.

Next, given your Node app is configured with a script that will generate and run the app locally, simply add this to your “scripts” portion of your app’s package.json file:

"audit:a11y": "./node_modules/.bin/lightcrawler --url=http://localhost:8080 --config lighthouse-config.json",

With Lightcrawler you can configure an accessibility audit in a variety of ways that can meet your project needs. For the purpose of this post, we will configure Lightcrawler to run only tests classified within an accessibility audit, and to run all tests of that type currently available. Here is our configuration file:

{
  "extends": "lighthouse:default",
  "settings": {
    "crawler": {
      "maxDepth": 1,
      "maxChromeInstances": 5
    },
    "onlyCategories": [
      "Accessibility"
    ],
    "onlyAudits": [
      "accesskeys",
      "aria-allowed-attr",
      "aria-required-attr",
      "aria-required-parent",
      "aria-required-childres",
      "aria-roles",
      "aria-valid-attr-value",
      "audio-caption",
      "button-name",
      "bypass",
      "color-contrast",
      "definition-list",
      "dlitem",
      "document-title",
      "duplicate-id",
      "frame-title",
      "html-has-lang",
      "html-lang-valid",
      "image-alt",
      "input-image-alt",
      "lable",
      "layout-table",
      "link-name",
      "list",
      "list-item",
      "meta-refresh",
      "meta-viewport",
      "object-alt",
      "tabindex",
      "td-headers-attr",
      "th-has-data-cells",
      "valid-lang",
      "video-caption",
      "video-description"
    ]
  }
}

You can copy the JSON above and save it as lighthouse-config.json within your project directory, then point the --config flag in your audit script at that file's path.

Test out your newly added audit script by running npm run audit:a11y. If everything is in working order, you should receive command line output detailing the number of audits executed by Lightcrawler, along with error output for any audits that failed (and, more importantly, why).

Lightcrawler Console Output

This may look similar to linting report output from something like Prettier or ESLint.  Similar to the way Prettier or ESLint are used in pre-commit hooks within your project, you could add a Lightcrawler script to your pre-commit hook in order to better safeguard your application code from possible accessibility violations.

Beyond Auditing and Linting!

This should give you some ideas as to how you can reduce the overall effort required to deliver your app's web front end while still maintaining accessibility. By no means are the handful of tips presented here deep enough to fully cover everything under the accessibility umbrella. That task will require significant time and effort, but, in keeping with the web's initial intent of inclusivity and connectivity, it is one we as software developers should strive toward.

Getting Started with Dependency Security and NodeJS

Internet security is a topic that receives more attention every day. If you're reading this article in early 2018, issues like Meltdown, Spectre and the Equifax breach are no doubt fresh in your mind.

Cybersecurity is a massive concern and can seem overwhelming.  Where do you start?  Where do you go?  What do you do if you’re a small application team with limited resources?  How can you better engineer your projects with security in mind?

Tackling the full gestalt of this topic, including OWASP and more, is beyond the scope of this article. However, if you're on a small front end team, leverage services like AWS, have gone through the OWASP checklist, and are wondering 'what now?', this article is for you (especially if you are developing in NodeJS, which we'll focus on here).

Let’s Talk About Co(de) Dependency:

A tiger and her baby pigs

We're talking about claiming dependents right?

One way we can quickly impact the security of our applications is through better dependency management.  Whether you develop in Python, JavaScript, Ruby or even compiled languages like Java and C#, library packages and modules are part of your workflow.

Sure, it’s 3rd party code but that doesn’t mean it can’t have a major impact on your project (everyone should remember the Leftpad ordeal from just a few years ago).

As it turns out, your app’s dependencies can be your app’s greatest vulnerability.

Managing Code you Didn’t Write:

Below, we'll outline some steps you can take to tackle at least one facet of secure development – dependency management. In this article, we'll cover how you can implement a handful of useful tools into a standard NodeJS project to protect against, remediate, and warn about potential security issues you may have introduced into your application via its dependency stack.

Your code has issues

We've all had to deal with other people's problematic code at one time or another

Using Dependency-Check:

Even if you’re not using NodeJS, if you are building any project that inherits one or more libraries, you should have some form of dependency checking in place for that project.

For NodeJS apps, Dependency-Check is the easiest, lowest-hanging fruit you can likely reach for to better secure your development process.

Dependency-Check is a command-line tool that can scan your project and warn you of any modules being included in your app’s manifest but not actually being utilized in any functioning code (e.g. modules listed in your package.json file but never ‘required’ in any class within your app).  After all, why import code that you do not require?

Installation is a snap.  From your project directory you can:

npm install -g dependency-check

dependency-check package.json --unused

If you have any unused packages, Dependency-Check will warn you via a notice such as:

Fail! Modules in package.json not used in code: chai, chai-http, mocha

Armed with a list of any unused packages in hand, you can prune your application before pushing it to production.  This tool can be triggered within your CI process per commit to master or even incorporated into your project’s pre-commit hooks to scan your applications further upstream.

RetireJS:

‘What you require, you must retire’ as the saying goes – at least as it does when it comes to RetireJS.  Retire is a suite of tools that can be used for a variety of application types to help identify dependencies your app has incorporated that have known security vulnerabilities.

Retire is suited for JavaScript based projects in general, not just the NodeJS framework.

For the purpose of this article, and since we are dealing primarily with the command line, we’ll be working with the CLI portion of Retire.

npm install -g retire

then from the root of your project just do:

retire 

If you are scanning a NodeJS project’s dependencies only, you can narrow the scan by using the following arguments:

retire --nodepath node_modules

By default, the tool will return something like the following, provided you have vulnerabilities RetireJS can identify:

ICanHandlebarz/test/jquery-1.4.4.min.js
↳ jquery 1.4.4.min has known vulnerabilities:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-4969
http://research.insecurelabs.org/jquery/test/
http://bugs.jquery.com/ticket/11290

Retire scans your app and compares packages used within it with an updated online database of packages/package versions that have been identified by the security world as having exploitable code.

Note the scan output above identified that the version of jquery we have imported via the ICanHandlebarz module has a bug that is exploitable by hackers (note the link to CVE-2011-4969 – the exploit in question).

Armed with this list, we can identify any unnecessary dependencies within our app, as well as any exploits that could be utilized given the code we are currently importing and using.

NSP:

There are a multitude of package scanning utilities for NodeJS apps out on the web. Among them, NSP (Node Security Platform) is arguably the most popular.

Where the previously covered Retire is a dependency scanner suited for JavaScript projects in general, NSP, as the name implies, is specifically designed for Node applications.

The premise is identical: this command line tool can scan your project’s package manifest and identify any commonly known web exploits that can be leveraged given the 3rd party packages you have included in your app.

Utilizing NSP and Retire may sound redundant but, much like diagnosing a serious condition with a medical professional, it's often worth seeking a second opinion. It's also equally easy to get up and running with NSP:

npm install -g nsp

nsp check --reporter <Report type (e.g. HTML)>

Running the above within the root of your node application will generate a report in your chosen output format.

Again, wiring this up into a CI job should be straightforward.  You can even perform back-to-back scans using both Retire and NSP.

Snyk:

Yes, this is yet another dependency scanner, and like NSP it is specifically NodeJS-oriented, but with an added twist.

Snyk home page

With Retire and NSP, we can quickly and freely generate a list of vulnerabilities that could be leveraged against our app.  However, what about remediation?  What if you have a massive Node project whose dependency issues you cannot patch or even collate quickly?  This is where Snyk comes in handy.

Snyk can generate detailed and presentable reports (perfect for team members who may not be elbow deep in your project’s code).  The service also provides other features such as automated email notifications and monitoring for your app’s dependency issues.

Typical Snyk report

Cost: Now, these features sound great (they also sound expensive).  Snyk’s pricing depends on your app’s size: for small projects or open source apps, Snyk is essentially free, while teams with larger needs will want to consult their current pricing.

To install and get running with Snyk, first visit https://snyk.io and register for a user account (if you have a private or open source project in GitHub or Bitbucket, you’ll want to link your Snyk account with your code management tool’s account).

Next, from your project root in the command line console:

npm install -g snyk

snyk auth

Read through the console prompt.  Once you receive a success message, your project is now ready to report to your Snyk account.  Next:

snyk test

Once Snyk is finished, you will be presented with a URL that directs you to a report containing all the vulnerable dependency information Snyk was able to find for your project.

The report is handy in that it not only identifies vulnerabilities and the packages they’re associated with but also steps for remediation (e.g. update a given package to a specific version, remove it, etc.).

Snyk has a few more tricks up its sleeve as well.  If you wish to dive headlong into securing your application, simply run:

snyk wizard

This will rescan your application in an interactive wizard mode that walks you through each vulnerable package and prompts you for an action to take on each one.

Note: Snyk’s wizard mode is not recommended for large-scale applications, as working through it can be very time consuming.

It will look like this:

Snyk monitor output

Additionally, you can utilize Snyk’s monitoring feature which, in addition to scanning your application, will send email notifications to you or your team when there are updates to vulnerable packages associated with your project.

Putting it all together in CI:

Of course we’re going to arrange these tools into some form of CI.  And why not?  Given we have a series of easy-to-implement command line tools, adding them to our project’s build process should be straightforward.

Below is an example of a shell script we added as a build step in a Jenkins job to install and run each of these scan tools as well as output some of their responses to artifacts our job can archive:

#Installs our tools
npm install -g snyk
npm install -g dependency-check
npm install -g retire

#Runs dependency-check and outputs results to a text file
dependency-check ./package.json >> depcheck_results.txt

#Runs retire and outputs results to a text file
retire --nodepath node_modules >> retire_results.txt

#Authenticates us with Snyk
snyk auth $SNYK_TOKEN 

#Runs Snyk's monitor task and outputs results to a text file
snyk monitor >> snyk_raw_monitor.txt

#Uses printf, grep and cut CLI tools to retrieve the Snyk report URL
#and saves that URL to a text file
printf http: >> snykurl.txt
grep https snyk_raw_monitor.txt | cut -d ":" -f 2 >> snykurl.txt
cat snykurl.txt

#Don't forget to set your post-job task in your CI service to save the above .txt
#files as artifacts for our security check run.

Note – depending on how you stage your application’s build lifecycle in your CI service, you may want to break the above script up into separate scripts within the job, or into separate jobs entirely.  For example, you may want to scan your app using Dependency-Check on every commit but reserve the NSP or Snyk scans for when the application is built nightly or deployed to your staging environment, in which case you can divide this script accordingly.

Note on Snyk:

In order to use Snyk in your CI instance, you will need to reference your Snyk authentication key when executing this tool (see the API key reference in the script).

This key can be passed as a static string to the job.  Your auth key for Snyk can be obtained by logging in at Snyk.io and retrieving your API key from the account settings page after clicking on the My Account link from Snyk’s main menu.

Next Steps:

This is only the tip of the iceberg when it comes to security and web application development.  Hopefully, this article has given you some ideas and hints of where to start in writing more secure software.  For more info on how to secure your Node-based apps, here’s some additional reading to check out:

Creating a Realtime Reactive App for a collaborative domain

Sampling is a Bazaarvoice product that allows consumers to join communities and claim a limited number of free products. In return, consumers provide honest and authentic product reviews for the products they sample. Products are released to consumers for review at the same time, which causes a rush to claim these products. This is an example of a collaborative domain problem, where many users are trying to act on the same data (as discussed in Eric Evans’ book Domain-Driven Design).

 

Bazaarvoice runs a two-day hackathon twice a year. Employees are free to use this time to explore any technologies or ideas they are interested in. From these hackathon events Bazaarvoice has developed significant new features and products, like our advertising platform and our personalization capabilities. For the Bazaarvoice 2017.2 hackathon, the Belfast team demonstrated a solution to this collaborative domain problem using near real-time state synchronisation.

Bazaarvoice uses React + Redux for our front end web development. These libraries use the concepts of unidirectional data flow and immutable state management, which means there is always one source of truth, the store, and there is no confusion about how to mutate the application state. Typically, we use the side-effect library redux-thunk to synchronise state between server and client via HTTP API calls. The problem here is that the synchronisation is one-way; it is not reactive. The client can tell the server to mutate state, but not vice versa. In a collaborative domain where the data is changing all the time, near real-time synchronisation is critical to ensure a good UX.

To solve this we decided to use Google’s Firebase platform. This solution provides many features that work seamlessly together, such as OAuth authentication, CDN hosting and the Realtime Database. One important thing to note about Firebase: it is a backend-as-a-service, so there was no backend code in this project.

The Realtime Database provides a pub/sub model on nodes of the database, which allows clients to always be up to date with the latest state. With the Firebase Realtime DB there is an important concept not to be overlooked: data can only be accessed by its key (a point query).

You can think of the database as a cloud-hosted JSON tree. Unlike a SQL database, there are no tables or records. When you add data to the JSON tree, it becomes a node in the existing JSON structure with an associated key (Reference)

Hackathon Goals

  1. Realtime configurable UI
  2. Realtime campaign administration and participation
  3. Live Demo to the whole company for both of the above

Realtime configurable UI

During the hackathon demo we showed the app’s style and content being updated via the administration portal, which would allow clients to style the app to suit their branding. These updates were pushed in real time to 50+ client devices from Belfast to Austin (4,608 miles away).

The code to achieve this state synchronisation across clients was deceptively easy!

Given the nature of react, once a style config update was received, every device just ‘reacted’.
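To make that concrete, here is a minimal sketch of the pattern (not the actual hackathon code): a client subscribes to a node of the Realtime Database and dispatches the new value into the Redux store whenever it changes. The database path 'config/style', the action type and the ./store module are all hypothetical.

import firebase from 'firebase/app';
import 'firebase/database';
import { store } from './store'; // your existing Redux store (hypothetical module)

firebase.initializeApp({
  apiKey: '...',
  databaseURL: 'https://your-project.firebaseio.com', // placeholder project
});

// Subscribe to a node of the Realtime DB; every connected client receives
// the new value as soon as an admin writes to this key.
firebase.database().ref('config/style').on('value', (snapshot) => {
  store.dispatch({ type: 'STYLE_CONFIG_UPDATED', payload: snapshot.val() });
});

Because the store remains the single source of truth, the matching reducer simply replaces the style config in state and every subscribed device re-renders; no polling or manual refresh is required.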

 

Realtime campaign administration and participation

In the demo, we added 40 products to the live campaign. This pushed 40 products to the admin screen and to the 50+ mobile apps. Participants were then instructed to claim items.

Admin view

Member view

All members were authenticated via OAuth providers (Facebook, GitHub or Gmail).

To my surprise, the live demo went without a hitch. I’m pleased to add that my team won the hackathon in the ‘Technical’ category.

Conclusion

Firebase was a pleasure to work with: everything worked as expected, and it performed brilliantly in the live demo, even on their free tier. The patterns used in Firebase are a little unconventional for more orthodox engineers, but if your objective is rapid development, Firebase is unparalleled by any other platform. The Firebase Realtime Database produced a great UX for running Sampling campaigns. While Firebase will not be used in the production product, it provided great context for discussions on the benefits and possibilities of realtime data synchronisation.

Some years ago, I would have maintained that web development was the wild west of software engineering; solutions were being developed without discipline and were lacking a solid framework to build on. It just didn’t seem to have any of the characteristics I would associate with good engineering practices. Fast forward to today, we now have a wealth of tools, libraries and techniques that make web development feel sane.

In recent years front-end developers have embraced concepts like unidirectional dataflows, immutability, pure functions (no hidden side-effects), asynchronous coding and concurrency over threading. I’m curious to see if these same concepts gain popularity in backend development as Node.js continues to grow as a backend language.

Event Stream Modeling

Recently, during a holiday lull, I decided to look at another way of modeling event stream data (for the purposes of anomaly detection).

I’ve dabbled with (simplistic) event stream models before but this time I decided to take a deeper look at Twitter’s anomaly detection algorithm [1], which in turn is based (more or less) on a 1990 paper on seasonal-trend decomposition [2].

To round it all off, and because of my own personal preference for on-line algorithms with minimal storage and processing requirements, I decided to blend it all with my favorite on-line fading-window statistics math.  On-line algorithms operate with minimal state and no history, and are good choices for real-time systems or systems with a surplus of information flowing through them (https://en.wikipedia.org/wiki/Online_algorithm).
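For concreteness, the fading-window flavor of mean assumed throughout this post can be written as a single update rule (this is my sketch of the idea, not code lifted from the repo):

\bar{x}_t = (1 - \alpha)\,\bar{x}_{t-1} + \alpha\,x_t, \qquad 0 < \alpha \le 1

Here alpha plays the role of the window length (a smaller alpha behaves like a longer, slower window, roughly alpha = 2/(N+1) for an N-sample window), and only the current estimate is stored, never the history.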

The Problem

The high-level overview of the problem is this: we have massive amounts of data in the form of web interaction events flowing into our system (half a billion page views on Black Friday, for example), and it would be nice to know when something goes wrong with those event streams. In order to recognize when something is wrong we have to be able to model the event stream to know what is right.

A typical event stream consists of several signal components:  a long-term trend (slow increase or decrease in activity over a long time scale, e.g. year over year), cyclic activity (regular change in behavior over a smaller time scale, e.g. 24 hours), noise (so much noise), and possible events of interest (which can look a lot like noise).

This blog post looks at extracting the basic trend, cycle, and noise components from a signal.

Note: The signals processed in this experiment are based on real historical data, but are not actual, current event counts.

The Benefit

The big payback of having a workable model of your system is that when you compare actual system behavior against expected behavior, you can quickly identify problems — significant spikes or drops in activity, for example.  These could be from DDOS attacks, broken client code, broken infrastructure, and so forth — all of which are better to detect sooner than later.

Having accurate models can also let you synthesize test data in a more realistic manner.

The Basic Model

Trend

A first approximation of modeling the event stream comes in the form of a simple average across the data, hour-by-hour:

The blue line is some fairly well-behaved sample data (events per hour) and the green lines are averages across different time windows. Note how the fastest 1-day average takes on the shape of the event stream but is phase-shifted; it doesn’t give us anything useful to test against. The slowest 21-day average gives us a decent trend line for the data, so if we wanted to do a rough de-trending of the signal we could subtract this mean from the raw signal.  Twitter reported better results using the median value for de-trending but since I’m not planning on de-trending the signal in this exploration, and median calculations don’t fit into my lightweight on-line philosophy, I’m going to stay with the mean as trend.

Cycle

While the fast 1-day average takes on the shape of the signal, it is phase shifted and is not a good model of the signal’s cyclic nature. The STL seasonal-trend technique (see [2]) models the cyclic component of the signal (“seasonal” in the paper, but our “season” is a 24-hour span) by creating multiple averages of the signal, one per cycle sub-series.  What this means for our event data is that there will be one average for event counts from 1:00am, another average for 2:00am, and so forth, for a total of 24 running averages:

The blue graph in Figure 2a shows the raw data, with red triangles at “hour 0”… these are all averaged together to get the “hour 0” averages in the green graph in Figure 2b.  The green graph is the history of all 24 hourly sub-cycle averages.  It is in phase with and closely resembles the raw data, which is easier to see in Figure 2c, where the signal and cyclic averages are superimposed.

A side effect of using a moving-window average is that the trend information is automatically incorporated into the cyclic averages, so while we can calculate an overall average, we don’t necessarily need to.
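In the same on-line spirit, the sub-cycle averages can be maintained with one fading update per sample, touching only the average for the hour that sample belongs to (again a sketch of the approach, not the repo’s exact code):

C_{h(t),t} = (1 - \alpha)\,C_{h(t),t-1} + \alpha\,x_t, \qquad h(t) = \text{hour-of-day of sample } t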

Noise

The noise component is the signal minus the cyclic model and the trend. Since our online cyclic model incorporates trending already, we can simply subtract the two signals from Figure 2c to give us the noise component, shown in Figure 3:
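In symbols, with the sub-cycle averages C from the sketch above, the noise (residual) at each sample is just:

r_t = x_t - C_{h(t),t}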

Some Refinements

Now that we have a basic model in place we can look at some refinements.

Signal Compression

One problem with the basic model is that a huge spike in event counts can unduly distort the underlying cyclic model. A mild version of this can be seen in Figure 4a, where the event spike is clearly reflected in the model after the spike:

By taking a page out of the audio signal compression handbook, we can squeeze our data to better fit within a “standard of deviation”. There are many different ways to compress a signal, but the one illustrated here is arc-tangent compression that limits the signal to PI/2 times the limit factor (in this case, 4 times the standard deviation of the signal trend):
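A sketch of that arc-tangent compression in formula form (exactly which quantity gets compressed, the raw signal or the de-trended residual, is my assumption here):

\tilde{r}_t = L\,\arctan\!\left(\frac{r_t}{L}\right), \qquad L = 4\,\sigma_t, \qquad \left|\tilde{r}_t\right| < \frac{\pi}{2}\,L

where sigma_t is the running standard deviation of the signal about its trend, so the compressed value can never exceed PI/2 times the chosen limit factor.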

There is still a spike in the signal but its impact on the cyclic model is greatly reduced, so post-event comparisons are made against a more accurate baseline model.

Event Detection

Now we have a signal, a cyclic model of the signal, and some statistical information about the signal such as its standard deviation along the hourly trend or for each sub-cycle.

Throwing it all into a pot and mixing, we choose error boundaries that are a combination of the signal model and the sub-cycle standard deviation and note any point where the raw signal exceeds the expected bounds:
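One plausible way to write those boundaries down (the multiplier k is a tunable sensitivity knob, not a value taken from the repo) is to flag any sample falling outside

C_{h(t),t} - k\,\sigma_{h(t),t} \;\le\; x_t \;\le\; C_{h(t),t} + k\,\sigma_{h(t),t}

where C and sigma are the fading sub-cycle mean and standard deviation for the hour in question.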

Event Floor

You may note in Figure 5 how the lower bound goes negative.  This is silly – you can’t have negative event counts. Also, for a weak signal where the lower bound tends to be negative, it would be hard to notice when the signal fails completely – as it’s still in bounds.

It can be useful to enforce an event floor so we can detect the failure of a weak signal. This floor is arbitrarily chosen to be 3% of the cyclic model (though it could also be a hard constant):
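Folding the floor into the lower bound from the previous sketch gives

\text{lower}_t = \max\!\left(C_{h(t),t} - k\,\sigma_{h(t),t},\; 0.03\,C_{h(t),t}\right)

so a signal that drops to zero now falls below the bound and is flagged, even when the statistical lower bound alone would have been negative.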

The End

Playing with data is a ton of fun, so all of the Python code that generated these examples (plus a range of sample data) can be found in GitHub here:

https://github.com/bazaarvoice/event_monitor_test

References

For your reading pleasure:

[1] Automatic Anomaly Detection in the Cloud Via Statistical Learning

[2] STL: A Seasonal-Trend Decomposition Procedure Based on Loess

Telling Your Data To “Back Off!” (or How To Effectively Use Streams)

Foreword

Our Curations engineering team makes heavy use of serverless architecture. While this typically gives us the benefit of reduced costs, flexibility, and rapid development, it also requires us to ensure that our processes will run within the tight memory and lifecycle constraints of serverless instances.

In this article, I will describe an actual case where a scheduled job had started to fail, the discovery of the root cause and the refactor that resolved the issue. I will assume you have at least a rudimentary knowledge of Node JS and Amazon Web Services.

Content Export

Months previously, we built a scheduled job using AWS Lambdas that would kick off an export of our social content every 24 hours. In a nutshell, its purpose was to:

  1. Query for social content documents stored in a MongoDB server.
  2. Transform each document to a row in CSV format.
  3. Write out the CSV rows to a file in S3.

The resulting CSV files were meant for ingestion into an analytics tool to measure web site conversion impact.

It all worked fine for months. Then recently, this job began to fail.

Drowning in Content

For your viewing pleasure, here is an excerpted, condensed section of the main code:

function runExport(query) {
  MongoClient.connect(MONGODB_URL)
    .then((db) => {
      let exportCount = 0;
      db
        .collection('content')
        .find(query)
        .forEach(
          // iterator function to perform on each document
          (content) => {
            // transform content doc to a csv row, then push to S3 writer
            s3.write(csvTransform(content));
            exportCount++;
          },
          // done function, no more documents
          () => {
            db.close();
            s3.close();
            logger.info(`documents exported: ${exportCount}`);
          }
        );
    });
}

Each document from the database query is transformed into a CSV row, then pushed to our S3 writer which buffered all content until an s3.close(). At this point, the CSV data would be flushed out to the S3 destination bucket.

It was obvious from this implementation that heap usage would grow unbounded, as documents from the database would be loaded into resident memory as CSV data. As we aggregated content over the previous months, the data pushed into our export process began to generate frequent “out of heap memory” errors in our logs.

Memory Profile

To gain visibility into the memory usage, I wrote a quick and dirty MemProfiler class whose purpose was to sample memory usage using process.memoryUsage(), collecting the maximum heap used over the process lifetime.

class MemProfiler {
  constructor(collectInterval) {
    this.heapUsed = {
      min : null,
      max : null,
    };
    if (collectInterval) {
      this.start(collectInterval);
    }
  }

  start(collectInterval = 500) {
    this.interval = setInterval(() => this.sample(), collectInterval);
  }

  stop() {
    if (this.interval) {
      clearInterval(this.interval);
    }
  }

  reset() {
    this.stop();
    this.heapUsed = {
      min : null,
      max : null,
    };
  }

  sample() {
    const mem = process.memoryUsage();
    if (this.heapUsed.min === null || mem.heapUsed < this.heapUsed.min) {
      this.heapUsed.min = mem.heapUsed;
    }
    if (this.heapUsed.max === null || mem.heapUsed > this.heapUsed.max) {
      this.heapUsed.max = mem.heapUsed;
    }
  }
}

I then added the MemProfiler to the runExport code:

function runExport(query) {
  memProf.start();
  MongoClient.connect(MONGODB_URL)
    .then((db) => {
      let exportCount = 0;
      db
        .collection('content')
        .find(query)
        .forEach(
          (content) => {
            // transform content doc to a csv row, then push to S3 writer
            s3.write(csvTransform(content));
            exportCount++;
            memProf.sample();
          },
          () => {
            db.close();
            s3.close();
            logger.info(`documents exported: ${exportCount}`);
            console.log('heapUsed:');
            console.log(` max: ${memProf.heapUsed.max/1024/1024} MB`);
            console.log(` min: ${memProf.heapUsed.min/1024/1024} MB`);
          }
        );
    });
}

Running the export against a small set of 16,900 documents yielded this result:

documents exported: 16900
heapUsed:
max: 30.32764434814453 MB
min: 8.863059997558594 MB

Then running against a larger set of 34,000 documents:

documents exported: 34000
heapUsed:
max: 36.204505920410156 MB
min: 8.962921142578125 MB

Testing with additional documents increased the heap memory usage linearly as we expected.

Enter the Stream

The necessary code rewrite had one goal in mind: ensure the memory profile was constant, whether a total of one document was processed or a million. Since this was a batch process, we could trade off memory usage for processing time.

Let’s reiterate the primary operations:

  1. Fetch a document
  2. Transform the document to a CSV row
  3. Write the CSV row to S3

If I may present an analogy, we have widgets moving down the assembly line from one station to the next. The conveyor belt on which the widgets are transported is moving at a constant rate, which means the throughput at each station is constant. We could increase the speed of that conveyor belt, but only as fast as can be handled by the slowest station.

In Node JS, streams are the assembly line stations, and pipes are the conveyor belt. Let’s examine the three primary types of streams:

  1. Readable streams: This is typically an upstream data source that feeds data to a writable or transform stream. In our case, this is the MongoDB find query.
  2. Transform streams: This typically takes upstream data, performs a calculation or transformation operation on it and feeds it out downstream. In our case, this would take a MongoDB document and transform it to a CSV row.
  3. Writable streams: This is typically the terminating point of a data flow. In our case this is the writing of a CSV row to S3.

Readable Stream

Conveniently, the MongoDB Node JS driver’s find() function returns a Cursor object that extends the Node JS Readable stream. How convenient!

const streamContentReader = db.collection('content').find(query);

Transform Stream

All we need to do is extend the Transform stream class, and override the _transform() method to transform the content to a CSV row, push it out the other end, and signal upstream that it is ready for the next content.

class CsvTransformer extends Transform {
  constructor(options) {
    super(options);
  }

  _transform(content, encoding, callback) {
    // write row
    const { _id, authoredAt, client, channel, mediaType, permalink, text, textLanguage } = content;
    const csvRow = `${_id},${authoredAt},${client},${channel},${mediaType},${permalink},${text},${textLanguage}\n`;
    this.push(csvRow);
    // signal upstream that we are ready to take more data
    callback();
  }
}

Writable Stream

Let’s purposely mock the S3 Writer for now. It does nothing here, but realistically, we would buffer up incoming data, then flush out the buffer in chunks over the network for best throughput.

class S3Writer extends Writable {
  constructor(options) {
    super(options);
  }

  _write(data, encoding, callback) {
    // TODO: probably use a fixed size buffer
    // that we write to and then flush out to S3 using
    // multipart data upload
    callback();
  }
}

Pipe ‘Em Up

We now create the three streams:

const streamContentReader = db.collection('content').find(query);
const streamCsvTransformer = new CsvTransformer({ objectMode: true });
const streamS3Writer = new S3Writer();

Streams alone do nothing. What makes them useful is connecting the output from one stream to the input of another. In stream terminology, this is called piping, and is accomplished using the stream’s pipe method:

streamContentReader.pipe(streamCsvTransformer).pipe(streamS3Writer)

Although the native pipe method is perfectly functional, I highly recommend using the pump library. In addition to being more readable by passing in an array of streams to pipe together, we can also invoke then() when the pipe has finished, as well as handle close or error events emitted by the streams:

pump([ streamContentReader, streamCsvTransformer, streamS3Writer ])
  .then(() => {
    console.log(`documents exported: ${exportCount}`);
  })
  .catch((err) => console.error(err));

CSV and S3 Write Streams

I purposely did not implement the S3Writer above, because npm is a treasure trove of solutions! The s3-write-stream library will take care of buffering, using multipart upload and handling retries, so we don’t have to get our hands too dirty. And wouldn’t you know it, there is also the csv-write-stream library that will generate properly escaped CSV rows.

As an exercise, you may want to try swapping in s3-write-stream and csv-write-stream yourself; a rough sketch follows below.  Trust me, once you get the hang of streaming, you will enjoy it!
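Here is what that swap might look like, using csv-write-stream in place of our hand-rolled CsvTransformer and a local file standing in for the S3 stream (the module’s default behaviour of inferring the CSV header from the keys of the first object written is from its README as I recall it, so treat this as a starting point rather than a definitive recipe):

const fs = require('fs');
const pump = require('pump');
const csvWriter = require('csv-write-stream');
const { Transform } = require('stream');

// Pick out just the fields we want as columns; csv-write-stream will then
// infer the CSV header from the keys of the first object it receives.
const pickFields = new Transform({
  objectMode: true,
  transform(doc, encoding, callback) {
    const { _id, authoredAt, client, channel, mediaType, permalink, text, textLanguage } = doc;
    callback(null, { _id, authoredAt, client, channel, mediaType, permalink, text, textLanguage });
  },
});

// Using pump's callback form here; streamContentReader is the MongoDB
// cursor from earlier. Swap the file stream for an s3-write-stream upload
// once you have its configuration in place.
pump(
  streamContentReader,
  pickFields,
  csvWriter(),
  fs.createWriteStream('content.csv'),
  (err) => (err ? console.error(err) : console.log('export complete'))
);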

Back Off!

Let’s touch back on the issue of memory usage.

What if a write stream becomes blocked for a period of time? Maybe we are writing data to a third-party service over HTTP and network congestion or service throttling is slowing the write rate. The readable stream will happily pump data in as fast as it can. Won’t that cause increased heap usage as all that data gets backed up and buffered?

The quick answer is “no”. In the implementation of the _write() method of your writable stream, when done with the data chunk (after writing to a file for example), call callback(). This signals to the incoming stream that it is ready to receive the next chunk. You can also provide a highWaterMark to the writable stream constructor to give a specific number of objects to buffer before pausing the incoming stream. This is all handled internally by the Node JS streams implementation. No more unbounded buffering!
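To see backpressure in action, here is a minimal, self-contained sketch: a fast object-mode reader piped into a deliberately slow writer with a small highWaterMark. The one-second delay simply simulates a congested downstream service.

const { Readable, Writable } = require('stream');

// A fast producer: emits 10,000 numbered records as quickly as it can.
let count = 0;
const fastReader = new Readable({
  objectMode: true,
  read() {
    count += 1;
    this.push(count <= 10000 ? { n: count } : null); // null ends the stream
  },
});

// A slow consumer: buffers at most 16 objects (highWaterMark) and only
// signals readiness for the next chunk when callback() fires.
const slowWriter = new Writable({
  objectMode: true,
  highWaterMark: 16,
  write(record, encoding, callback) {
    setTimeout(callback, 1000); // pretend the network is throttling us
  },
});

// pipe() pauses the reader whenever the writer's buffer is full, so heap
// usage stays flat no matter how many records flow through.
fastReader.pipe(slowWriter);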

For a deep dive into the concept of backpressure control, read this great article on Backpressuring in Streams.

Stream ‘Em If You Got ’em!

Our team is now in the process of utilizing Node JS streams whenever we read in volumes of data, process that data, then write out the results. We are now seeing great gains in both stability and maintainability!

References

Node JS Streams API: https://nodejs.org/api/stream.html

Article: Backpressuring in Streams: https://nodejs.org/en/docs/guides/backpressuring-in-streams/

MongoDB Cursor Stream: http://mongodb.github.io/node-mongodb-native/2.2/api/Cursor.html

Pump Module: https://www.npmjs.com/package/pump

CSV Write Stream Module: https://www.npmjs.com/package/csv-write-stream

S3 Write Stream Module: https://www.npmjs.com/package/s3-write-stream

 

Testing for Application Front End Performance with Web Page Test

taco-meter

‘Taco-meter’ – a device used to measure how quickly you can obtain tacos from your current location

If you’ve followed Bazaarvoice’s R&D blog, you’ve probably read some of our posts on web application performance testing with tools like Jmeter here and here.

In this post, we’ll continue our dive into web app performance, this time, focusing on testing front end applications.

API Response Time vs App Usability:

Application UI testing in general can be challenging, as the cost of doing so (in terms of time, labor and money) can become quite expensive.

In our previous posts highlighted above, we touched on testing RESTful APIs – which revolves around sending a set of requests to a target, measuring their response times and checking for errors.

For front-end testing, you could do the same – automate requesting the application’s front-end resources and measure their collective response times.  However, there are additional considerations for front end testing that we must account for, such as cross-browser issues, line speed constraints (i.e. broadband vs. a mobile 3G connection), as well as client-side loading and execution (especially important for JavaScript).

Tools like Jmeter can address some of these concerns, though you may be limited in what can be measured.  Plugins for Jmeter that allow Webdriver API access exist but can be difficult to configure, especially in a CI environment (read through these threads from Stack Overflow on getting headless Webdriver to work in Jenkins for example).

There are 3rd party services available that are specialized for UI-focused load tests (e.g. Blazemeter) but these options can sometimes be expensive.  This may be too much for some teams in terms of time and money – especially if you’re looking for a solution for introductory, exploratory testing as opposed to something larger in scale.

Webpagetest.org:

Webpagetest.org is an open source project backed by multiple partners such as Google, Akamai and Fastly, among others.  You can check out the project’s github page here.

Webpagetest.org is a publicly available service.  Its main goal is to provide performance information for web applications, allowing designers and engineers to measure and improve application speed.

Let’s check out Webpagetest.org’s public facing page – note the URL form field.  We can try out the service’s basic reporting features by pasting the URL of a website or application into the field and submitting the form.  In this case, let’s test the performance of Bazaarvoice’s home page.

Before submitting the form however, note some of the additional options made available to us:

  • Browser type
  • Location
  • Number of tests
  • Connection

webpagetest.org form

Webpagetest.org’s initial submission form

This service allows us to configure our tests to simulate user requests from a slow, mobile, 3G device in the US to a desktop client with a fiber optic connection in Singapore (just to name a few).

Go ahead and submit the test form.  Once the test resolves, you’ll first notice the waterfall snapshot Webpagetest.org returns.  This is a reading of the application’s resources (images, JavaScript execution, external API calls, the works).  It should look very familiar if you have debugged a web site using a browser’s developer tools console.

webpagetest waterfall

Webpagetest.org’s waterfall report

This waterfall readout gives us a visual timeline of all the assets our web page must load for it to be active for the end user, in the order they are loaded, while recording the time taken to load and/or execute each element.

From here, we can determine which resources have the longest load times and the largest physical footprints, giving us a clue as to how we might better optimize the application or web page.

Webpagetest.org’s content breakdown chart

In addition to our waterfall chart, our test results provide many more data points, including but not limited to:

  • Content breakdown by asset type
  • CPU utilization
  • Bandwidth utilization
  • Screen shots and video capture of page load

Armed with this information from our test execution, we can begin to see where we might make adjustments to our site to increase client-side performance.

Getting Programmatic:

Webpagetest.org’s web UI provides some great, easy-to-use features.  However, we can also automate these tests using Webpagetest.org’s API.

In our next couple of steps, we’ll demonstrate how to do just that using Jenkins, some command line tools and Webpagetest.org’s public API, which you can read more about here.

Before Going Forward:

There are a few limitations or considerations regarding using Webpagetest.org and its API:

  • All test results are publicly viewable
  • API usage requires you to submit an email address to generate an API key
  • API requests are limited to 200 test runs per day
  • The maximum number of concurrent test runs allowed is 9

If test result confidentiality or scalability is a major concern, you may want to skip to the end of this article where we discuss using private instances of Webpagetest.org’s service.

Requirements:

 As mentioned above, we need an API key from Webpagetest.org. Just visit https://www.webpagetest.org/getkey.php and submit the form made available there.  You’ll then receive an email containing your API key.

Once you have your API key, let’s set up a test using a CI service.  For this, as in our other test tutorials, we’ll be using Jenkins.

For this walkthrough, we’ll use the following:

  • Webpagetest’s public API
  • Jenkins
  • NodeJS (be sure your Jenkins instance has at least 1 server capable of using NodeJS 6 or above)
  • Webpagetest (a node module that provides a command-line wrapper for their API)
  • Jq-cli-wrapper (a node module that provides jq – a JSON query tool)
  • Wget-improved (a node implementation of the wget CLI tool – skip this if your Jenkins servers already have this binary installed).

Setting Up the Initial Job:

First, let’s get the Jenkins job up and running and make sure we can use it to reach our primary testing tool.

Do the following:

  • Log into Jenkins
  • Navigate to a project folder you wish your job to reside in
  • Click New Item
  • Give the job a name, select Freestyle Project from the menu and click OK

Now that the job is created, let’s set a few basic parameters:

  • Check ‘Discard old builds’
  • Set the Max # of Builds value to 100
  • Check the ‘Provide Node & npm bin/ folder to PATH’ option then choose your desired Node version (if this option is unavailable, see your system administrator).

jenkins-node

Specifying a NodeJs version for Jenkins to use

Next, let’s create a temporary build step to verify our job can reach Webpagetest.org

  • Click the ‘Add Build Step’ menu button
  • Select ‘Execute shell’
  • In the text box that appears, paste in the following curl command:
    • curl http://webpagetest.org -s -f
  • Save changes to your job and click the build option on the main job settings screen

Once the job has executed, click on the build icon and select the console output option.  Review the console output captured by Jenkins.  If you see HTML output captured at the end of the job, congratulations, your Jenkins instance can talk to Webpagetest.org (and its API).

Now we are ready to configure our actual job.

The Task at Hand:  

 The next steps will walk you through creating a test script.  The purpose of this script is as follows:

  • Install a few command line tools in our Jenkins instance when executed
  • Use these tools to execute a test using Webpagetest.org’s API
  • Determine when our test execution is complete
  • Collect and preserve our results in the following manner:
    • A HAR (HTTP Archive) file
    • The entire test result data in JSON format
    • A PNG of our application’s waterfall
  • Filter our results for a specific element’s statistic we wish to know about

Adding Tools:

Click on the configure link for your job and scroll to the shell execution window.  Remove your curl command and copy the following into the window:

npm install -g webpagetest
npm install -g jq-cli-wrapper
npm install -g wget-improved

If you know that your Jenkins environment comes with wget already installed, you can skip the last npm install command above.

Once this portion of the script executes, we’ll now have a command line tool to execute tests against Webpagetest’s API.  This wrapper makes it somewhat easier to work with the API, reducing the amount of syntax we’ll need for this script.

Next in our script, we can declare this command that will kick off our test:

webpagetest test $SITE -k $APIKEY -r $RUNS > test_exec.json
testId=$(cat test_exec.json | jq .data.testId)
testId="${testId%\"}"
testId="${testId#\"}"
echo $testId

This script block calls the API, passing it some variables (which we’ll get to in a minute) then saves the results to a JSON file.

Next, we use the command line tools cat and jq to parse our response file, assigning the testId key value from that file to a shell variable.  We then use some string manipulation to get rid of a few extraneous quotation marks in our variable.

Finally, we print the testId variable’s contents to the Jenkins console log.  The testId variable in this case should now be set to a unique string ID assigned to our test case by the API.  All future API requests for this test will require this ID.

Adding Parameters:

Before we go further, let’s address the 3 variables we mentioned above.  Using shell variables in the confines of the Jenkins task will give us a bit more flexibility in how we can execute our tests.  To finish setting this part up, do the following:

  • In the job configuration screen, scroll to and check the ‘This build is parameterized’ option.
  • From the dropdown menu, select String Parameter
  • In the name field, enter SITE
  • For the default field, enter a URL you wish to test (e.g. http://www.bazaarvoice.com)
  • Repeat this process for the APIKEY and RUNS variables
  • Set the RUNS parameter to a value of 1
  • Set the default value for the APIKEY parameter to your Webpagetest.org API key.

Save changes to your job and try running it.  See if it passes.  If you view the console output from the run, there’s probably not much you’ll find useful yet.  We have more work to do.

site-parameter

Your site parameter should look like this

Your runs parameter should look like this

And your key parameter should look like this

Waiting in Line:

The next portion of our script is probably the trickiest part of this exercise.  When executing a test with the Webpagetest.org web form, you probably noticed that as your test run executes, your test is placed into a queue.

waiting

Cue the Jeopardy! theme…

Depending on current API usage, we won’t be able to tell how long we may be waiting in line for our test execution to finish.  Nor will we know exactly how long a given test will take.

To account for that, next we’ll have to modify our script to poll the API for our test’s status until our test is complete.  Luckily, Webpagetest.org’s API provides a ‘status’ endpoint we can utilize for this.

Add this to your script:

webpagetest status $testId -o status.json
testStatus=$(cat status.json | jq .data.testsCompleted)

Using our Webpagetest API wrapper and jq, we’re posting a request to the API’s status endpoint, with our test ID variable passed as an argument.  The response is then saved to the file status.json.

Next, we use cat and jq to filter the content of status.json, returning the value of the ‘testsCompleted’ key (which is a child of the ‘data’ node) within that file.  This gets assigned to a new variable – testStatus.

Still Waiting in Line:

 

long line

Just like going to the DMV

The response returned when polling the status endpoint contains a key called ‘status’ with possible values of ‘In progress’ or ‘Completed’.  This seems like the perfect value to check to monitor our test status.  However, in practice, this key-value pair can be set prematurely – returning a value of ‘Completed’ when not every run within our test set has actually finished.  This can result in our script attempting to retrieve an unfinished test result – which would be partial or empty.

After some trial and error, it turns out that reading the ‘testsCompleted’ key from the status response is a more accurate read as to the status of our tests.  This key contains an integer value equal to the number of test runs you specified that have completed execution.

Add this to your script:

while [ "$testStatus" != "$RUNS" ]
do
sleep 10
webpagetest status $testId -o status.json
testStatus=$(cat status.json | jq .data.testsCompleted)
done

This loop compares the number of tests completed with the number of runs we specified via our Jenkins string variable, $RUNS.

While those two variables are not equal, we request our test’s status, overwrite the status.json file with our new response, filter that file again with jq and assign our updated ‘testsCompleted’ value to our shell variable.

Note that the 10 second sleep command within the loop is optional.  We added it so as not to overload the API with a request for our test’s status every second.

Once the condition is met, we know that the number of tests runs we requested should be the same number completed. We can now move on.

Getting The Big Report:

Once we have passed our while loop, we can assume that our test has completed.  Now we simply need to retrieve our results:

webpagetest har $testId -o results.har

webpagetest results $testId -o results.json

Webpagetest’s API provides multiple options to retrieve result information for any given test.  In this case, we are asking the API to give us the test results via HAR format.

Note – if you wish to deliver your test results in HAR format, you will need a 3rd party application or service to view .har files.  There are several options available including stand-alone applications such as Charles Proxy and Har Viewer.

Additionally, we are going to retrieve the test results from our run using the API’s results endpoint and assign this to a JSON file – results.json.

Now would be a great time to save the changes we’ve made to your test.

Getting the Big Picture:

Next, we want to retrieve the web application’s waterfall image – similar to what was returned when executing our test via the Webpagetest.org web UI.

To do so, add this to your script:

waterfallImage=$(cat results.json | jq '.data.runs."1".firstView.images.waterfall')
waterfallImage="${waterfallImage%\"}"
waterfallImage="${waterfallImage#\"}"
nwget $waterfallImage -O waterfall.png

Since we’ve already retrieved our test results and saved them to the results.json file, we simply need to filter this file for a key-value pair that contains the URL for where our waterfall snapshot resides online.

In the results.json file, the main parent that contains all test run information is the ‘data’ node.  Within that node, our results are divided up amongst each test run.  Here we will further filter our results.json object based on the 1st test run.

Within that test run’s ‘firstView’ node, we have an images node that contains our waterfall image.

We then assign the value of that node to another shell variable (and then use some string manipulation to trim off some unnecessary leading and ending quotation characters).

Finally, as we installed the nwget module at the beginning of this script, we invoke it, passing the URL of our waterfall image as an argument and the option to output the result to a PNG file.

Archiving Results:

Upon each execution, we wish to save our results files so that we can build a collection of historical data as we continue to build and test our application.  Doing this within Jenkins is easy.

Just click on the ‘Add Post Build Action’ button within the Jenkins job configuration menu and select ‘Archive the artifacts’ from the menu.

In the text field provided for this new step, enter ‘*.har, results.json, waterfall.png’ and click save.

Now, when you run your test job – once the script succeeds, it will save each instance of our retrieved HAR, waterfall image and results.json file to each respective Jenkins run.

Once you save your job configuration, click the ‘Build with Parameters’ button and try running it.  Your artifacts should be appended to each job run once it completes.

results

Collecting your results in Jenkins

Like a Key-Value Pair in a JSON Haystack:

Next, we’re going to talk about result filtering.  The results.json provided initially by the API is exhaustive in size.  Go ahead and scroll through that file saved from your last successful test run (you will probably be scrolling for a while).  There’s a mountain of data there but let’s see if we can fish some information out of the pile.

The next JSON-relative trick we’re going to show you is how you can filter the results.json from Webpagetest.org down to a specific object you wish to measure statistics for.

The structure of the results.json lists every web element (from .js libraries to images, to .css files) called during the test and enumerates some statistics about it (e.g. time to load, size, etc.).  What if your goal is to monitor and measure statistics about a very specific element within your application (say, a custom .js library you ship with your app)?

needle-in-a-haystack

That’s one way of finding it…

For that purpose, we’ll use cat and jq again to filter our results down to one specific file or element’s results.  Try adding something like this to your script:

cat results.json | jq '.data.runs."1".firstView.requests[] | select(.url | contains(" <a unique identifying string for your web element> "))' > stats.json

This is similar to how we filtered our results to obtain our waterfall image above.  In this case, each test result from Webpagetest.org contains a JSON array called ‘requests’.  Each item in the array is identified by the URL of the corresponding web element.

Our script command above parses the contents of results.json, pulling the ‘requests’ array out of the initial test run, then filtering that array based on the ‘url’ key, provided that key matches the string we provided to jq’s ‘contains’ method.

These filtered results are then output to the stats.json file.  This file will contain the specific test result statistics for your specific web element.

Add stats.json to the list of artifacts you wish to archive per test run.  Now save and run your test again.  Note, you may need to experiment with the arguments passed to JQ’s contains method to filter your results based on your specific needs.

Next Steps:

At this point, we should have a Jenkins job that contains a script that will allow us to execute tests against Webpagetest.org’s public API and retrieve and archive test results in a set of files and formats.

This can be handy in of itself but what if you have team or organization members who need access to some or all of this data but do not have access to Jenkins (for security purposes or otherwise)?

Some options to expose this data to other teams could be:

  • Sync your archived elements to a publicly available bucket in Amazon S3
  • Have a 3rd party monitoring service (Datadog, Sumo Logic, etc.) read and report test findings
  • Send out email notifications if some filtered stat within our results exceeds some threshold

There are quite a few ideas to expand upon for this kind of testing and reporting.  We’ll cover some of these in a future blog post, so stay tuned for that.

Private Testing and Wrap-Up:

If you find this method of performance testing useful but feel a bit limited by Webpagetest.org’s restrictions on their public API (or by the fact that all test results are made public), it is worth mentioning that for something more robust or confidential you can host your own, private instance of the free, open-source service (see Webpagetest.org’s GitHub project for documentation).

Webpagetest.org also has pre-built versions of their app available as Amazon EC2 AMIs for use by anyone with AWS access.  You can find more information on their free AMIs here.

Finally, here’s the script we’ve put together through this post in its entirety:


npm install -g webpagetest
npm install -g jq-cli-wrapper
npm install -g wget-improved

webpagetest test $SITE -k $APIKEY -r $RUNS > test_exec.json
testId=$(cat test_exec.json | jq .data.testId)
testId="${testId%\"}"
testId="${testId#\"}"
echo $testId

webpagetest status $testId -o status.json
testStatus=$(cat status.json | jq .data.testsCompleted)

while [ "$testStatus" != "$RUNS" ]
do
sleep 10
webpagetest status $testId -o status.json
testStatus=$(cat status.json | jq .data.testsCompleted)
done

webpagetest har $testId -o results.har

webpagetest results $testId -o results.json

waterfallImage=$(cat results.json | jq '.data.runs."1".firstView.images.waterfall')
waterfallImage="${waterfallImage%\"}"
waterfallImage="${waterfallImage#\"}"
nwget $waterfallImage -O waterfall.png

cat results.json | jq '.data.runs."1".firstView.requests[] | select(.url | contains(" <a unique identifying string for your web element> "))' > stats.json

We hope this article has been helpful in demonstrating how some free tools and services can be used to quickly stand up performance testing for your web applications.  Please check back with our R&D blog soon for future articles on this and other performance related topics.